# Incident Response

Relevant controls: SC.08.01, SC.08.02, PD.01.15


## 1. Incident Classification

| Severity | Description | Examples |
| --- | --- | --- |
| Critical | Confirmed or highly suspected breach of confidentiality, integrity, or availability with significant impact | Confirmed credential compromise; confirmed exfiltration of user data; complete service outage affecting all MCPs; audit log tampering detected |
| High | Strong indicators of a security incident; impact not yet confirmed | Sustained anomalous API access pattern; OBO auth failure spike beyond the normal baseline; unexpected SSM parameter access in CloudTrail; unknown principal assuming an IAM role |
| Medium | Isolated anomaly or policy violation without a confirmed breach | Single-user auth failure spike; unexpected 4xx/5xx rate on one MCP; failed intrusion attempt (blocked by security groups or IAM); Snyk finding in a deployed image |
| Low | Informational event warranting investigation but unlikely to indicate an active threat | Single anomalous audit log entry; unconfirmed alert; routine security scanner finding |

## 2. Detection

Incidents may be detected via any of the following channels:

### Automated Alerts

Four CloudWatch alarms are provisioned per MCP; each publishes to the `aiconnectors-alarms-{env}` SNS topic, which emails the AI connectors team. See logging-monitoring.md §7 for alarm definitions and thresholds. Potential incident indicators:

| Alarm | Potential incident indicator |
| --- | --- |
| Auth failures | Credential compromise; token replay attack |
| Error rate | Application compromise; dependency failure |
| HTTP 4xx spike | Enumeration attack; broken client integration |
| HTTP 5xx spike | Service disruption; crash under attack |
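
When triaging an alarm, it can help to confirm the alarm state and the SNS wiring from the CLI. A minimal sketch, assuming the alarms share the `aiconnectors` name prefix used by the topic (region and account ID are placeholders):

```bash
# List the per-MCP alarms and their current state.
aws cloudwatch describe-alarms \
  --alarm-name-prefix "aiconnectors" \
  --query 'MetricAlarms[].{Name:AlarmName,State:StateValue}' \
  --output table

# Confirm the team email subscription on the alarms topic.
aws sns list-subscriptions-by-topic \
  --topic-arn "arn:aws:sns:eu-west-1:123456789012:aiconnectors-alarms-prod"
```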

### Manual Detection

| Source | How to query | What to look for |
| --- | --- | --- |
| Audit logs (S3) | `/check-audit-logs` Claude skill | `outcome=error` spikes; unexpected users; off-hours activity; high call volumes |
| CloudWatch Logs | AWS Console or `aws logs filter-log-events` | ERROR entries; OBO failure messages; unexpected request paths |
| AWS CloudTrail | AWS Console or Athena | Unexpected IAM role assumptions; SSM parameter reads outside startup; S3 `DeleteObject` attempts |
| GitHub Security tab | GitHub → Security → Code scanning alerts | Secret scanning alerts; Snyk findings in deployed code |
| AWS Security Hub / Config | AWS Console | Infrastructure compliance drift detected by the AWS Security team |
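
For the CloudWatch Logs row above, a CLI sweep of a recent window might look like the following sketch; the log group name `/ecs/aiconnectors-example-mcp` is an assumed naming convention, so substitute the actual group:

```bash
# Sweep the last 24 hours for ERROR entries and OBO failure messages.
START=$(( ( $(date +%s) - 86400 ) * 1000 ))   # CloudWatch expects epoch milliseconds

aws logs filter-log-events \
  --log-group-name "/ecs/aiconnectors-example-mcp" \
  --start-time "$START" \
  --filter-pattern '?ERROR ?OBO' \
  --query 'events[].[timestamp,message]' \
  --output text
```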

### External Notification

- Global Security Operations (GSO) may proactively notify the team of detected threats: globalsecops@novonordisk.com
- Azure AD sign-in logs (managed by NN IT) may surface suspicious token activity
- User reports: a user may report unexpected data access or behaviour from an AI assistant using an MCP tool

## 3. Escalation Process

1. **Detect** — alert received, anomaly observed, or report received
2. **Assess** — determine severity using section 1; verify whether the event is a confirmed or suspected incident
3. **Report** — notify the IT Infrastructure Manager and the AI connectors team immediately
4. **Escalate** — for Medium and above, escalate to globalsecops@novonordisk.com without delay
5. **Contain** — take immediate containment action (see section 5)
6. **Preserve evidence** — before remediating, snapshot/export all relevant logs (see section 6)
7. **Investigate** — determine scope, root cause, and affected users/data
8. **Remediate** — apply fixes, rotate credentials, patch the vulnerability
9. **Notify data subjects / authority** — if personal data is involved, follow GDPR notification requirements (see privacy-gdpr.md §5)
10. **Review** — hold a post-incident review within 5 business days; update this document and controls as needed

## 4. Contacts

| Role | Name / Team | Contact |
| --- | --- | --- |
| Global Security Operations | GSO | globalsecops@novonordisk.com |
| IT Infrastructure Owner | (fill in — name) | (fill in — email) |
| IT Infrastructure Manager | (fill in — name) | (fill in — email) |
| Data Protection Officer | DPO | dataprotection@novonordisk.com |
| AWS Security team | AWS account contacts | (fill in — contact via NN IT ServiceNow) |
| Azure AD / NN IT Identity team | (fill in) | (fill in — ServiceNow ticket for emergency token revocation) |

## 5. Containment Actions by Incident Type

| Incident type | Immediate containment action |
| --- | --- |
| Azure AD client secret compromised | Disable the secret in Azure AD immediately (portal — no ServiceNow required); run `terragrunt apply` in `infra/initial/mcps/{name}/` to generate a new secret and update SSM; force an ECS redeployment |
| Compromised user account | Coordinate with NN IT to disable the user's Azure AD account; this immediately invalidates all of their active tokens (any cached OBO tokens expire naturally within ~1 hour) |
| Unauthorized IAM role use | Revoke the role's trust policy or session credentials via AWS IAM; investigate CloudTrail to establish scope |
| Suspected data exfiltration via MCP tool | Block the user's Azure AD account; review audit logs for all of their recent tool calls; check the affected downstream data sources for knock-on effects |
| Audit log tampering attempt | The S3 bucket policy denies `DeleteObject` for all principals — confirm via CloudTrail that no records were deleted; if the bucket policy itself was modified, treat as Critical |
| Service outage (suspected attack) | Scale the ECS service to 0 tasks to stop serving traffic while investigating; restore once the root cause is confirmed |
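
For the ECS-side actions above, the underlying CLI calls are roughly as follows; the cluster and service names are assumptions. Per section 6, export the current task's CloudWatch logs before forcing a redeployment:

```bash
# Stop serving traffic during a suspected attack: scale the service to 0 tasks.
aws ecs update-service \
  --cluster "aiconnectors-prod" \
  --service "example-mcp" \
  --desired-count 0

# After rotating a compromised secret in SSM, force a redeployment so new
# tasks start up and read the updated parameter value.
aws ecs update-service \
  --cluster "aiconnectors-prod" \
  --service "example-mcp" \
  --force-new-deployment
```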

## 6. Evidence Preservation

Before taking any remediation action, preserve all relevant evidence. The AI connectors platform's immutable audit trail provides a strong evidentiary baseline.

### Log Sources to Preserve

| Source | How to export | Notes |
| --- | --- | --- |
| Audit logs (S3) | Download NDJSON objects from `nn-aiconnectors-audit-{env}` for the relevant date range | Bucket policy denies deletion; records are tamper-resistant by design |
| CloudWatch Logs | Export the log group to S3 using `aws logs create-export-task` | Capture the relevant time window before any ECS redeployment clears the running instance |
| AWS CloudTrail | Download event history from the CloudTrail console or the S3 trail | Covers all AWS API calls; critical for tracing IAM/SSM access |
| Azure AD sign-in logs | Request an export from NN IT (Azure AD Portal → Sign-in logs) | Shows token issuances and OBO exchanges for the period |
| GitHub Actions logs | Download from the GitHub Actions UI for any relevant workflow runs | CI/CD activity at the time of the incident |
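
A sketch of the two AWS-side exports, with placeholder dates, bucket names, and log group (all assumptions to be replaced with actual values):

```bash
# 1. Copy the relevant audit log objects out of the audit bucket for safekeeping.
#    The date-partitioned prefix is an assumption about the bucket layout.
aws s3 cp "s3://nn-aiconnectors-audit-prod/2024/05/01/" ./evidence/audit-logs/ \
  --recursive

# 2. Export the MCP's CloudWatch log group to an evidence bucket.
#    --from/--to are epoch milliseconds; the destination bucket must grant
#    the CloudWatch Logs service write access.
aws logs create-export-task \
  --log-group-name "/ecs/aiconnectors-example-mcp" \
  --from 1714521600000 \
  --to 1714608000000 \
  --destination "evidence-export-bucket" \
  --destination-prefix "incident-2024-05-01"
```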

### Preservation Rules

- Do not force-redeploy ECS tasks until CloudWatch logs for the current task have been exported — container replacement destroys the running instance's in-memory state and may overwrite the log buffer
- Do not modify SSM parameters until the compromised parameter value has been documented (for forensic comparison)
- Do not delete CloudTrail events or CloudWatch log streams — ECS task roles do not have this permission anyway, but operators should avoid it during an investigation
- Label preserved evidence with the incident date, a description, and the name of the person who exported it
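
As a quick integrity check during an investigation, the audit bucket's deny policy and any recent policy changes can be confirmed from the CLI (bucket name follows the naming in the table above; this is a sketch, not a prescribed procedure):

```bash
# Confirm the audit bucket policy still contains the DeleteObject deny.
aws s3api get-bucket-policy \
  --bucket "nn-aiconnectors-audit-prod" \
  --query Policy --output text | python3 -m json.tool

# Check the 90-day CloudTrail event history for bucket policy changes
# (PutBucketPolicy is a management event, so it appears in lookup-events).
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=PutBucketPolicy \
  --max-results 50
```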

## 7. Post-Incident Review

A post-incident review must be completed within 5 business days of a Medium or higher severity incident. The review output is recorded below and stored in `docs/compliance/incident-reviews/`.

| Date | Incident summary | Severity | Root cause | Actions taken | Process improvements |
| --- | --- | --- | --- | --- | --- |