
# Vulnerability and Patch Management

Relevant controls: SC.04.01, SC.04.02, SC.04.03, SC.04.06, SC.04.09, SC.04.10, SC.04.11, SC.04.16


## 1. Scope

| Component | In scope | Managed by | Notes |
|---|---|---|---|
| Docker base image (`python:3.13-slim`) | Yes | AI connectors team | Updated by rebuilding and redeploying via CI/CD |
| Python application dependencies (`uv.lock` per MCP) | Yes | AI connectors team | Pinned in lock file; updated via `uv lock --upgrade` |
| Shared library (`nn-mcp-core`) | Yes | AI connectors team | Path dependency; updated with application code |
| Terraform modules (external `ai-lab-infra`) | Yes | AI connectors team | Pinned to tag (e.g. `v1.0.306`); updated by bumping the tag |
| Terraform providers (AWS, Azure AD) | Yes | AI connectors team | Pinned in `versions.tf`; updated manually |
| GitHub Actions pinned SHAs | Yes | AI connectors team | Actions pinned to commit SHA in workflows |
| AWS managed services (ECS, ALB, DynamoDB, S3) | Partial | AWS | AWS patches underlying infrastructure; no action required for managed services |
| GitHub Actions runner images | No | GitHub (managed) | AWS-hosted runners; patched by GitHub |
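
For orientation, the pins described above live in a handful of files per MCP repository. A minimal sketch of where to find them — the paths and example values are illustrative assumptions, not the exact layout of every repository:

```bash
# Sketch: where each version pin lives (paths and values are assumptions)
grep -n 'FROM python' Dockerfile             # e.g. FROM python:3.13-slim@sha256:<digest>
grep -n 'ref=v'       infra/terragrunt.hcl   # e.g. source = ".../ai-lab-infra//<module>?ref=v1.0.306"
grep -rn 'uses:'      .github/workflows/     # e.g. uses: actions/checkout@<commit-sha>
```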

## 2. Vulnerability Assessment Process

Vulnerabilities are identified through several complementary mechanisms:

### 2.1 Snyk (CI — Pull Request scanning)

The `.github/workflows/security.yml` workflow runs automatically on every pull request to `main`. Four scan types run per changed MCP:

| Scan | What it covers | Threshold |
|---|---|---|
| Snyk Code (SAST) | Application source code (`src/`) — injection, insecure patterns | High/Critical |
| Snyk Open Source (SCA) | Python dependencies via CycloneDX SBOM from `uv.lock` | High/Critical |
| Snyk Container | Docker image built locally — OS packages and layers | High/Critical |
| Snyk IaC | Terraform/Terragrunt configuration in `infra/` | High/Critical |

Results are uploaded as SARIF to the GitHub Security tab and posted as PR comments. Findings at or above the threshold do not automatically block a merge but are flagged for developer review before merging.
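
To reproduce the four scans locally, they map onto the Snyk CLI roughly as follows. This is a hedged sketch, not the exact `security.yml` configuration; the image tag, SBOM file name, and flags are assumptions:

```bash
# Approximate local equivalents of the four PR scans (paths/flags are assumptions)
snyk code test --severity-threshold=high                      # SAST over src/
snyk sbom test --experimental --file=sbom.cdx.json            # SCA via CycloneDX SBOM from uv.lock
docker build -t mcp:scan . \
  && snyk container test mcp:scan --severity-threshold=high   # OS packages and layers
snyk iac test infra/ --severity-threshold=high                # Terraform/Terragrunt config
```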

### 2.2 GitHub Advanced Security

- Secret scanning and push protection — see `secrets-management.md` §4
- Dependency graph — GitHub tracks known CVEs against declared dependencies and surfaces alerts in the Security tab

### 2.3 AWS Account-Level Scanning

The AWS Security team runs continuous compliance and vulnerability checks across both dev and prod accounts using:

- AWS Config — evaluates deployed resources against NN IT security baseline rules (encryption, public access, logging)
- AWS Security Hub — aggregates findings from Config and other sources; monitored centrally by the AWS Security team

Findings from AWS-level scanning are communicated to the AI connectors team via the AWS Security team's standard notification process.
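
Although monitoring is owned by the AWS Security team, active findings can be spot-checked from the team side with the AWS CLI. A sketch, assuming read access to Security Hub in the relevant account; the filter values are illustrative:

```bash
# Sketch: list active high-severity Security Hub findings (filter values are assumptions)
aws securityhub get-findings \
  --filters '{"SeverityLabel":[{"Value":"HIGH","Comparison":"EQUALS"}],
              "RecordState":[{"Value":"ACTIVE","Comparison":"EQUALS"}]}' \
  --max-items 20
```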

### 2.4 Manual Review

Snyk and AWS Security Hub are the primary advisory sources. The team additionally monitors GitHub Dependabot alerts, which appear in the GitHub UI even without an active Dependabot configuration.


## 3. Patch Ownership and Approach

Patches are applied reactively when vulnerabilities are surfaced by Snyk, GitHub Advanced Security, or AWS Security Hub, prioritised by severity. AWS managed services (ECS Fargate, ALB, DynamoDB, etc.) are patched automatically by AWS with no action required from the team.

| Component | Owner | Patching approach |
|---|---|---|
| Docker base image | AI connectors team | Update `FROM python:3.13-slim` digest in `Dockerfile` → rebuild via CI/CD |
| Python dependencies | AI connectors team | `uv lock --upgrade-package <package>` → commit updated `uv.lock` → CI/CD |
| Terraform external modules | AI connectors team | Bump `?ref=v{tag}` in `terragrunt.hcl` → `terragrunt plan` → `apply` |
| Terraform providers | AI connectors team | Update version constraints in `versions.tf` → `terragrunt init` → `apply` |
| GitHub Actions pinned SHAs | AI connectors team | Update SHA pin in workflow files |
| AWS managed services | AWS | Automatic — no action required |
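
The table above translates into a small set of routine commands. A sketch of the most common cases; the package placeholder and the editing steps are assumptions about typical usage, not prescribed procedure:

```bash
# Base image fix: bump the FROM digest in the Dockerfile, then let CI/CD rebuild and redeploy

# Python dependency fix (package name is a placeholder)
uv lock --upgrade-package <package>            # refresh the pin in uv.lock, then commit and open a PR

# Terraform external module bump: edit ?ref=v{tag} in terragrunt.hcl first, then
terragrunt plan && terragrunt apply

# Terraform provider bump: raise the constraint in versions.tf first, then
terragrunt init -upgrade && terragrunt apply
```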

## 4. Urgency and Severity Handling

| Severity | CVSS score | Response time target | Process |
|---|---|---|---|
| Critical | ≥ 9.0 | Aim to patch within 48 hours | Hotfix branch, apply patch, run Snyk scan, deploy to dev, promote to prod via emergency change (Production gate approval required) |
| High | 7.0 – 8.9 | Aim to patch within 7 days | Create a tracked GitHub issue; patch and deploy via the normal CI/CD pipeline |
| Medium | 4.0 – 6.9 | Aim to patch within 30 days | Patch when relevant findings are surfaced; deploy via the normal CI/CD pipeline |
| Low | < 4.0 | Patch when relevant | Batched with other updates as convenient |
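
For the Critical path, the end-to-end flow looks roughly like the sketch below; the branch name, placeholder package, and CVE identifier are illustrative assumptions:

```bash
# Sketch: 48-hour Critical hotfix flow (branch name and CVE id are placeholders)
git checkout -b hotfix/cve-fix main
uv lock --upgrade-package <affected-package>    # or Dockerfile/Terraform change as appropriate
git commit -am "fix: patch <CVE-ID>" && git push -u origin hotfix/cve-fix
# Open the PR: Snyk scans run automatically; merge deploys to dev;
# prod promotion goes through the Production environment gate as an emergency change.
```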

If a patch is not available from the vendor (e.g. a zero-day with no fix), the risk is documented in section 7 (Deviations) and a compensating control is applied where possible (e.g. a network-level block or disabling the affected feature).


## 5. Testing Before Deployment

All patches follow the same pipeline as feature changes — no exceptions, even for Critical CVEs:

1. Apply patch in a feature branch
2. CI: Snyk scans — all four scan types run on the PR; confirm the CVE is resolved
3. Deploy to dev — triggered automatically on merge to `main`; ECS task definition updated; new container deployed
4. Smoke tests — automated BDD smoke tests run against the dev endpoint
5. Health check — `GET /health` must return 200 with the new image (see the sketch after this list)
6. Manual verification (for Critical patches) — the team performs a test tool call to confirm functionality
7. Deploy to prod — requires manual approval via the GitHub Actions Production environment gate
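
A minimal verification sketch for step 5; the dev endpoint URL is an assumption:

```bash
# Sketch: post-deploy health check against dev (URL is a placeholder)
curl -fsS https://<mcp-dev-endpoint>/health   # -f makes any non-2xx response exit non-zero
```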

Rollback: ECS supports rapid rollback by updating the service to the previous task definition revision. ECR retains the previous image tagged with the previous commit SHA. Rollback time is typically under 5 minutes.
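
The rollback itself is a single service update. A hedged sketch with placeholder cluster, service, and revision names:

```bash
# Sketch: roll back to the previous ECS task definition revision (names are placeholders)
aws ecs update-service \
  --cluster <cluster> \
  --service <service> \
  --task-definition <family>:<previous-revision>
aws ecs wait services-stable --cluster <cluster> --services <service>
```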


## 6. Patch Evidence

Patch activity is evidenced through existing tooling:

- Snyk — PR scan results and resolved findings are recorded in the GitHub Security tab per pull request
- GitHub — commit history and merged PRs provide a full audit trail of dependency and image changes; PR descriptions reference the CVE(s) addressed
- GitHub Security tab — SARIF findings from Snyk are tracked and closed as fixes are merged
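
Where evidence needs to be exported for an audit, the GitHub CLI can pull the same trail. A sketch; the repository path and search terms are assumptions:

```bash
# Sketch: exporting patch evidence with the GitHub CLI (repo and queries are placeholders)
gh api 'repos/<org>/<repo>/code-scanning/alerts?state=fixed' --paginate   # SARIF findings closed by merged fixes
gh pr list --repo <org>/<repo> --state merged --search 'CVE'              # merged PRs referencing CVEs
```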

## 7. Deviations from Supplier Recommendations

| Date | Component | Patch | Reason for delay | Risk accepted by | Target date |
|---|---|---|---|---|---|