# Security Baseline and System Hardening

Relevant controls: SC.05.01, SC.05.02, SC.05.04, SC.05.06

> **Compliance reference:** This document supports ITRA control SC-05 System Hardening.


## 1. Container Hardening

All MCP containers share the same base Dockerfile pattern (multi-stage build, python:3.13-slim runtime image). Hardening measures applied at build time:

| Hardening measure | Applied? | Notes |
|---|---|---|
| Minimal base image (`python:3.13-slim`) | Yes | No full Python or Debian image; no unnecessary OS tools |
| Multi-stage build | Yes | Builder stage uses `uv` image; runner stage copies only `.venv` + `src/` |
| No dev dependencies in image | Yes | `uv sync --no-dev --frozen --no-install-project` in builder |
| No hardcoded secrets in image | Yes | Secrets injected via ECS `secrets` block from SSM at runtime |
| `PYTHONDONTWRITEBYTECODE=1` | Yes | Prevents `.pyc` files being written to the filesystem |
| Non-root user | No | Containers run as root (no `USER` instruction in Dockerfiles) — gap to address |
| No shell access in production | Partial | `python:3.13-slim` includes `/bin/sh`; no explicit shell removal |

**Gap — non-root user:** No `USER` directive is present in any MCP Dockerfile. Containers currently run as root inside the ECS task. The read-only root filesystem and `noexec` tmpfs provide some mitigation, but adding a dedicated non-root user is a recommended hardening improvement.
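Closing this gap typically takes only a few lines in the runner stage. A minimal sketch, assuming the multi-stage layout described above (the `app` user name, UID, and `/app` paths are illustrative, not taken from the actual Dockerfiles):

```dockerfile
# Runner stage (sketch): create a dedicated unprivileged user
FROM python:3.13-slim AS runner

# Illustrative UID/GID; any fixed non-zero value works
RUN groupadd --gid 10001 app && \
    useradd --uid 10001 --gid app --no-create-home --shell /usr/sbin/nologin app

# Copy the venv and source from the builder stage, owned by the new user
COPY --from=builder --chown=app:app /app/.venv /app/.venv
COPY --from=builder --chown=app:app /app/src /app/src

# Drop privileges for everything that follows, including the entrypoint
USER app
```

Using a fixed, high UID also makes the task compatible with a future `user` override in the ECS task definition.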


## 2. ECS Task Configuration

Security-relevant settings from the ECS task definition (base YAML, all MCPs):

| Setting | Value | Notes |
|---|---|---|
| Privileged mode | Disabled | Not set; Fargate does not support privileged mode |
| Network mode | `awsvpc` | Each task gets its own ENI; no host networking |
| Read-only root filesystem | Yes (SharePoint, Outlook); No (Teams, Databricks) | `readonlyRootFilesystem: true` in task definition |
| `/tmp` tmpfs | 64 MB, `rw,noexec,nosuid` | SharePoint, Outlook only; prevents executable writes to `/tmp` |
| Secrets source | SSM SecureString via ECS `secrets` block | Client ID and secret never appear as plaintext environment variables or in logs |
| CPU / memory | 256 CPU units (0.25 vCPU) / 512 MB | Resource limits enforced by Fargate |
| Log driver | `awslogs` | Stdout/stderr captured to CloudWatch Logs; no local log files |
| Health check | `GET /health` (HTTP, 30 s interval, 3 retries) | ECS replaces unhealthy tasks automatically |
| Capacity provider | `FARGATE_SPOT` | AWS-managed infrastructure; no direct EC2 access |
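The read-only filesystem, tmpfs, and secrets settings combine in the container definition roughly as follows. A sketch of the relevant task-definition fragment, assuming the SharePoint/Outlook configuration above (the container name and SSM parameter ARNs are placeholders):

```yaml
containerDefinitions:
  - name: mcp-server                  # placeholder name
    readonlyRootFilesystem: true
    linuxParameters:
      tmpfs:
        - containerPath: /tmp
          size: 64                    # MB
          mountOptions: ["rw", "noexec", "nosuid"]
    secrets:                          # injected from SSM SecureString at runtime
      - name: CLIENT_ID
        valueFrom: arn:aws:ssm:…:parameter/placeholder/client-id
      - name: CLIENT_SECRET
        valueFrom: arn:aws:ssm:…:parameter/placeholder/client-secret
```

Because `secrets` entries are resolved by the ECS agent at task start, the secret values are never present in the task definition itself or in `docker inspect` output.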

## 3. Network Access Restrictions

ECS tasks run in private subnets with no direct internet exposure. Traffic is controlled by security groups:

| Rule | Direction | Port / Protocol | Source / Destination |
|---|---|---|---|
| HTTP from ALB | Inbound | 80 TCP | ALB security group only |
| All other inbound | Blocked | — | No other inbound rules |
| HTTPS to Azure AD / Microsoft Graph | Outbound | 443 TCP | Via NAT Gateway → internet |
| HTTPS to Databricks workspaces | Outbound | 443 TCP | Via NAT Gateway → internet |
| HTTPS to DynamoDB | Outbound | 443 TCP | Via VPC Gateway endpoint (no NAT) |
| HTTPS to Kinesis Firehose | Outbound | 443 TCP | Via VPC |
| HTTPS to SSM Parameter Store | Outbound | 443 TCP | Via VPC |
| HTTPS to ECR | Outbound | 443 TCP | Via S3 Gateway VPC endpoint |
| All other outbound | Allowed | — | AWS Fargate requirement (no restrictive egress SG) |

ECS tasks are never directly reachable from the internet — only the ALB is internet-facing. The ALB enforces TLS termination and redirects HTTP → HTTPS. See data-storage-encryption.md for TLS policy details and encryption in transit across all platform components.
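The inbound rule in the table above can be expressed as a single security-group ingress that references the ALB's security group rather than a CIDR range, so reachability follows the ALB wherever its IPs move. A CloudFormation-style sketch (resource names are hypothetical; the actual infrastructure code may use a different IaC tool):

```yaml
TaskSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: "ECS task SG: HTTP from ALB only"
    VpcId: !Ref Vpc                                     # hypothetical parameter
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        SourceSecurityGroupId: !Ref AlbSecurityGroup    # ALB SG, not a CIDR
    # No other ingress rules; egress is left open per the Fargate requirement above
```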


## 4. Session and Token Timeout

All tokens have a ~1 hour TTL enforced by Azure AD. There are no long-lived or non-expiring sessions. The OBO flow, token caching in DynamoDB, and TTL mechanics are described in access-management.md; DynamoDB TTL retention is covered in data-storage-encryption.md.


## 5. Time Synchronisation

ECS Fargate tasks synchronise time with the AWS NTP infrastructure automatically. The AWS hypervisor provides an NTP endpoint at 169.254.169.123 (Amazon Time Sync Service), which all Fargate tasks use by default with no application-level configuration required. This ensures log timestamps and JWT nbf/exp validation are accurate.
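Accurate clocks matter because `nbf`/`exp` checks compare token claims against local time. A small leeway is the standard guard against residual skew; a pure-Python sketch of the check (the 60-second leeway is an illustrative default, not a platform setting):

```python
import time


def validate_time_claims(claims: dict, leeway: int = 60) -> bool:
    """Check JWT nbf/exp claims against the local clock, allowing small skew."""
    now = time.time()
    nbf = claims.get("nbf")
    exp = claims.get("exp")
    if nbf is not None and now < nbf - leeway:
        return False  # token not yet valid
    if exp is not None and now > exp + leeway:
        return False  # token expired
    return True
```

JWT libraries such as PyJWT expose the same idea via a `leeway` parameter on their decode functions.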


## 6. Default Credentials

| Category | Status |
|---|---|
| Default OS accounts | Not applicable — Fargate containers have no persistent OS user database |
| Default application credentials | None — all credentials are provisioned per-environment via SSM; no defaults exist |
| Azure AD client secrets | Provisioned and rotated per secrets-management.md |
| AWS IAM roles | No passwords or access keys — roles are assumed by the ECS agent/task via instance metadata; no human login capability |
| DynamoDB | No application-level credentials — accessed via IAM role |
| Kinesis Firehose | No application-level credentials — accessed via IAM role |

No shared, default, or hard-coded credentials exist anywhere in the platform.


## 7. Vulnerability and Security Scanning

Vulnerability scanning (Snyk CI scans, GitHub Advanced Security, AWS Security Hub, linting) is covered in full in vulnerability-patch-management.md.


## 8. Secure Development

### Code Review

All changes to `main` require a pull request. No direct pushes to `main` are permitted. Pull requests must pass:

1. Lint (`ruff` + `black`) — blocks the build
2. Snyk scans — advisory; findings must be reviewed before merge

Production deployments additionally require manual approval via the GitHub Actions Production environment gate.
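The gate works by binding the deploy job to a GitHub environment with required reviewers configured, so the job pauses until a reviewer approves. A workflow fragment sketch (job names and the deploy step are illustrative; the real workflow may differ):

```yaml
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ruff check . && black --check .   # blocks the build on failure

  deploy-production:
    needs: lint
    runs-on: ubuntu-latest
    environment: Production    # manual approval enforced by required reviewers
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh               # placeholder deploy step
```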

### Security Guidelines

Developers follow:

- **OWASP Top 10 awareness** — input validation, injection prevention, broken access control
- **OWASP Secure MCP Development Guide** — reviewed periodically via the `/security-review` skill, which cross-references each MCP against the guide and creates GitHub issues for identified gaps
- **No secrets in code** — enforced by GHAS secret scanning, push protection, and `.gitignore` coverage for `.env` files (see secrets-management.md §4)
- **Minimum permissions** — all IAM roles, Azure AD scopes, and Graph API permissions are scoped to read-only; any new additions are checked in code review

### Dependency Management

Dependency pinning, SBOM generation, and update processes are covered in vulnerability-patch-management.md.