Containers changed how we build and ship software. They also changed how attackers find things we accidentally left behind. If you came back from RSA Conference this year with "DevSecOps" ringing in your ears, you are not alone. The shift-left security conversation has finally reached the point where ignoring container security is not just risky, it is negligent.

The problem is that most teams treat Docker images like black boxes. They build them, push them, deploy them, and never look inside. Meanwhile, those images are carrying hardcoded API keys, running as root, and built on base images with hundreds of known vulnerabilities. Let's fix that.

The Top 10 Container Security Mistakes (and How to Fix Each One)

These are the mistakes we see over and over again when auditing containerized environments. Every single one of them is fixable, and most of the fixes take less than five minutes.

1. Hardcoding Secrets in Docker Images

This is the big one. Teams copy .env files into images, pass API keys as build arguments, or hardcode database credentials directly into application config files. The problem is that Docker images are layered. Even if you delete a secret in a later layer, it still exists in the image history. Anyone who pulls the image can run docker history and see every layer, including the ones where you added and then "removed" that secret.

The fix: Never put secrets in your Dockerfile or image layers. Use runtime injection instead. In Kubernetes, use Secrets objects mounted as volumes or environment variables. In Docker Compose, use the secrets directive. For CI/CD builds, use your platform's secret management (GitHub Actions secrets, GitLab CI variables, etc.) and pass them at runtime, never at build time.
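As a minimal sketch of runtime injection in Kubernetes (the secret name, key, and image below are placeholders, not values from your environment):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials          # placeholder name
type: Opaque
stringData:
  DATABASE_URL: "replace-me"    # placeholder; set via kubectl or your secrets tooling
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: your-registry/your-app:1.0
    envFrom:
    - secretRef:
        name: app-credentials   # every key in the Secret becomes an env var
```

The secret lives in the cluster, never in an image layer, and rotating it does not require rebuilding anything.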

2. Running Containers as Root

By default, Docker containers run as root. That means if an attacker exploits a vulnerability in your application, they have root access inside the container. Combined with a container escape vulnerability (and those do exist), that root access can extend to the host system.

The fix: Add a USER instruction in your Dockerfile to run as a non-root user. We will cover the exact syntax in the Dockerfile template below. In Kubernetes, enforce this with a securityContext that sets runAsNonRoot: true and drops all capabilities.

3. Using Bloated Base Images

Starting your Dockerfile with FROM ubuntu:latest or FROM node:18 pulls in an image with hundreds of packages your application does not need. Every extra package is extra attack surface. A full Ubuntu image might have 400+ packages, and a quick Trivy scan will typically find dozens of CVEs in there, most of which have nothing to do with your app.

The fix: Use minimal base images. Alpine variants (node:18-alpine, python:3.12-alpine) are dramatically smaller. Even better, use distroless images from Google (gcr.io/distroless/nodejs) that contain only your application runtime and nothing else. No shell, no package manager, no extra utilities for attackers to leverage.
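As a sketch of a distroless runtime stage (this assumes an earlier `builder` stage that compiled the app into /app/dist, as in the template later in this post):

```dockerfile
# Runtime stage only -- assumes a "builder" stage compiled the app to /app/dist
FROM gcr.io/distroless/nodejs18-debian12
WORKDIR /app
COPY --from=builder /app/dist ./dist
# The distroless Node.js image already sets "node" as the entrypoint,
# so CMD lists only the script to run
CMD ["dist/server.js"]
```

One trade-off to know about: with no shell in the image, you cannot use a shell-based HEALTHCHECK; rely on your orchestrator's HTTP or TCP probes instead.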

4. Not Pinning Image Versions

Using FROM node:latest means your build could produce a completely different image tomorrow than it did today. An upstream compromise or breaking change in the base image will silently flow into your builds. This is exactly the kind of supply chain risk that makes security teams lose sleep.

The fix: Pin to a specific version and digest. Instead of FROM node:latest, use something like FROM node:18.19.1-alpine3.19@sha256:abcd1234.... The digest guarantees you are pulling the exact same image every time, even if the tag gets moved.

5. Ignoring Base Image Vulnerabilities

Even if you write perfectly secure application code, your image inherits every vulnerability in its base image. Teams that never scan their images are deploying known CVEs to production without realizing it. We have seen production images with 50+ high and critical vulnerabilities sitting in packages the application never even uses.

The fix: Scan every image before it gets deployed. Trivy makes this trivially easy and we will walk through the commands below. Integrate scanning into your CI/CD pipeline and block deployments that contain critical or high-severity vulnerabilities.

6. No Resource Limits on Containers

Without CPU and memory limits, a single container can consume all available resources on the host. This is not just a stability problem. It is a security problem. An attacker who gains access to a container without resource limits can launch a denial-of-service attack against the entire node, affecting every other workload running on it.

The fix: Always set resource requests and limits. In Kubernetes:

resources:
  requests:
    memory: "128Mi"
    cpu: "250m"
  limits:
    memory: "256Mi"
    cpu: "500m"

In Docker, use --memory and --cpus flags. In Docker Compose, set deploy.resources.limits.
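In Docker Compose that looks like the following (service name and image are placeholders; recent versions of `docker compose` apply `deploy.resources` limits even outside Swarm mode, but verify on your version):

```yaml
services:
  app:
    image: your-registry/your-app:1.0
    deploy:
      resources:
        limits:
          memory: 256M     # hard ceiling; container is killed if exceeded
          cpus: "0.50"     # at most half a CPU core
        reservations:
          memory: 128M     # scheduler guarantee, analogous to a k8s request
```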

7. Unrestricted Network Policies

By default, every pod in a Kubernetes cluster can talk to every other pod. That means if an attacker compromises one workload, they can reach your database, your internal APIs, your secrets management service, everything. This flat network model is the container equivalent of having no firewall at all.

The fix: Implement Kubernetes NetworkPolicies to enforce least-privilege pod-to-pod communication. We have a full example below. At minimum, create a default-deny policy for each namespace and then explicitly allow only the traffic that needs to flow.

8. Using Docker's Default Bridge Network

The default bridge network in Docker allows all containers to communicate with each other by IP address. It also provides no automatic DNS resolution between container names, which encourages hardcoding IPs. This creates a flat network where any compromised container can reach any other.

The fix: Create user-defined bridge networks for each logical group of containers. Containers on different user-defined networks cannot communicate unless you explicitly connect them. This gives you network segmentation with a single command: docker network create my-app-network.
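A sketch of what that segmentation looks like in Docker Compose (service names and images are placeholders): the database sits on a backend network the web tier cannot reach, and only the API bridges the two.

```yaml
services:
  web:
    image: your-registry/web:1.0
    networks: [frontend]
  api:
    image: your-registry/api:1.0
    networks: [frontend, backend]   # the only service on both segments
  db:
    image: postgres:16-alpine
    networks: [backend]             # unreachable from web

networks:
  frontend:
  backend:
```

User-defined networks also give you DNS by service name (web can reach `api` by name), so there is no excuse for hardcoded IPs.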

9. Not Using Multi-Stage Builds

Single-stage Dockerfiles that install build tools, compile code, and serve the application all in one image leave compilers, debuggers, package managers, and other development tools in the final image. These tools give attackers more to work with if they get in.

The fix: Use multi-stage builds. Compile your application in a build stage with all the tools you need, then copy only the compiled output into a minimal runtime image. The final image contains just your application binary and its runtime dependencies. Nothing else.

10. Skipping Image Signing and Provenance

Without image signing, you have no way to verify that the image you are deploying is the same image your CI/CD pipeline built. A compromised registry, a man-in-the-middle attack, or even a misconfigured pull policy can result in deploying tampered images to production.

The fix: Sign your images using Cosign (part of the Sigstore project). In Kubernetes, enforce signature verification with an admission controller like Kyverno or OPA Gatekeeper. This ensures that only images signed by your CI/CD pipeline can run in your cluster.
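On the enforcement side, a Kyverno ClusterPolicy like the following blocks unsigned images. This is a sketch assuming Kyverno 1.8+ and a Cosign keypair generated with `cosign generate-key-pair`; the registry pattern and public key are placeholders.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce   # reject, don't just audit
  rules:
  - name: require-cosign-signature
    match:
      any:
      - resources:
          kinds:
          - Pod
    verifyImages:
    - imageReferences:
      - "your-registry/*"            # placeholder: scope to your own registry
      attestors:
      - entries:
        - keys:
            # placeholder: paste the contents of the cosign.pub
            # generated by your CI/CD pipeline's keypair
            publicKeys: "<cosign.pub contents>"
```

With this in place, a pod whose image lacks a valid signature from your pipeline's key is rejected at admission time.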


Scanning Images with Trivy

Trivy is a free, open source vulnerability scanner from Aqua Security that works with container images, filesystems, Git repositories, and Kubernetes clusters. It is one of the fastest ways to find vulnerabilities, misconfigurations, and embedded secrets in your containers.

Install Trivy

On macOS:

brew install trivy

On Linux:

curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin

Scan a container image for vulnerabilities

trivy image your-registry/your-app:latest

Trivy will pull the image (or use the local cache), analyze every layer, and output a table of vulnerabilities with severity, CVE ID, installed version, and fixed version.

Scan for secrets embedded in the image

trivy image --scanners secret your-registry/your-app:latest

This checks every file in the image for hardcoded passwords, API keys, private keys, and other sensitive data. You would be surprised how often this catches something.

Scan for misconfigurations in a Dockerfile

trivy config ./Dockerfile

This checks your Dockerfile against security best practices and flags issues like running as root, using ADD instead of COPY, or missing health checks.

Fail a CI/CD pipeline on critical vulnerabilities

trivy image --exit-code 1 --severity CRITICAL,HIGH your-registry/your-app:latest

The --exit-code 1 flag makes Trivy return a non-zero exit code when it finds vulnerabilities at the specified severity levels. Wire this into your CI/CD pipeline and deployments with critical vulnerabilities will be blocked automatically.
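In GitHub Actions, for example, the gate can be a single step using the official aquasecurity/trivy-action (shown here against `master` for brevity; pin a released tag in a real pipeline, and treat the image reference as a placeholder):

```yaml
- name: Scan image for critical vulnerabilities
  uses: aquasecurity/trivy-action@master   # pin a released tag in practice
  with:
    image-ref: your-registry/your-app:${{ github.sha }}
    exit-code: "1"                         # non-zero exit fails the job
    severity: CRITICAL,HIGH
```

Because the step exits non-zero on findings, the job fails and anything downstream (push, deploy) never runs.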

Generate a scan report in JSON

trivy image -f json -o results.json your-registry/your-app:latest

JSON output is useful for feeding results into dashboards, ticketing systems, or SBOM workflows. If you are already generating SBOMs (and if you are not, check out our SBOM guide), Trivy can consume and scan those as well.


Secure Dockerfile Template

Here is a production-ready Dockerfile template that incorporates the security best practices we have covered. This example uses Node.js, but the patterns apply to any language.

# Stage 1: Build
FROM node:18.19.1-alpine3.19 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Runtime (production deps only)
FROM node:18.19.1-alpine3.19
RUN addgroup -g 1001 appgroup && adduser -u 1001 -G appgroup -s /bin/sh -D appuser
WORKDIR /app
COPY --from=builder --chown=appuser:appgroup /app/package*.json ./
RUN npm ci --omit=dev
COPY --from=builder --chown=appuser:appgroup /app/dist ./dist
USER appuser
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "dist/server.js"]

What makes this secure: the base image is pinned to an exact version, the build stage (with its compilers and dev dependencies) never reaches production, the runtime image carries production dependencies only, a dedicated non-root user owns the files and runs the process, and the HEALTHCHECK gives your orchestrator a way to detect a hung container. For maximum effect, add a digest to the FROM lines as described in mistake #4.


Kubernetes NetworkPolicy for Least-Privilege Pod Communication

By default, Kubernetes allows all pods to talk to all other pods. That is convenient for development but dangerous in production. NetworkPolicies let you define exactly which pods can communicate with which other pods. Think of them as firewall rules for your cluster.

Step 1: Default Deny All Traffic

Start by blocking all ingress and egress traffic in your namespace. Then explicitly allow only what is needed.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

With this policy in place, no pod in the production namespace can send or receive any traffic unless another NetworkPolicy explicitly allows it.

Step 2: Allow Specific Traffic

Now allow your frontend pods to reach your API pods on port 8080, and allow your API pods to reach your database on port 5432.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080

This policy says: only pods labeled app: frontend can send traffic to pods labeled app: api, and only on port 8080. Everything else is denied. You would create a similar policy to allow API pods to reach the database.
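As a sketch, the companion database policy follows the same shape (assuming your database pods carry an app: database label):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-database
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: database        # assumed label on the database pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api
    ports:
    - protocol: TCP
      port: 5432           # PostgreSQL
```

One caveat: because the default-deny policy also blocks Egress, the API pods need a matching egress rule (and typically an allowance for DNS to kube-dns) before traffic actually flows.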

The key principle here is the same one we apply to cloud IAM and firewall rules: deny by default, allow by exception, and document every exception.


Kubernetes Security Context: The Full Picture

Beyond NetworkPolicies, Kubernetes lets you lock down what a container can do at the OS level using security contexts. Here is what a properly hardened pod spec looks like:

securityContext:
  runAsNonRoot: true
  runAsUser: 1001
  fsGroup: 1001
  seccompProfile:
    type: RuntimeDefault
containers:
- name: app
  securityContext:
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    capabilities:
      drop:
      - ALL

This configuration ensures the container runs as a non-root user, cannot escalate privileges, has a read-only filesystem (preventing attackers from writing malicious files), uses the default seccomp profile to restrict system calls, and drops all Linux capabilities. Combined with NetworkPolicies and resource limits, this makes a compromised container far less useful to an attacker.


The Post-RSA DevSecOps Reality Check

RSA Conference 2026 was packed with talks about shifting security left. Vendors showcased platforms that promise to automate everything from image scanning to runtime protection. And some of those tools are genuinely excellent. But the reality is that the majority of container security incidents we see in the wild are caused by the same basic mistakes we covered in this post.

"You do not need a six-figure platform to fix container security. You need a secure Dockerfile, a free scanner, and the discipline to use both."

The DevSecOps conversation is not really about tools. It is about making security a natural part of how your team builds software. That starts with a Dockerfile template that your developers actually use, a CI/CD gate that blocks insecure images before they reach production, and a set of Kubernetes policies that enforce guardrails even when someone makes a mistake.

If your team already has these basics in place, then absolutely invest in the advanced tooling. Runtime protection, behavioral analysis, and automated remediation are powerful capabilities. But if you are still running containers as root with hardcoded secrets and unscanned base images, start here first. The ROI on getting these fundamentals right is massive.