You've now written Dockerfiles for the Task API through six lessons. Each time, you made similar decisions: base image selection, dependency strategy, layer optimization, security posture. What if you could encode this reasoning so AI can apply it consistently to ANY project?
That's what skills do. A skill captures domain expertise in a format that AI can apply reliably across contexts. Instead of re-explaining your Docker preferences every time, you encode them once. The AI then reasons through your principles for each new project, producing Dockerfiles that match your production standards.
This lesson teaches you to transform your Docker knowledge into a reusable skill. You'll learn the Persona + Questions + Principles pattern that makes skills effective, create a complete SKILL.md file, and test it against projects you haven't seen before.
Not every workflow deserves a skill. Creating skills takes effort. You need to identify patterns that justify that investment.
Three criteria determine whether a pattern is worth encoding:

- Frequency: you perform the task repeatedly across projects
- Complexity: each instance involves many real decisions, not one rote step
- Leverage: doing the task faster or better compounds across everything that depends on it
Docker containerization meets all three. You containerize services repeatedly, each Dockerfile involves 8-10 decisions, and faster containerization accelerates deployment across all projects.
Patterns that DON'T justify skills: one-off tasks you won't repeat, tasks with a single obvious answer and no real decisions to encode, and tasks where doing them faster changes little downstream.
Exercise: Before continuing, list three patterns from your own work that might justify skills. Apply the three criteria to each.
Effective skills follow a consistent structure. Each component serves a specific purpose:
A persona establishes HOW the AI should think. It's not just "you are an expert"; that's too vague. A good persona specifies the perspective and priorities that produce the right reasoning.
Weak persona: "You are a Docker expert."
Strong persona: "Think like a DevOps engineer who optimizes container images for production Kubernetes deployments. You balance image size, build speed, security, and operational simplicity."
The strong persona tells the AI:

- A role: a DevOps engineer, not a generic assistant
- A context: production Kubernetes deployments
- The dimensions to balance: image size, build speed, security, and operational simplicity
Analysis questions force the AI to gather context before acting. Without them, AI produces generic solutions. With them, AI reasons about YOUR specific situation.
Generic approach (no questions):
Context-aware approach (with questions):
Each question targets a decision point. The answers shape the Dockerfile.
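To make this concrete, here is a sketch of what such a question set could look like. The exact questions are my illustration, drawn from the decision points this skill covers, not the lesson's original list:

```markdown
## Analysis Questions

Before writing the Dockerfile, answer:

1. What language and version does the project use? (determines base image)
2. How are dependencies declared — requirements.txt, uv.lock, pyproject.toml? (determines install strategy)
3. Does anything need compilation, such as C extensions? (determines alpine vs slim, multi-stage)
4. What port does the service listen on, and is there a health endpoint? (determines EXPOSE and HEALTHCHECK)
5. Are there files over 100MB, like models or datasets? (determines volume mounts)
```

Note how each answer maps directly to one Dockerfile decision, which is what keeps the output from being generic.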
Principles are rules that apply regardless of context. They encode your hard-won lessons about what works in production.
The Docker containerization principles for this skill appear below as P4 through P13.
These aren't suggestions. They're non-negotiables that every Dockerfile should follow unless there's explicit justification to deviate.
Let's build the complete skill, component by component.
P4: UV for Python. Always use the UV package manager for Python; it's 10-100x faster than pip:
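A minimal sketch of P4 in Dockerfile form, assuming a requirements.txt exists; bootstrapping UV via pip is one option among several:

```dockerfile
FROM python:3.12-alpine

# Bootstrap UV, then use it instead of pip for dependency installation
RUN pip install --no-cache-dir uv
COPY requirements.txt .
RUN uv pip install --system --no-cache -r requirements.txt
```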
P5: Lock Files Required. Use requirements.txt with pinned versions or uv.lock for reproducibility. Never install without version constraints in production.
P6: Alpine Default. Start with python:3.12-alpine (50MB). Fall back to slim (150MB) only if alpine causes compatibility issues with specific packages.
P7: Pin Versions. Use python:3.12-alpine, not python:alpine. Explicit versions prevent surprise breakage when base images update.
P8: Non-Root User. Create and switch to a non-root user:
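A sketch of what this looks like on an alpine base (the user name `app` is illustrative):

```dockerfile
# Create an unprivileged user and group (BusyBox/Alpine syntax), then drop root
RUN addgroup -S app && adduser -S -G app app
USER app
```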
P9: No Secrets in Image. Never COPY .env files, credentials, or API keys into the image. Inject them via the environment at runtime. Use Docker secrets or Kubernetes secrets for sensitive data.
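At runtime, configuration comes from the host environment instead of the image; the variable and image names here are illustrative:

```shell
# Secrets come from the host environment, never from the image
docker run -e API_KEY="$API_KEY" -e DATABASE_URL="$DATABASE_URL" task-api
```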
P10: Minimal Installed Packages. Install only what the runtime needs; build tools stay in the build stage.
P11: Health Checks Mandatory. Every production container needs a HEALTHCHECK:
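One possible shape, using BusyBox wget since it ships with alpine images; the port and `/health` path are assumptions about the service:

```dockerfile
# Mark the container unhealthy if the endpoint stops responding
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:8000/health || exit 1
```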
P12: Environment Variables for Configuration. All configuration goes through ENV; no hardcoded values in the Dockerfile.
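For example, defaults declared with ENV can still be overridden at runtime; the variable names are illustrative:

```dockerfile
# Defaults that can be overridden with `docker run -e`
ENV PORT=8000 \
    LOG_LEVEL=info
```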
P13: Volume Mount, Don't COPY. Files over 100MB (models, datasets) should be volume-mounted at runtime:
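A sketch of the runtime mount; the host path, container path, and image name are illustrative:

```shell
# Mount large files read-only at runtime instead of baking them into the image
docker run -v /srv/models:/app/models:ro task-api
```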
Never embed large files in the image.
Think like a DevOps engineer who optimizes container images for production Kubernetes deployments. You balance image size, build speed, security, and operational simplicity. When tradeoffs exist:
Before generating a Dockerfile, analyze the project:
When generating Dockerfiles, produce:
Use this skill when: