The problem
A Kotlin microservices platform had Docker images ranging from 3.2GB to 4.7GB each. Deployments took 15-20 minutes per service. K8s nodes ran out of disk space pulling images. The AWS ECR bill hit $8,400/month just for container storage. CI/CD pipelines timed out regularly. Rolling updates caused 10-minute service disruptions as nodes struggled to pull massive images.
How AI created this issue
The team asked ChatGPT to create a Dockerfile for their Ktor application. ChatGPT generated a typical "kitchen sink" approach:
# ChatGPT's Dockerfile - includes everything
FROM ubuntu:latest
# Install everything that might be needed
RUN apt-get update && apt-get install -y \
openjdk-17-jdk \
maven \
gradle \
git \
curl \
wget \
vim \
nano \
build-essential \
python3 \
nodejs \
npm \
postgresql-client \
mysql-client \
redis-tools \
&& rm -rf /var/lib/apt/lists/*
# Copy entire project directory
COPY . /app
WORKDIR /app
# Build application (downloads all dependencies again)
RUN gradle build
# Install additional tools
RUN npm install -g yarn pm2 nodemon
# Run with full JDK
CMD ["java", "-jar", "build/libs/app.jar"]
ChatGPT's Dockerfile used a full Ubuntu base, installed both Maven and Gradle, included unnecessary development tools, and copied the entire project directory including .git, node_modules, and test files. The AI treated containers like VMs, including everything "just in case."
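Much of that bloat came from the build context itself: COPY . /app pulled the repository's .git history, node_modules, and test artifacts into the image regardless of what the Dockerfile installed. A .dockerignore file keeps that material out of the build context entirely. The entries below are a minimal sketch assuming a typical Gradle/Ktor repository layout, not the team's actual file:
# .dockerignore - sketch for a typical Gradle/Ktor repo (adjust to your layout)
.git
.gradle
build/
out/
node_modules/
.idea/
*.md
Dockerfile
docker-compose*.yml
Trimming the context also means far less data is sent to the Docker daemon on every build, which speeds up CI even before the image itself shrinks.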
The solution
- Multi-stage builds with minimal runtime:
# Optimized multi-stage Dockerfile
# Stage 1: Build with JDK
FROM gradle:7.6-jdk17-alpine AS builder
WORKDIR /build
# Cache dependencies separately
COPY build.gradle.kts settings.gradle.kts ./
RUN gradle dependencies --no-daemon
# Build only what's needed
COPY src ./src
RUN gradle shadowJar --no-daemon
# Stage 2: Minimal runtime
FROM eclipse-temurin:17-jre-alpine
RUN apk add --no-cache dumb-init
# Non-root user
RUN addgroup -g 1001 ktor && \
    adduser -D -u 1001 -G ktor ktor
# Copy only the JAR
COPY --from=builder --chown=ktor:ktor \
    /build/build/libs/*-all.jar /app/app.jar
USER ktor
EXPOSE 8080
# Use dumb-init to handle signals properly
ENTRYPOINT ["dumb-init", "--"]
CMD ["java", "-XX:MaxRAMPercentage=75", "-jar", "/app/app.jar"]
- Distroless images for ultimate size reduction:
# Even smaller with distroless
FROM gcr.io/distroless/java17-debian11:nonroot
COPY --from=builder \
    /build/build/libs/*-all.jar /app/app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
# Final size: 189MB
- Dependency optimization: Removed unused dependencies and used jlink to build a custom JRE (see the sketch after this list)
- Layer caching strategy: Structured the Dockerfile so build scripts are copied and dependencies resolved before the source code, maximizing cache hits on rebuilds
- Container registry optimization: Enabled compression and deduplication
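For the jlink step referenced above, the idea is to assemble a stripped-down JRE containing only the modules the service actually uses. The stage below is a hedged sketch rather than the team's exact build: the module list is illustrative (derive the real one with jdeps --print-module-deps against the shadow JAR), and it assumes the builder stage from the multi-stage Dockerfile shown earlier.
# Sketch: custom JRE via jlink (module list is illustrative - use jdeps output for your JAR)
FROM eclipse-temurin:17-jdk-alpine AS jre-builder
RUN jlink \
    --add-modules java.base,java.logging,java.naming,java.sql,java.net.http \
    --strip-debug \
    --no-man-pages \
    --no-header-files \
    --compress=2 \
    --output /custom-jre
# Minimal runtime carrying only the custom JRE and the application JAR
FROM alpine:3.18
COPY --from=jre-builder /custom-jre /opt/jre
ENV PATH="/opt/jre/bin:$PATH"
COPY --from=builder /build/build/libs/*-all.jar /app/app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
Whether this beats the distroless variant depends on how many JDK modules the application really needs; the saving comes from dropping unused modules, not from the base image alone.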
The results
- Image size: 3.2-4.7GB → 189MB (94% reduction)
- Deployment time: 15-20min → 90 seconds
- ECR costs: $8,400 → $520/month (94% savings)
- K8s node disk usage: 85% → 22%
- CI/CD success rate: 68% → 99.5%
- Rolling update downtime: 10min → 30 seconds
The team learned that AI-generated examples often prioritize development convenience over production efficiency. Containers aren't VMs; they should contain only what is needed to run the application. They now use multi-stage builds by default and treat image size as a key performance metric.
Ready to fix your codebase?
Let us analyze your application and resolve these issues before they impact your users.
Get Diagnostic Assessment →