TL;DR
Docker works great for PHP in production, but the devil is in the details. Use multi-stage builds to keep images small. Configure PHP-FPM properly for container workloads. Handle signals correctly for graceful shutdowns. Log to stdout. And never run as root.
Why Docker for PHP?
Before diving into lessons learned, let's address the "why." PHP has worked fine without Docker for decades. Why add the complexity?
- Consistency: The same container runs locally, in CI, and in production
- Isolation: Multiple PHP versions on the same host without conflicts
- Immutability: No "it works on my machine" - the image is the artifact
- Scaling: Spin up new containers in seconds, not minutes
- Infrastructure as Code: The Dockerfile documents exactly how your environment is configured
That said, Docker introduces its own complexity. Here's what I've learned running PHP containers in production.
Lesson 1: Multi-stage builds are essential
A naive Dockerfile that installs Composer, npm, and all development dependencies creates massive images. Multi-stage builds solve this elegantly.
# Stage 1: Build assets
FROM node:20-alpine AS assets
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY resources/ resources/
COPY vite.config.js tailwind.config.js postcss.config.js ./
RUN npm run build
# Stage 2: Install PHP dependencies
FROM composer:2 AS vendor
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install \
    --no-dev \
    --no-scripts \
    --no-autoloader \
    --prefer-dist
COPY . .
RUN composer dump-autoload --optimize
# Stage 3: Production image
FROM php:8.3-fpm-alpine AS production
# Install production extensions
RUN apk add --no-cache \
    libpng-dev \
    libzip-dev \
    icu-dev \
    && docker-php-ext-install \
    pdo_mysql \
    gd \
    zip \
    intl \
    opcache \
    pcntl
# Copy application
WORKDIR /var/www/html
COPY --from=vendor /app/vendor vendor/
COPY --from=assets /app/public/build public/build/
COPY . .
# Set ownership
RUN chown -R www-data:www-data storage bootstrap/cache
USER www-data
EXPOSE 9000
CMD ["php-fpm"]
This approach:
- Keeps Node.js and npm out of the production image
- Keeps Composer out of the production image
- Only includes production dependencies
- Results in an image around 100MB instead of 500MB+
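One detail the Dockerfile above depends on: the final stage's COPY . . pulls in the entire build context, so a .dockerignore is what actually keeps locally installed dependencies and secrets out of the image (and out of the vendor stage, which installs its own dependencies). A typical starting point - adjust to your repository layout:

```text
# .dockerignore
.git
node_modules
vendor
storage/logs
.env
```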
Lesson 2: Configure PHP-FPM for containers
The default PHP-FPM configuration is designed for bare-metal servers, not containers. Several settings need adjustment.
Process management
Containers typically run a single service with a fixed memory limit. Static process management makes memory use predictable; with pm = dynamic, a traffic spike can spawn enough workers to blow past that limit and get the container OOM-killed:
; /usr/local/etc/php-fpm.d/www.conf
; Use static process management
pm = static
; Set based on container memory allocation
; Rule of thumb: (container memory - 128MB) / average request memory
pm.max_children = 10
; Request lifecycle
pm.max_requests = 500
request_terminate_timeout = 60s
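The rule of thumb above is easy to sanity-check with shell arithmetic. The numbers here are illustrative - a 640 MB container limit and roughly 50 MB per request; measure your own average with memory_get_peak_usage():

```shell
# pm.max_children rule of thumb:
# (container memory - 128MB headroom) / average request memory
container_mb=640
headroom_mb=128
avg_request_mb=50
echo $(( (container_mb - headroom_mb) / avg_request_mb ))  # prints 10
```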
Logging configuration
Docker expects logs on stdout/stderr. PHP-FPM defaults to files:
; Log to stderr for Docker
access.log = /proc/self/fd/2
error_log = /proc/self/fd/2
; Capture worker stdout
catch_workers_output = yes
decorate_workers_output = no
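The application should log the same way. For Laravel, that is a single environment variable - the stderr channel ships with the default config/logging.php:

```ini
# container environment
LOG_CHANNEL=stderr
```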
OPcache for production
; /usr/local/etc/php/conf.d/opcache.ini
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
opcache.validate_timestamps=0
opcache.revalidate_freq=0
opcache.preload=/var/www/html/preload.php
opcache.preload_user=www-data
Important: OPcache Validation
Setting opcache.validate_timestamps=0 means PHP won't check if files have changed. This is correct for production (deploy new containers for code changes) but will confuse you in development if you forget to disable it.
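One low-friction approach, if this has bitten you: keep a small override file that is only mounted (or only copied) into development images, leaving the production ini untouched:

```ini
; opcache.dev.ini - mount over the production settings in development only
opcache.validate_timestamps=1
; check for changed files on every request
opcache.revalidate_freq=0
```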
Lesson 3: Handle signals properly
When Kubernetes sends SIGTERM to your container (or Docker stops it), you want graceful shutdown. PHP-FPM handles this... if you configure it correctly.
; Allow graceful shutdown
process_control_timeout = 10
And in your Dockerfile, don't wrap php-fpm in a shell script:
# Bad - signals go to shell, not php-fpm
CMD ["sh", "-c", "php-fpm"]
# Good - signals go directly to php-fpm
CMD ["php-fpm"]
For Laravel queue workers in containers, ensure proper signal handling:
# Queue worker container
CMD ["php", "artisan", "queue:work", "--tries=3", "--timeout=60"]
Laravel's queue worker listens for SIGTERM and finishes the current job before exiting. But you need to configure Kubernetes to wait:
# kubernetes deployment
spec:
  terminationGracePeriodSeconds: 120  # Wait for job to finish
  containers:
    - name: worker
      lifecycle:
        preStop:
          exec:
            command: ["php", "artisan", "queue:restart"]
Lesson 4: Never run as root
Running containers as root is a security risk. If an attacker escapes the container, they have root on the host (in some configurations).
# Create a non-root user
RUN addgroup -g 1000 app && adduser -u 1000 -G app -D app
# Set ownership before switching users
RUN chown -R app:app /var/www/html/storage /var/www/html/bootstrap/cache
# Switch to non-root user
USER app
This requires careful attention to file permissions. The most common issues:
- Storage directory not writable
- Log files not writable
- Cache directories not writable
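One related tip: a separate RUN chown duplicates every affected file into a new image layer. COPY's --chown flag sets ownership during the copy instead:

```dockerfile
# Ownership applied in one step - no extra layer, no duplicated files
COPY --chown=app:app . /var/www/html
```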
Lesson 5: Health checks matter
Kubernetes and load balancers need to know if your container is healthy. A simple HTTP endpoint isn't enough - it needs to actually check dependencies.
// routes/web.php
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Redis;
use Illuminate\Support\Facades\Route;

Route::get('/health', function () {
    $checks = [];

    // Database connection
    try {
        DB::connection()->getPdo();
        $checks['database'] = 'ok';
    } catch (\Exception $e) {
        $checks['database'] = 'failed: ' . $e->getMessage();
    }

    // Redis connection
    try {
        Redis::ping();
        $checks['redis'] = 'ok';
    } catch (\Exception $e) {
        $checks['redis'] = 'failed: ' . $e->getMessage();
    }

    // Storage writable
    $testFile = storage_path('health-check-' . uniqid());
    if (@file_put_contents($testFile, 'test') && @unlink($testFile)) {
        $checks['storage'] = 'ok';
    } else {
        $checks['storage'] = 'failed: not writable';
    }

    $allHealthy = !in_array(false, array_map(
        fn ($v) => $v === 'ok',
        $checks
    ), true);

    return response()->json([
        'status' => $allHealthy ? 'healthy' : 'unhealthy',
        'checks' => $checks,
    ], $allHealthy ? 200 : 503);
});
# Dockerfile healthcheck
# Note: PHP-FPM speaks FastCGI on port 9000, not HTTP, so curl cannot probe
# it directly. Use a FastCGI client against FPM's ping endpoint instead
# (apk add fcgi, and set ping.path = /ping in the pool config):
HEALTHCHECK --interval=30s --timeout=5s --start-period=5s --retries=3 \
    CMD SCRIPT_NAME=/ping SCRIPT_FILENAME=/ping REQUEST_METHOD=GET \
        cgi-fcgi -bind -connect 127.0.0.1:9000 || exit 1
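Separately from any application-level endpoint, PHP-FPM has built-in ping and status endpoints that are cheap to enable and well suited to liveness probes. Both are off by default (pool config sketch):

```ini
; /usr/local/etc/php-fpm.d/www.conf
; /ping answers "pong" if FPM can dispatch requests
ping.path = /ping
; /status reports pool metrics (active processes, listen queue, ...)
pm.status_path = /status
```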
Lesson 6: Separate concerns with multiple containers
Don't try to run everything in one container. The "one process per container" philosophy exists for good reasons.
A typical Laravel deployment might have:
# docker-compose.prod.yml
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - static_assets:/var/www/html/public:ro
    depends_on:
      - php

  php:
    image: myapp:latest
    environment:
      - APP_ENV=production

  worker:
    image: myapp:latest
    command: php artisan queue:work
    environment:
      - APP_ENV=production

  scheduler:
    image: myapp:latest
    command: php artisan schedule:work
    environment:
      - APP_ENV=production

volumes:
  static_assets:
Each container has a single responsibility:
- nginx: Serves static files, proxies PHP requests
- php: Handles HTTP requests via PHP-FPM
- worker: Processes queue jobs
- scheduler: Runs scheduled tasks
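For this split to work, nginx must hand PHP requests to the php container over FastCGI. A minimal location block, assuming the compose service name php and the document root used in the Dockerfile earlier:

```nginx
location ~ \.php$ {
    # "php" resolves via Docker's embedded DNS to the php service
    fastcgi_pass php:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;
}
```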
Lesson 7: Environment configuration
Never bake secrets into images. Use environment variables:
# Don't do this - the secret is baked into the image layers
ENV APP_KEY=base64:your-key-here # BAD!
# Don't do this either - ENV is resolved at build time, not runtime,
# so it bakes in whatever build-arg value (or empty string) existed
# ENV APP_KEY=${APP_KEY}
# Do this instead - inject at runtime
docker run -e APP_KEY="$APP_KEY" myapp:latest
For Laravel, consider the configuration caching carefully:
# Build-time caching (env must match at runtime)
RUN php artisan config:cache
# OR runtime caching (via entrypoint)
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
#!/bin/sh
# docker-entrypoint.sh
set -e

# Cache configuration at container start
php artisan config:cache
php artisan route:cache
php artisan view:cache

# Execute the main command
exec "$@"
Lesson 8: Image tagging strategy
Don't just use :latest. It's impossible to track what's deployed.
# Good tagging strategy
docker build -t myapp:${GIT_SHA} -t myapp:${GIT_TAG} .
# Example tags:
# myapp:abc123def (commit hash - always unique)
# myapp:v1.2.3 (semantic version - for releases)
# myapp:main (branch - for development)
Your CI pipeline should push multiple tags:
# GitHub Actions
- name: Build and push
  uses: docker/build-push-action@v5
  with:
    push: true
    tags: |
      myregistry/myapp:${{ github.sha }}
      myregistry/myapp:${{ github.ref_name }}
Lesson 9: Layer caching in CI
Docker builds can be slow in CI without proper caching. Use BuildKit and cache mounts:
# syntax=docker/dockerfile:1.4
# Cache Composer downloads
RUN --mount=type=cache,target=/root/.composer/cache \
    composer install --no-dev --optimize-autoloader
# Cache npm downloads
RUN --mount=type=cache,target=/root/.npm \
    npm ci
# GitHub Actions with layer caching
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3

- name: Build and push
  uses: docker/build-push-action@v5
  with:
    cache-from: type=gha
    cache-to: type=gha,mode=max
Lesson 10: Debugging in production
You will need to debug production containers. Make it possible:
# Include debugging tools in a separate stage
FROM production AS debug
USER root
RUN apk add --no-cache \
    strace \
    tcpdump \
    curl \
    vim
USER www-data
But never deploy the debug image to production - use it only for troubleshooting:
# Deploy production image normally
kubectl set image deployment/myapp myapp=myregistry/myapp:abc123
# For debugging, build the debug stage explicitly...
docker build --target debug -t myregistry/myapp:abc123-debug .
# ...and attach it to the running pod as an ephemeral container
kubectl debug -it pod/myapp-xxx --image=myregistry/myapp:abc123-debug
Common pitfalls to avoid
- Storing sessions in files: Use Redis or database sessions - containers are ephemeral
- Local file uploads: Use S3 or another object store
- Running Composer in production: Install dependencies at build time
- Ignoring container logs: Set up proper log aggregation
- Hardcoding hostnames: Use environment variables and DNS
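The session and upload pitfalls reduce, for Laravel, to pointing the relevant drivers at external services via environment variables. Exact key names vary slightly between Laravel versions; these are illustrative:

```ini
# container environment - keep all state out of the container
SESSION_DRIVER=redis
CACHE_STORE=redis
QUEUE_CONNECTION=redis
FILESYSTEM_DISK=s3
```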
The production checklist
Before deploying a PHP Docker container to production, verify:
- Multi-stage build with minimal final image
- Running as non-root user
- Proper signal handling for graceful shutdown
- Health check endpoint that verifies dependencies
- Logs going to stdout/stderr
- OPcache configured for production
- Secrets injected via environment, not baked in
- Proper resource limits configured
- Persistent storage (sessions, uploads) using external services
Need help containerizing your PHP application for production? I've deployed Docker containers for organizations of all sizes. Let's talk.