Docker Guide for Macumba Travel Backend¶
This guide provides a comprehensive overview of Docker usage in the Macumba Travel Backend project, which consists of a FastAPI backend and a Node.js backend.
Table of Contents¶
- Introduction to Docker
- Docker Components in Our Project
- FastAPI Backend Docker Setup
- Node.js Backend Docker Setup
- Docker Compose Configuration
- Getting Started With Docker
- Common Docker Commands
- Troubleshooting
- Best Practices
- Docker in Production
- Further Learning Resources
Introduction to Docker¶
Docker is a platform that uses containerization technology to package applications and their dependencies together. This packaging ensures consistency across different environments (development, testing, production) and makes deployment simpler.
Key Benefits¶
- Consistency: "It works on my machine" is no longer an issue
- Isolation: Applications run in isolated environments
- Efficiency: Containers share the host OS kernel, making them lightweight
- Scalability: Easy to scale applications horizontally
- Version Control: Images can be versioned and tracked
Core Concepts¶
- Container: A lightweight, standalone, executable package that includes everything needed to run an application
- Image: A template used to create containers (think of it as a snapshot of a container)
- Dockerfile: A text file containing instructions to build a Docker image
- Docker Compose: A tool for defining and running multi-container Docker applications
- Volume: A persistent data storage mechanism for containers
Docker Components in Our Project¶
Our project uses Docker to containerize several components:
- FastAPI Backend: Our Python-based REST API
- Node.js Backend: Our JavaScript/TypeScript-based services
- PostgreSQL Database: For data persistence
- Redis: For caching and session management
- Adminer: For database management through a web interface
FastAPI Backend Docker Setup¶
Dockerfile Analysis¶
The FastAPI backend uses a Dockerfile located at the root of the project:
FROM python:3.12-slim
WORKDIR /app
# Install system dependencies including PostgreSQL development libraries and Rust
RUN apt-get update && apt-get install -y \
postgresql-client \
libpq-dev \
gcc \
curl \
build-essential \
&& curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Add Rust to PATH
ENV PATH="/root/.cargo/bin:${PATH}"
# Install Poetry
RUN pip install poetry
# Copy only pyproject.toml and poetry.lock (if it exists)
COPY pyproject.toml poetry.lock* ./
# Configure poetry to not use a virtual environment in the container
RUN poetry config virtualenvs.create false
# Install dependencies
RUN poetry install --no-interaction --no-ansi --no-root
# Copy application code
COPY . .
# Set Python environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV PORT=8000
# Set a generous timeout for uvicorn to give the app time to initialize
ENV TIMEOUT=120
# Command to run the application
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--timeout-keep-alive", "120"]
Let's break down what this Dockerfile does:

1. Base Image: Uses the Python 3.12 slim image as the foundation
2. Setup: Sets the working directory to /app
3. Dependencies: Installs system packages including the PostgreSQL client, Rust, and build tools
4. Poetry: Installs Poetry (Python dependency manager) and configures it
5. Project Dependencies: Copies and installs Python dependencies using Poetry
6. Code: Copies the application code into the container
7. Environment: Sets environment variables for Python behavior
8. Startup Command: Runs the FastAPI application using Uvicorn
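To try the image outside of Docker Compose, you can build and run it by hand; the tag name here is illustrative, not one the project actually uses:

# Build the image from the repository root
docker build -t macumba-fastapi .

# Run it, publishing the port from the CMD and reusing the same .env file Compose uses
docker run --rm -p 8000:8000 --env-file .env macumba-fastapi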
Node.js Backend Docker Setup¶
Dockerfile Analysis¶
The Node.js backend uses a Dockerfile located in the node-backend directory:
FROM node:20-alpine
WORKDIR /app
# Install postgresql-client for database interaction
RUN apk add --no-cache postgresql-client
# Copy package.json and package-lock.json
COPY package*.json ./
# Install dependencies
RUN npm ci
# Copy source code
COPY . .
# Expose port 8000
EXPOSE 8000
# Use a shell script as an entrypoint
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
# Default command
CMD ["npm", "start"]
Key components:

1. Base Image: Uses Node.js 20 on Alpine (a lightweight Linux distribution)
2. Setup: Sets the working directory to /app
3. Dependencies: Installs the PostgreSQL client and Node.js dependencies
4. Code: Copies the application code into the container
5. Port: Exposes port 8000 for incoming connections
6. Entrypoint: Uses a custom script (docker-entrypoint.sh) for initialization
7. Startup Command: Runs the Node.js application with npm start
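This image can likewise be built and run manually; the tag is illustrative, and the 8001:8000 port mapping mirrors the Docker Compose setup shown below:

# Build from the node-backend directory
docker build -t macumba-node ./node-backend

# Run on host port 8001 to avoid clashing with the FastAPI backend
docker run --rm -p 8001:8000 --env-file ./node-backend/.env macumba-node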
Entry Point Script¶
The Node.js backend uses an entry point script (docker-entrypoint.sh) that:

1. Waits for the database to be ready
2. Runs migrations in development mode
3. Executes the command specified in the Dockerfile (or overridden by Docker Compose)
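The script itself is not reproduced here, but a minimal sketch of this pattern looks like the following; the DB_HOST/DB_PORT variable names and the npm run migrate command are assumptions, not taken from the project:

#!/bin/sh
set -e

# 1. Wait until PostgreSQL accepts connections (pg_isready comes from
#    the postgresql-client package installed in the Dockerfile)
until pg_isready -h "${DB_HOST:-db}" -p "${DB_PORT:-5432}"; do
  echo "Waiting for database..."
  sleep 1
done

# 2. Run migrations in development mode (migration command assumed)
if [ "$NODE_ENV" = "development" ]; then
  npm run migrate
fi

# 3. Hand control to CMD from the Dockerfile, or the command from Docker Compose
exec "$@"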
Docker Compose Configuration¶
Our project uses Docker Compose to orchestrate multiple containers. We have three Docker Compose files for different scenarios:
1. docker-compose.yml (FastAPI Only)¶
This file sets up just the FastAPI backend with its dependencies:

- FastAPI Backend: Builds from the main Dockerfile
- PostgreSQL: For database services
- Redis: For caching
2. docker-compose.node.yml (Node.js Only)¶
This file sets up just the Node.js backend with its dependencies:

- Node.js Backend: Builds from the Node.js Dockerfile
- PostgreSQL: For database services
- Redis: For caching
- Adminer: For database management
3. docker-compose.both.yml (Both Backends)¶
This file sets up both backends with shared dependencies:

- FastAPI Backend: On port 8000
- Node.js Backend: On port 8001
- PostgreSQL: Shared database
- Redis: Shared cache
- Adminer: For database management
Example configuration from docker-compose.both.yml:
services:
fastapi-backend:
build:
context: .
dockerfile: Dockerfile
ports:
- "8000:8000"
env_file:
- .env
environment:
- PYTHONPATH=/app
volumes:
- ./:/app
depends_on:
- db
- redis
node-backend:
build:
context: ./node-backend
dockerfile: Dockerfile
ports:
- "8001:8000" # Different host port to avoid conflicts
env_file:
- ./node-backend/.env
environment:
- NODE_ENV=development
- PORT=8000
volumes:
- ./node-backend:/app
- /app/node_modules
depends_on:
- db
- redis
restart: always
command: npm run dev
# ... database, redis, adminer configurations
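The elided db, redis, and adminer services follow standard patterns. As a rough sketch only (the image tags are assumptions, though the default credentials match the Adminer login details listed later in this guide):

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: macumba
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine

  adminer:
    image: adminer
    ports:
      - "8080:8080"

volumes:
  postgres_data: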
Getting Started With Docker¶
Prerequisites¶
- Install Docker Desktop
- Clone the Macumba Travel Backend repository
- Set up environment variables (copy .env.example to .env and customize as needed)
Starting the Application¶
FastAPI Backend Only¶
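With the default file name, Docker Compose needs no -f flag (the --build flag is optional but rebuilds images after Dockerfile changes):

docker-compose up --build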
Node.js Backend Only¶
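Point Docker Compose at the Node.js-specific file:

docker-compose -f docker-compose.node.yml up --build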
Both Backends¶
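Use the combined file to run both backends against the shared database and cache:

docker-compose -f docker-compose.both.yml up --build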
Accessing the Application¶
- FastAPI Backend: http://localhost:8000
- Node.js Backend: http://localhost:8001 (when using docker-compose.both.yml)
- Adminer (Database UI): http://localhost:8080
  - System: PostgreSQL
  - Server: db
  - Username: postgres (or as set in .env)
  - Password: postgres (or as set in .env)
  - Database: macumba (or as set in .env)
Stopping the Application¶
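Stop and remove the containers with docker-compose down:

# For the default docker-compose.yml
docker-compose down

# For the variant files, pass the same -f flag used to start them
docker-compose -f docker-compose.both.yml down

# Add -v to also remove named volumes (this deletes database data)
docker-compose down -v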
Common Docker Commands¶
Container Management¶
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# Stop a container
docker stop <container_id>
# Remove a container
docker rm <container_id>
Image Management¶
# List images
docker images
# Remove an image
docker rmi <image_id>
# Clean up unused images
docker image prune
Logs and Debugging¶
# View container logs
docker logs <container_id>
# View logs with follow option
docker logs -f <container_id>
# Execute a command in a running container
docker exec -it <container_id> <command>
# Open a shell in a running container
docker exec -it <container_id> bash # or sh for Alpine-based images
Docker Compose Specific¶
# Build images without starting containers
docker-compose build
# Start containers in detached mode
docker-compose up -d
# View logs from all containers
docker-compose logs
# View logs from a specific service
docker-compose logs <service_name>
# Stop and remove containers
docker-compose down
# Stop and remove containers, networks, volumes, and images
docker-compose down -v --rmi all
Troubleshooting¶
Common Issues and Solutions¶
Port Conflicts¶
Issue: "Error starting userland proxy: port is already allocated" Solution: Change the port mapping in docker-compose.yml or stop the application using that port
Database Connection Issues¶
Issue: "Connection refused" when connecting to the database Solution: 1. Check if the database container is running 2. Verify database credentials in .env file 3. Ensure that your application is waiting for the database to be ready
Volume Permission Issues¶
Issue: Permission denied errors when writing to mounted volumes

Solution: Adjust permissions on the host directory or use named volumes instead of bind mounts.
Container Not Starting¶
Issue: Container exits immediately after starting

Solution:

1. Check the logs with docker logs <container_id>
2. Make sure environment variables are correctly set
3. Verify that dependencies are available
Best Practices¶
Docker Best Practices¶
- Use Specific Tags: Avoid the latest tag in production; pin specific versions
- Optimize Dockerfile: Use multi-stage builds and layer caching
- Minimize Image Size: Remove unnecessary files and use lightweight base images
- Use Environment Variables: For configuration that changes between environments
- Security: Don't run containers as root, scan images for vulnerabilities
- Health Checks: Implement health checks to monitor container status
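For the Health Checks point, a Dockerfile-level check for the FastAPI image might look like this; the /health endpoint is an assumption, not a route confirmed by the project (curl is already installed by the Dockerfile):

HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost:8000/health || exit 1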
Project-Specific Best Practices¶
- Development Mode:
  - Use volume mounts for code to enable hot reloading
  - Set NODE_ENV=development or PYTHONPATH=/app for development-specific features
- Production Mode:
  - Build optimized images without development dependencies
  - Use proper logging configurations
  - Implement proper health checks
- Environment Variables:
  - Never commit sensitive information in Dockerfiles or images
  - Use .env files for development and secrets management tools for production
Docker in Production¶
In production, we deploy our Docker containers to Kubernetes. Key differences from development:
- Environment Variables: Production credentials and configuration
- Resource Limits: Specific CPU and memory limits (see the sketch after this list)
- Scaling: Multiple replicas for high availability
- Networking: More complex network policies and ingress rules
- Logging & Monitoring: Integration with monitoring systems
CI/CD Process¶
Our continuous integration pipeline:

1. Builds Docker images from the Dockerfile
2. Runs tests in containers
3. Pushes images to our container registry
4. Deploys to staging/production environments
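In command form, the build-test-push steps boil down to something like the following; the registry URL, image name, and test command are placeholders, since the real configuration lives in the CI pipeline:

# Build an image tagged with the current commit SHA
docker build -t registry.example.com/macumba/fastapi-backend:${GIT_SHA} .

# Run the test suite inside the freshly built image (test command assumed)
docker run --rm registry.example.com/macumba/fastapi-backend:${GIT_SHA} pytest

# Push the image to the registry for deployment
docker push registry.example.com/macumba/fastapi-backend:${GIT_SHA}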
Further Learning Resources¶
Docker Documentation¶
- Official Docker documentation (docs.docker.com)
- Dockerfile reference
- Docker Compose documentation
Interactive Learning¶
- Docker Labs - Hands-on tutorials
- Play with Docker - Online Docker playground
Books and Courses¶
- "Docker Deep Dive" by Nigel Poulton
- "Docker in Practice" by Ian Miell and Aidan Hobson Sayers
- Docker for Developers (Pluralsight)
Community Resources¶
- Docker Community Forums
- Stack Overflow's Docker tag