Lesson 14 (60 min)

Day 14: Docker Compose for Full-Stack Orchestration. Success: Up/Down Entire App with One Command.

Welcome back, future architects!

Today, we're taking a monumental leap from running individual components to orchestrating our entire AI-powered CRM system with a single command. This isn't just about convenience; it's about adopting a mindset that scales, a mindset crucial for building distributed systems, whether they handle a thousand requests or a hundred million per second.

In the previous lesson, Day 13, you mastered TanStack Query for efficient data fetching and caching. Now, imagine if every time you wanted to test that beautiful frontend with your backend and database, you had to manually start each service in separate terminal windows. Tedious, error-prone, and unsustainable. Today, we fix that with Docker Compose.

The "Why" Behind Docker Compose: Beyond Localhost Monoliths

For most new engineers, the journey often starts with a single application running on localhost. As we build an AI-powered CRM, we quickly realize it's a collection of services: a React frontend, a Node.js backend API, a PostgreSQL database, perhaps a Redis cache, and eventually, dedicated microservices for AI inference or real-time communication. Each of these is a distinct entity, often with its own dependencies and runtime environment.

In a hyper-scale system, these services live on different machines, communicate over networks, and are managed by sophisticated orchestrators like Kubernetes. But how do you develop and test such a system on your laptop? That's where Docker Compose shines.

Core Concept: Declarative Service Orchestration

Docker Compose allows you to define a multi-container Docker application in a single YAML file. Instead of imperative commands (start this, then start that, link them manually), you use a declarative approach. You describe the desired state of your application—what services it has, what images they use, how they connect, what ports they expose, what volumes they need—and Docker Compose handles the complexity of bringing that state to life.

This is a fundamental shift in thinking. You're no longer thinking about individual processes but about a system composed of interconnected services. This mental model is paramount when you transition to more complex orchestrators in production.
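To make the declarative idea concrete, here is a tiny sketch of a two-service Compose file (the service names, images, and command are illustrative assumptions, not part of our CRM):

```yaml
# Hypothetical minimal example: two services declared as desired state.
# Compose creates a shared network, starts api before web,
# and wires DNS so web can reach api by its service name.
services:
  api:
    image: node:18-alpine          # assumption: any backend image
    command: ["node", "server.js"] # assumption: entry point of the API
  web:
    image: nginx:alpine
    ports:
      - "8080:80"                  # publish container port 80 as host port 8080
    depends_on:
      - api
```

Running docker-compose up against this file converges the system to the declared state; you never script the individual docker run commands yourself.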

How Docker Compose Fits in Our CRM System

Our AI-powered CRM will eventually have:

  • A frontend service (React)

  • A backend API service (Node.js/Express)

  • A database service (PostgreSQL)

  • Later: ai-inference service, realtime-notifications service, etc.

Docker Compose acts as the conductor for this orchestra.

Component Architecture: A Symphony of Containers

[Diagram: the Docker Compose orchestrator places all three services on an internal virtual network (the isolation layer): the frontend (React, port 3000), the backend (Node.js API, port 5000), and the database (PostgreSQL, port 5432). The browser reaches the stack through the frontend's published port.]

Imagine each service (frontend, backend, database) as a musician. Each musician knows their part, but they need a conductor to start them, keep them in sync, and ensure they play together harmoniously. Docker Compose is that conductor.

Our docker-compose.yml file will define:

  1. frontend: Builds from our React app's Dockerfile, exposes a port (e.g., 3000).

  2. backend: Builds from our Node.js app's Dockerfile, exposes a port (e.g., 5000), and depends on the database.

  3. database: Uses a standard PostgreSQL image, maps data to a volume for persistence, and is accessible by the backend.

Control Flow, Data Flow, and State Changes

The startup flow looks like this: docker-compose up → Parse YAML Manifest → Build / Pull Images → Start Database → Start Backend → Start Frontend → Environment Healthy.

When you run docker-compose up:

  • Control Flow: Docker Compose reads the YAML file. It starts services in the correct order (e.g., database before backend). It manages their networking, ensuring they can find each other by service name (e.g., the backend can connect to database:5432).

  • Data Flow: Data flows between services over the internal Docker network. The frontend makes API calls to the backend. The backend queries and writes to the database. Critically, because they're on a shared network, you reference services by their service name defined in docker-compose.yml (e.g., http://backend:5000 from the frontend, or postgres://user:pass@database:5432/crmdb from the backend). This abstraction is a core insight: you don't need to know actual IP addresses.

  • State Changes: docker-compose up brings the system to a Running state. docker-compose down transitions it back to Stopped, cleaning up resources. During development, you might restart individual services or the whole stack.

Real-World Production Systems and Docker Compose

While Docker Compose is primarily a development and testing tool, the principles it teaches are directly transferable to production.

  • Local Simulation: It allows you to run a near-production-like environment on your laptop, catching integration bugs early.

  • Team Collaboration: Every developer on the team can spin up the identical environment, eliminating "it works on my machine" issues.

  • Stepping Stone to Kubernetes: Understanding services, networks, volumes, and dependencies in Compose provides a solid foundation for grasping Kubernetes Deployments, Services, Pods, and PersistentVolumes. For ultra-high-scale systems, Kubernetes or similar orchestrators are essential, but Docker Compose builds the mental model.

The real insight here is that even though our CRM isn't handling 100 million requests per second yet, by using Docker Compose, we're building it with the architectural patterns of systems that do. We're practicing distributed system management from Day 1.

Assignment: Orchestrate Your CRM

Your task is to create a docker-compose.yml file that orchestrates our frontend, backend, and database services.

Steps:

  1. Create frontend/Dockerfile: A simple Dockerfile for our React app (from previous lessons).

  2. Create backend/Dockerfile: A simple Dockerfile for our Node.js API (from previous lessons).

  3. Create docker-compose.yml in the project root.

  • Define three services: frontend, backend, database.

  • For frontend and backend, use build: ./<service_dir> to build from their respective Dockerfiles.

  • For database, use the postgres:13-alpine image.

  • Map necessary ports (e.g., 3000:3000 for frontend, 5000:5000 for backend).

  • Configure the database service with environment variables for user, password, and database name.

  • Add a volumes section for the database to persist data.

  • Ensure the backend service depends_on the database service.

  • Set up a network for services to communicate.
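One caveat worth knowing as you wire this up: depends_on only controls start order; it does not wait for PostgreSQL to actually accept connections. A sketch of the long-form syntax that does wait (using the same user and database names as this lesson's examples) looks like:

```yaml
# Sketch: make the backend wait until PostgreSQL passes a health check.
services:
  backend:
    build: ./backend
    depends_on:
      database:
        condition: service_healthy
  database:
    image: postgres:13-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d crmdb"]
      interval: 5s
      timeout: 3s
      retries: 5
```

For this assignment the short form of depends_on is enough; the health-checked form becomes important once your backend crashes on startup if the database isn't ready.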

Success Criteria:

  • You can run docker-compose up --build from the root directory.

  • All three services start successfully.

  • You can access the frontend in your browser (e.g., http://localhost:3000).

  • The frontend successfully fetches contacts from the backend.

  • The backend successfully connects to the PostgreSQL database.

  • You can run docker-compose down to stop and remove all services.

Solution Hints

  • Dockerfile basics: Remember FROM, WORKDIR, COPY, RUN, EXPOSE, CMD.

  • docker-compose.yml structure:

yaml
version: '3.8'
# Note: Compose automatically creates a default network and attaches every
# service to it, so they can reach each other by service name out of the box.
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - backend
    environment:
      # This is crucial: the frontend needs to know where the backend is.
      # In Docker Compose, service names are resolvable hostnames.
      REACT_APP_API_URL: http://backend:5000

  backend:
    build: ./backend
    ports:
      - "5000:5000"
    depends_on:
      - database
    environment:
      DATABASE_URL: postgres://user:password@database:5432/crmdb

  database:
    image: postgres:13-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: crmdb
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:
  • Environment Variables: Pay close attention to how REACT_APP_API_URL for the frontend and DATABASE_URL for the backend are configured. The service names (backend, database) act as hostnames within the Docker Compose network. This is a critical insight for inter-service communication.

  • Persistence: The volumes section for db_data ensures your database state isn't lost when you docker-compose down.
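As a companion sketch, a minimal backend/Dockerfile might look like the following. File names such as server.js are assumptions about your project layout; adapt them to your actual entry point:

```dockerfile
# Minimal Node.js backend image -- adjust to your project's layout.
FROM node:18-alpine
WORKDIR /app

# Copy manifests and install dependencies first, so this layer is
# cached between source-code changes.
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .
EXPOSE 5000
CMD ["node", "server.js"]
```

The frontend Dockerfile follows the same pattern with its own build step; keeping dependency installation in its own layer is what makes rebuilds fast during development.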

This lesson empowers you with a powerful tool for local development and a fundamental understanding of service orchestration. You're now thinking like a systems architect, not just a coder. Onward to Day 15, where we'll secure our application with environment variables!

State Machine

Docker Compose lifecycle: the stack begins in IDLE / DOWN. Running docker-compose up moves it to STARTING, and once services report healthy it reaches RUNNING. A service exiting with a non-zero code transitions to FAILED, from which a restart resolves it back toward RUNNING. Running docker stop or docker-compose down enters STOPPING, and after cleanup and volume unmounting the stack returns to IDLE / DOWN.