You will learn a practical path to run a simple Flask web app that increments a hit counter stored in a Redis container. This Docker Compose local development setup keeps your services consistent with production so you can test real behavior quickly.
When you define your stack with docker compose, you get a single command to start multiple containers, networks, and volumes. That control helps you focus on code and not on repetitive commands.
Remember: every file in your build context is sent to the daemon when an image is built. Adding a .dockerignore file cuts build time and avoids shipping unnecessary files into the image.
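As a sketch, a .dockerignore for a typical Python project might exclude version-control data, caches, and local secrets (entries are illustrative):

```
# .dockerignore — these paths never reach the build context
.git
__pycache__/
*.pyc
.env
```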
This short guide shows how the Flask app, a Redis service, and shared volumes work together. You’ll see how to set environment variables, expose ports, and watch logs so changes feel immediate.
Understanding the Value of Containerized Development
Running services in isolated containers ensures your application behaves the same on every machine. That predictability saves time when you troubleshoot or onboard teammates.
Official images for MongoDB and Redis give you ready-to-run databases without installing server packages on your host. They keep your local development environment clean and reproducible.
- Using a container for your database isolates data and avoids conflicts across projects.
- When you use docker compose to manage services, you can define networks so containers talk to each other securely.
- Named volumes preserve data between runs so you don’t lose important information when you stop services.
- A container-first workflow lets you share a single configuration file and recreate the same environment on any machine.
- Modern applications often run multiple services; containers make that multi-service stack consistent and reproducible.
This approach reduces “it works on my machine” surprises. It also makes it easier to set local environment variables and test the full application stack before you push changes.
Essential Prerequisites for Your Environment
Before you run any services, confirm your machine has the essential tools installed to manage containers and repositories. This ensures a predictable development environment and faster troubleshooting.
Required Software
Install Docker Desktop and Git so you can build images and track code changes. Use an IDE like VS Code to edit files and run commands from the same window.
Project Structure
Create a dedicated project directory to keep your Dockerfile and other configuration files separate from source code. A clear layout reduces mistakes and speeds onboarding.
- Use root folders such as notes-service, reading-list-service, and yoda-ui to mirror a microservices layout.
- Keep config files at the repository root so tools find them without extra paths.
- Follow a standard directory layout so other developers can read your code and set local environments quickly.
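For the Flask-and-Redis example in this guide, one possible layout (names are illustrative) is:

```
project/
├── compose.yaml        # service definitions at the repository root
├── .dockerignore
├── .env                # local settings, kept out of version control
└── web/
    ├── Dockerfile
    ├── requirements.txt
    └── app.py
```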
Building Your First Docker Compose Local Development Setup
Define one YAML file that lists every service your application needs. That compose file declares networks, named volumes, and how each service starts.
When you run docker compose up, the engine builds images as needed and starts all containers in dependency order. This single command replaces many manual steps and keeps the stack consistent across machines.
- Create a versioned file that defines services, ports, and volumes for your development environment.
- Map a host port to the container so you can open the application in your browser at localhost.
- Use COPY (or ADD) in your Dockerfile so the build copies your source code into the image built for your app.
- Commit one shared compose file so every developer runs the same stack and shares the same data paths.
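Assuming the app code lives in a web/ directory with a requirements.txt, its Dockerfile might look like this sketch:

```dockerfile
# Dockerfile — builds the Flask app image (paths and versions are illustrative)
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between code edits
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the source code into the image
COPY . .

EXPOSE 5000
CMD ["python", "app.py"]
```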
Start with this simple step, then iterate: add health checks, tweak networks, and track logs while running your services.
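Under these assumptions (a Flask app in ./web listening on port 5000), a minimal compose.yaml might look like:

```yaml
# compose.yaml — minimal two-service stack (names and paths are illustrative)
services:
  web:
    build: ./web          # builds the image from web/Dockerfile
    ports:
      - "8000:5000"       # host port 8000 -> container port 5000
    environment:
      - REDIS_HOST=redis  # the service name doubles as a hostname on the default network
      - REDIS_PORT=6379
    depends_on:
      - redis

  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data  # named volume keeps data across restarts

volumes:
  redis-data:
```

Run docker compose up from the directory containing this file and open localhost:8000 in your browser.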
Managing Configuration with Environment Variables
Keep sensitive settings out of your YAML and code by moving them into a separate environment file. This approach makes it easy to change hosts, ports, and keys without touching the main compose file or rebuilding an image.
Keeping Secrets Secure
The .env file lets you store values like REDIS_HOST and REDIS_PORT outside the repository. Docker Compose reads that file automatically and interpolates its values into the compose file at runtime.
Separate configuration from your code so you can set different variables per environment. That reduces the risk of committing secrets into version control and keeps your final image free of credentials.
- Use a .env for credentials and service endpoints to avoid leaking secrets.
- Define environment variables in the compose file to wire your application to the correct container and port.
- Change settings without editing the Dockerfile by updating a single file for each environment.
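As a sketch, a .env file beside the compose file can hold the connection settings, which Compose substitutes via ${VAR} syntax:

```yaml
# compose.yaml fragment — values come from a .env file in the same directory,
# e.g. a file containing the two lines:  REDIS_HOST=redis  and  REDIS_PORT=6379
services:
  web:
    environment:
      - REDIS_HOST=${REDIS_HOST:-redis}   # ${VAR:-default} falls back when unset
      - REDIS_PORT=${REDIS_PORT:-6379}
```

Keep the .env file in .gitignore so each environment can carry its own values without touching the compose file.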
Implementing Health Checks to Prevent Startup Races
A simple health check can stop race conditions that crash your app during startup.
A startup race happens when your web application tries to connect to a service before that container is ready. Add a healthcheck block to the compose file so the running container is marked healthy only after it answers a quick test.
For Redis, the test uses the command redis-cli ping. That command runs inside the container and confirms the database accepts requests before the application starts, so you avoid crashes caused by premature connections.
- Adjust interval, timeout, and retries so the service has time to boot.
- Every healthcheck step is logged, giving you visibility into failures.
- When a container fails its health check, services that depend on it wait instead of starting against a broken backend.
Implement the block in your compose file next to the service definition. This small step makes the stack more stable and keeps your development workflow predictable.
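Putting the pieces together, a healthcheck block with a gated dependency might look like this sketch (timings are illustrative):

```yaml
# compose.yaml fragment — start web only after Redis answers PONG
services:
  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]  # runs inside the Redis container
      interval: 5s                        # how often to probe
      timeout: 3s                         # how long each probe may take
      retries: 5                          # failures allowed before "unhealthy"

  web:
    build: ./web
    depends_on:
      redis:
        condition: service_healthy        # wait for a passing health check
```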
Accelerating Workflow with Compose Watch
A watch mode saves you time by syncing edits from your editor into the running application instantly.
Add a develop section with a watch block to your compose file so edits in your project directory mirror into the running container. This sync pushes code changes without rebuilding the image each time.
Syncing Code Changes
When you edit a file, the sync+restart process refreshes the service so the application shows updates immediately. You keep your browser open and validate UI or API changes in seconds.
Rebuilding Dependencies
The watch block watches dependency files like requirements.txt. When that file changes, a rebuild triggers automatically so dependency installs stay current in the environment.
- Mount code as a volume to avoid full image rebuilds for small edits.
- Let the watch action restart only the affected service to save time across the stack.
- Use docker compose logs to confirm when changes are synced or when a rebuild runs.
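Combining both behaviors, a watch configuration for the web service might look like this sketch (paths are illustrative):

```yaml
# compose.yaml fragment — Compose Watch for the web service
services:
  web:
    build: ./web
    develop:
      watch:
        - action: sync             # copy edited files into the running container
          path: ./web
          target: /app
          ignore:
            - __pycache__/
        - action: rebuild          # dependency changes need a fresh image
          path: ./web/requirements.txt
```

Start it with docker compose watch; code edits under ./web sync into /app, while a change to requirements.txt triggers a rebuild.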
Ensuring Data Persistence with Named Volumes
Named volumes keep your critical database files safe when containers stop or are removed.
Define a named volume such as redis-data in your compose file to map a path inside the container to persistent host storage. This ensures the Redis service keeps its state even after you bring the stack down.
Without persistent volumes, any data written inside a running container is lost when you remove that container or rebuild an image. Using named volumes decouples your data lifecycle from the container lifecycle so you can restart, update, or replace a container without losing state.
- Named volumes let your database survive container restarts and crashes.
- Declare volumes in the compose file to map container paths to host storage.
- To remove volumes and clear stored data, run docker compose down -v.
This small step makes your application stack more reliable and gives you a predictable environment for testing code and networks.
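In the compose file, the mapping is two declarations: a volume entry on the service and a top-level name, as in this sketch:

```yaml
# compose.yaml fragment — named volume for Redis persistence
services:
  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data   # Redis writes its dump/AOF files under /data

volumes:
  redis-data:              # managed by Docker; survives `docker compose down`
```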
Modularizing Your Stack with Include
Split your stack into focused files so each team owns a clear piece of the configuration.
Splitting Services Across Files
The include top-level element lets your main compose.yaml reference additional YAML files, such as infra.yaml. This keeps the main compose file small and readable.
Store infrastructure services in an infra directory and application services in a separate file. When you include an external file, all services join the same default network and can reach each other by service name.
- Use separate files so teams manage their services without touching unrelated code.
- Place volumes and network definitions in infra.yaml for clear separation of concerns.
- Keep image and service overrides in the application file to speed iterative changes.
This modular approach scales well as applications grow. It reduces merge conflicts, improves clarity, and keeps configuration manageable across teams and environments.
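As a sketch, the main file can include the infrastructure definitions and keep only the application services (file names are illustrative):

```yaml
# compose.yaml — top-level include pulls in separately owned files
include:
  - infra/infra.yaml   # e.g. redis, volumes, and networks live here

services:
  web:
    build: ./web
    depends_on:
      - redis          # reachable by name: included services share the default network
```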
Validating and Inspecting Your Running Services
A quick configuration check gives you a clear picture of how services, volumes, and networks will behave. Run a single command to print the fully resolved configuration and confirm that variables and file merges match your expectations.
Use docker compose config to see the final compose file with interpolated environment values. The output shows service definitions, default network and volume names, and the image and port mappings for each container.
Stream logs from any running container with docker compose logs -f to watch startup events and runtime errors in real time. This streaming view helps you trace issues to a specific service or code change.
- Verify merged files and variables before you start dependent services.
- Confirm volumes and data paths to avoid accidental data loss.
- Watch logs to catch misconfigurations early and keep control of your stack.
- Use the config output to audit default names and assigned networks.
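These checks can be run as a short routine (a running Docker daemon is assumed; web is an illustrative service name):

```shell
# Print the fully resolved configuration (variables interpolated, files merged)
docker compose config

# Follow logs for the whole stack, or for a single service to reduce noise
docker compose logs -f
docker compose logs -f web

# List running services with their state and published ports
docker compose ps
```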
As a final step, treat these commands as part of your routine. They give fast visibility into the environment and reduce surprises when you push changes or scale services.
Debugging Applications Inside Containers
Debugging inside the container gives direct access to the exact environment your application uses. This access helps you confirm the image that was built and the active configuration without guesswork.
Streaming Logs
Start by streaming logs from services to catch errors as they happen. Seeing the response from your application in real time helps you pinpoint failing code paths.
- Follow logs to watch startup events and runtime errors.
- Filter output for a single service to reduce noise and save time.
- Use timestamps to correlate logs with recent changes or deploys.
Executing Commands
When logs are not enough, run commands inside a running container. Use a shell session to inspect files, check environment variables, and run test commands against connected services.
- Open an interactive shell so you can run diagnostics without stopping the service.
- Run env to verify environment variables and confirm config values used by the application.
- Test database connectivity from the container to validate service discovery and data paths.
- Inspect files that the application reads to confirm the correct version and permissions.
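The steps above map to a handful of commands (service names and paths are illustrative; a running stack is assumed):

```shell
# Open an interactive shell in the running web container
docker compose exec web sh

# Or run one-off diagnostics without an interactive session:
docker compose exec web env                  # verify environment variables
docker compose exec web ls -l /app           # inspect files the app reads
docker compose exec redis redis-cli ping     # test database connectivity
```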
Conclusion
A clear process for building images and wiring services makes everyday work faster and less error-prone. Mastering docker compose helps you keep the same image and file behavior across machines so your application runs predictably.
Use named volumes and health checks to protect data and avoid startup races. Split large files with include so teams can change pieces without breaking the whole stack.
When something fails, debug inside the container with the right commands and logs to find the issue quickly. These practices let you build, ship, and run applications with confidence while keeping configuration clean and maintainable.
Spencer Blake is a developer and technical writer focused on advanced workflows, AI-driven development, and the tools that actually make a difference in a programmer’s daily routine. He created Tips News to share the kind of knowledge that senior developers use every day but rarely gets taught anywhere. When he’s not writing, he’s probably automating something that shouldn’t be done manually.