From GitHub to Production: How I Deploy My Projects

When I started sharing projects on GitHub, I quickly realized something important: real software doesn’t stop at git push origin main. What matters is whether that project can run reliably in production, handle updates, and recover from failure.

In this article, I’ll walk through how I usually take a project from GitHub all the way to a live production environment.

Preparing the Repository

A clean repository is the first step. For every project, I make sure to include:

  • README.md - clear instructions on what the project does and how to run it.
  • .gitignore - to keep secrets, local configs, and compiled files out of Git.
  • requirements.txt / pyproject.toml - pinned dependencies for reproducibility.
  • Dockerfile - so the app runs the same way everywhere.
  • docker-compose.yml (optional) - for local testing with services like Postgres or Redis (a minimal example follows below).

This way, anyone (including me on a new machine) can spin up the project in minutes.
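
For example, a bare-bones docker-compose.yml for a Django app with Postgres might look like this (service names, image tags, and credentials are illustrative placeholders, not from a specific project):

services:
  web:
    build: .
    ports:
      - "8000:8000"
    env_file: .env              # keeps secrets out of Git and the image
    depends_on:
      - db

  db:
    image: postgres:16
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: changeme   # placeholder, use a real secret locally
    volumes:
      - pgdata:/var/lib/postgresql/data   # survives container restarts

volumes:
  pgdata: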

Containerization with Docker

Docker is at the center of my workflow. I create a production-ready Dockerfile that:

  • Uses a lightweight Python base image.

  • Installs dependencies.

  • Copies the app source code.

  • Exposes the right port for the web server.

For Django projects, I usually add Gunicorn to handle requests efficiently.

Example snippet:

FROM python:3.12-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["gunicorn", "mysite.wsgi:application", "--bind", "0.0.0.0:8000"]

Deployment Targets

Depending on the project, I deploy to:

  • Dockerized VPS (DigitalOcean, Hetzner, Linode) — gives full control, great for long-term apps.

  • PaaS (Railway, Render, Heroku-style platforms) — quick to set up, good for prototypes.

  • Serverless (Cloud Run, Vercel) — when I need fast scaling without server maintenance.

I prefer Docker on a VPS for more serious apps because it’s flexible and cost-efficient.

Continuous Deployment (CI/CD)

I automate deployment with GitHub Actions. A typical workflow:

  1. On every push to main, GitHub builds the Docker image.

  2. The image is pushed to a container registry.

  3. The production server pulls the new image and restarts the service.

This means updates are as simple as:

git push origin main

…and a few minutes later the new version is live.
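
Here's a trimmed-down sketch of what that workflow can look like. The registry, action versions, secret names, and server paths are placeholders I'm using for illustration, not a drop-in config:

name: deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # 1. Build the Docker image and push it to a registry (GHCR as an example)
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}:latest

      # 2. SSH into the server, pull the new image, restart the service
      - uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: ${{ secrets.SERVER_USER }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            docker pull ghcr.io/${{ github.repository }}:latest
            docker compose -f /srv/app/docker-compose.yml up -d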

Handling Data & Persistence

For databases, I use Docker volumes to ensure data isn’t lost when containers restart.

  • PostgreSQL → stored in a persistent volume.

  • Media files → stored in mounted volumes or external services (S3, GCP buckets).

Backups are scheduled at the server level (e.g., daily dumps sent to cloud storage).
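
In practice that can be as small as a cron-driven shell script; this sketch assumes a Postgres container named app-db and an S3 bucket, both placeholders:

#!/usr/bin/env bash
# Daily Postgres dump, compressed and shipped to object storage.
set -euo pipefail

STAMP=$(date +%F)
BACKUP_FILE="/var/backups/app-${STAMP}.sql.gz"

# Dump from the running database container (name is a placeholder)
docker exec app-db pg_dump -U app app | gzip > "${BACKUP_FILE}"

# Ship the dump to cloud storage (bucket name is a placeholder)
aws s3 cp "${BACKUP_FILE}" "s3://my-app-backups/"

# Prune local dumps older than 7 days
find /var/backups -name 'app-*.sql.gz' -mtime +7 -delete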

Monitoring & Logs

I always set up:

  • Structured logging (so I can filter by error, info, warning).

  • Health checks (via /health/ endpoint).

  • Basic monitoring with tools like UptimeRobot or Grafana (if the project is bigger).

This makes debugging production issues much faster.
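
The health check itself doesn't need to be fancy. A minimal Django version looks like this (the view and URL wiring are a sketch, not code from a specific project):

# views.py
from django.db import connection
from django.http import JsonResponse

def health(request):
    """Return 200 if the app is up and the database answers a trivial query."""
    try:
        with connection.cursor() as cursor:
            cursor.execute("SELECT 1")
        return JsonResponse({"status": "ok"})
    except Exception:
        return JsonResponse({"status": "error"}, status=503)

# urls.py
# urlpatterns = [path("health/", health), ...]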

Deployment Checklist

Before calling a deployment “done,” I make sure:

  • ✅ Code runs in Docker locally.

  • ✅ .env files and secrets are stored securely (never in Git).

  • ✅ HTTPS is enabled (usually via Let’s Encrypt + Nginx; see the sketch after this list).

  • ✅ Backups are running.

  • ✅ Monitoring alerts are active.
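
On the HTTPS point, the Nginx side is typically a small reverse-proxy config; in this sketch the domain is a placeholder and the certificate paths are the ones certbot (Let’s Encrypt) generates:

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;   # force HTTPS
}

server {
    listen 443 ssl;
    server_name example.com;

    # Certificate paths created by certbot / Let's Encrypt
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8000;   # the Gunicorn container from the Dockerfile above
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}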


Closing Thoughts

For me, deployment is not an afterthought — it’s part of the development process. By treating deployment as seriously as coding, I make sure that my projects are not just “working on my machine,” but actually usable in the real world.

This workflow has helped me deploy ETL pipelines, Django apps, and APIs with confidence — and it keeps improving as I learn from each new project.