
Production-like Preview Environments with FastAPI and EKS

Published on October 12, 2025

All code for this post is available in the fastapi-preview-environments repository

TL;DR

  • Spin up ephemeral, production-like preview environments for a FastAPI app.
  • Restore a PostgreSQL RDS instance from the latest staging snapshot to seed realistic data.
  • Deploy to EKS via a Helm chart with ALB Ingress, HPA, and External Secrets for DB credentials.
  • Simple CRUD API with health checks, containerized via Docker and served by Uvicorn.

Architecture

  • App: FastAPI + SQLAlchemy + Pydantic
  • Database: RDS PostgreSQL restored from the latest automated snapshot of a staging DB
  • Orchestration: AWS CDK (Python) to discover the latest snapshot and create an instance
  • Platform: EKS + ALB Ingress + External DNS
  • Secrets: External Secrets syncs DB creds into Kubernetes
  • Packaging: Docker image built from python:3.12-slim with uv for installs

Required Kubernetes components:

  • External DNS: Automatically adds DNS records per preview environment
  • ALB Ingress: Provides ingress with automatic certificate discovery

Here's the flow:

  1. Developer adds preview label to PR
  2. GitHub Actions workflow triggers
  3. CDK provisions a new RDS instance from the latest staging snapshot
  4. Docker image is built and pushed to ECR
  5. Helm deploys the app to a new namespace with the database connection
  6. ALB ingress exposes the app at preview-{PR_NUMBER}.example.com

Code Structure

  • app/: FastAPI application, SQLAlchemy models, Pydantic schemas, DB wiring
  • cdk/: CDK app and stack that restores an RDS instance from the latest snapshot
  • helm/fastapi-preview-environment/: Helm chart with Deployment, Service, Ingress, HPA, ExternalSecret
  • Dockerfile: container image for the app
  • pyproject.toml: dependencies for uv install in Docker

The Components

FastAPI Application

This project contains a standard FastAPI service with a health check that verifies both API and database connectivity.

# app/main.py
from fastapi import Depends, FastAPI, Response, status
from sqlalchemy import text
from sqlalchemy.orm import Session

app = FastAPI()

# get_db is the session dependency from the app's DB wiring (app/)
@app.get("/health")
def health_check(response: Response, db: Session = Depends(get_db)):
    health_status = {"status": "healthy", "checks": {"api": "ok", "database": "ok"}}
    try:
        db.execute(text("SELECT 1"))
    except Exception as e:
        health_status["status"] = "unhealthy"
        health_status["checks"]["database"] = f"failed: {str(e)}"
        response.status_code = status.HTTP_503_SERVICE_UNAVAILABLE
    return health_status

Containerizing with Docker

The Dockerfile installs dependencies via uv and runs Uvicorn:

FROM python:3.12-slim
WORKDIR /app
RUN apt-get update && apt-get install -y gcc postgresql-client curl \
    && curl -LsSf https://astral.sh/uv/install.sh | sh \
    && rm -rf /var/lib/apt/lists/*
ENV PATH="/root/.local/bin:$PATH"
COPY pyproject.toml .
RUN uv pip install --system --no-cache -r pyproject.toml
COPY app/ ./app/
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

Infrastructure as Code with AWS CDK

AWS CDK provisions all resources for the preview environment. For each preview deployment, CDK automatically creates a database instance (separate from production and staging) by restoring the latest snapshot from the staging environment. The stack also configures required infrastructure components like security groups. This pattern can be extended to add more isolated infrastructure (for example, Redis) per preview.
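The snapshot-discovery step can be sketched in plain Python. Assume the stack lists the staging database's automated snapshots (for example, via boto3's `describe_db_snapshots`) and picks the most recent completed one; the helper below works on dicts shaped like that API response (`DBSnapshotIdentifier`, `Status`, and `SnapshotCreateTime` are fields of the RDS API; the snapshot identifiers are made up for illustration):

```python
from datetime import datetime, timezone

def latest_snapshot(snapshots):
    """Pick the most recent completed snapshot from a
    describe_db_snapshots-style list of snapshot dicts."""
    completed = [s for s in snapshots if s["Status"] == "available"]
    if not completed:
        raise RuntimeError("no completed staging snapshots found")
    return max(completed, key=lambda s: s["SnapshotCreateTime"])

# Example input shaped like the RDS API response (identifiers are illustrative):
snaps = [
    {"DBSnapshotIdentifier": "rds:staging-2025-10-10", "Status": "available",
     "SnapshotCreateTime": datetime(2025, 10, 10, tzinfo=timezone.utc)},
    {"DBSnapshotIdentifier": "rds:staging-2025-10-11", "Status": "available",
     "SnapshotCreateTime": datetime(2025, 10, 11, tzinfo=timezone.utc)},
    # Still in progress, so it must be skipped:
    {"DBSnapshotIdentifier": "rds:staging-2025-10-12", "Status": "creating",
     "SnapshotCreateTime": datetime(2025, 10, 12, tzinfo=timezone.utc)},
]
print(latest_snapshot(snaps)["DBSnapshotIdentifier"])  # rds:staging-2025-10-11
```

The identifier returned here is what the stack would hand to an RDS restore-from-snapshot construct.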

CI/CD with GitHub Actions

  • Trigger: The workflow runs on pull requests labeled preview.
  • Provision: Checks out the code, configures AWS credentials, and runs cdk deploy to create the RDS instance from the latest snapshot.
  • Build: Builds the Docker image, tags it (for example, with the Git SHA), and pushes it to Amazon ECR.
  • Deploy: Runs helm upgrade --install, overriding values such as ingress_host and db_host per PR.
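The per-PR overrides in the Deploy step can be derived mechanically from the PR number and commit SHA. A minimal sketch: the `ingress_host` and `db_host` value names and the `preview-{PR_NUMBER}.example.com` host pattern come from this post, while the release/namespace naming convention and the `helm_command` helper are assumptions for illustration:

```python
def preview_values(pr_number: int, sha: str, db_host: str) -> dict:
    """Compute per-PR settings for helm upgrade --install.
    The preview-{pr} naming convention here is an assumption."""
    return {
        "release": f"preview-{pr_number}",
        "namespace": f"preview-{pr_number}",
        "ingress_host": f"preview-{pr_number}.example.com",
        "image_tag": sha[:7],  # short Git SHA used as the image tag
        "db_host": db_host,
    }

def helm_command(values: dict) -> list[str]:
    """Build the helm upgrade --install argument list from the values."""
    cmd = ["helm", "upgrade", "--install", values["release"],
           "helm/fastapi-preview-environment",
           "--namespace", values["namespace"], "--create-namespace"]
    for key in ("ingress_host", "db_host", "image_tag"):
        cmd += ["--set", f"{key}={values[key]}"]
    return cmd

vals = preview_values(42, "abc1234def", "preview-42-db.internal")
print(" ".join(helm_command(vals)))
```

Keeping this derivation in one place means the CI workflow, the cleanup workflow, and any debugging session all agree on what `preview-42` refers to.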

The Good Parts

  • Fast feedback loops: Developers test PRs in isolation without waiting for staging.
  • Realistic data: Testing with production-like data catches bugs that unit tests miss and reveals issues at real data volumes and edge cases.
  • Shareable URLs: Product managers and QA can validate features without running anything locally.
  • Automatic cleanup: When the PR closes, the namespace and RDS instance are destroyed by a cleanup workflow (for example, helm uninstall and cdk destroy).

The Not-So-Good Parts

  • Cost: Each preview environment incurs cost. A db.t3.small RDS instance is roughly $30/month if left running 24/7. For short‑lived PRs (1–2 days), it’s more like $2–3 per environment, but it adds up.
  • Slow initial deploys: The first time you add the preview label, provisioning the RDS instance can take 10–15 minutes. Subsequent pushes are faster, since CDK won’t recreate the database.
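The per-environment figure follows from simple pro-rating. A back-of-the-envelope check of the instance-hours portion, using the $30/month figure above and AWS's usual 730-hours-per-month convention (storage and I/O, not modeled here, account for the rest of the $2–3 estimate):

```python
monthly_cost = 30.0    # approx. db.t3.small running 24/7, from above
hours_per_month = 730  # AWS's usual hours-per-month convention
hourly = monthly_cost / hours_per_month

for days in (1, 2):
    print(f"{days} day(s): ${hourly * days * 24:.2f}")
```

So the instance itself is roughly $1–2 for a 1–2 day PR, which is why short-lived previews cost far less than the headline monthly price suggests.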

Conclusion

On a recent team, preview environments helped six backend engineers ship twice as many PRs with higher confidence. It felt like overkill at first but quickly became essential. The combination of Helm and GitHub Actions is standard; the differentiator is per-branch Helm releases and isolated data via RDS snapshots. This approach extends cleanly to other stacks (for example, Next.js or Express).