How Building a Flight Tracker Helped Me Learn CI/CD with Cloud Build and Cloud Run
When my favorite flight tracker started adding intrusive ads, I had a simple thought: maybe I can build my own.
But if I was going to do that, I had a few non-negotiables from the start:
- Low maintenance
- Predictable costs
- Easy to redeploy
- Consistent across projects
- HTTPS, scaling, and deployments handled for me
- An opportunity to learn a concept I hadn’t used before
I didn’t want to babysit servers, manage patching, or reinvent deployment logic every time I had a new idea. After some deliberation, I ended up standardizing on Cloud Build + Cloud Run, and I now use this setup for both my personal site and FlightDeck.
This post documents why I chose this stack, how I wired it together, and what I’d improve next time—partly to share my thinking, and partly so future-me doesn’t forget how everything works.
Why Cloud Run?
Cloud Run checked a lot of boxes for what I wanted out of a runtime:
- Stateless containers
- Scales to zero when idle
- Built-in HTTPS
- No server lifecycle to manage
It lets me deploy containerized apps without thinking about servers, patching, or capacity planning. For personal projects, that tradeoff is exactly what I want.
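The only real contract with Cloud Run is that the container listens on the port it injects via the PORT environment variable (8080 by default). As a sketch of what that looks like, here is a minimal Dockerfile for a Python web app served with gunicorn; the base image, `main:app` entry point, and requirements file are placeholders for whatever your project actually uses:

```dockerfile
# Minimal container for Cloud Run; Cloud Run injects PORT (defaults to 8080)
FROM python:3.12-slim
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Bind to the port Cloud Run provides at runtime
CMD exec gunicorn --bind :$PORT main:app
```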
Why Cloud Build?
Once I decided on Cloud Run, Cloud Build felt like the natural fit.
What I liked about it:
- Deploys on every push
- GitHub-based triggers
- A clear, auditable pipeline
- Native integration with Cloud Run
More importantly, it forced me to think about deployments as a pipeline, not a manual action—which is where a lot of the CI/CD learning happened.
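Wiring a repo into that pipeline is one command once the GitHub connection exists. A sketch, with the repo owner and name as placeholders for your own:

```shell
# Create a trigger that runs cloudbuild.yaml on every push to main
gcloud builds triggers create github \
  --repo-owner=YOUR_GITHUB_USER \
  --repo-name=flightdeck \
  --branch-pattern='^main$' \
  --build-config=cloudbuild.yaml
```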
The deployment pattern I standardized on
Both my personal site and FlightDeck follow the same workflow:
- Push code to GitHub
- A Cloud Build trigger automatically runs
- A Docker image is built
- The image is deployed to Cloud Run
Each project lives in its own repository, but the mental model stays the same. That consistency has been one of the biggest wins—it means less context switching and fewer “how did I deploy this again?” moments.
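The whole workflow lives in a cloudbuild.yaml at the repo root. A minimal sketch of the build-push-deploy steps; the service name (flightdeck) and region are assumptions, so substitute your own:

```yaml
steps:
  # Build the container image from the repo's Dockerfile
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/flightdeck:$COMMIT_SHA', '.']

  # Push it to the registry so Cloud Run can pull it
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/flightdeck:$COMMIT_SHA']

  # Deploy the new image as a Cloud Run revision
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - run
      - deploy
      - flightdeck
      - --image=gcr.io/$PROJECT_ID/flightdeck:$COMMIT_SHA
      - --region=us-central1

images:
  - 'gcr.io/$PROJECT_ID/flightdeck:$COMMIT_SHA'
```

Tagging images with $COMMIT_SHA rather than latest means every deployed revision maps back to a specific commit, which makes rollbacks much less mysterious.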
Costs and scaling
One of the biggest advantages of this setup is cost predictability:
- Services scale to zero when idle
- No always-on servers
- Traffic spikes are handled automatically
For personal projects, this makes it easy to leave things running without worrying about surprise bills.
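Scale-to-zero is the default, but the instance bounds are worth setting explicitly so a traffic spike can't scale further than you're willing to pay for. A sketch, with the service name and limits as placeholders:

```shell
# Let the service idle down to zero instances, and cap burst scaling
gcloud run services update flightdeck \
  --region=us-central1 \
  --min-instances=0 \
  --max-instances=5
```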
Handling configuration and secrets
This part is still evolving, but my current approach looks like this:
- Application configuration via environment variables
- Sensitive values injected at deploy time
- No secrets hardcoded in the repository
- Local .env files for development only
It’s not perfect yet, but it keeps secrets out of source control and separates build-time from runtime concerns.
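The pattern above can be sketched in a few lines of Python. This is a minimal stdlib-only loader (the variable names and .env parsing are illustrative, not a real library): values already present in the environment, such as those injected at deploy time, always win over the local .env file, which is exactly the build-time/runtime separation described.

```python
import os

def load_env_file(path=".env"):
    """Load KEY=VALUE pairs from a local .env file into os.environ.

    Existing environment variables take precedence, so values injected
    at deploy time (e.g. by Cloud Run) are never overridden locally.
    """
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                # Skip blanks, comments, and malformed lines
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # no .env in production, and that's fine

# Application config: read from the environment, never hardcoded
API_KEY = os.getenv("FLIGHT_API_KEY", "")
```

In production there is simply no .env file, so the loader is a no-op and everything comes from the deploy-time environment.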
What I’d improve next
A few things I’d like to iterate on as these projects grow:
- Separate staging and production environments
- Better-structured secrets management
These weren’t necessary to get started, but they’re logical next steps as my understanding of CI/CD deepens.
Final thoughts
This is how I currently think about deploying personal projects. The setup is intentionally simple, repeatable, and low-ops—but it’s already taught me a lot about CI/CD as a system, not just a buzzword.
I fully expect this approach to evolve over time as I gain more experience—but documenting it now makes that evolution easier to track.