Why I Stopped Building Tools and Started Building Infrastructure for Myself
Every solo builder eventually becomes a platform engineer for themselves. I just didn't realize it until I was already there.
It happened gradually, the way most accidental career changes do. One day you're shipping a side project. The next day you're debugging a deploy script that deploys the deploy scripts. You look around and realize you've spent the last three hours maintaining infrastructure that exists solely to maintain your other infrastructure. Nobody warned you about this part.
Phase 1: Projects
This is the fun part. You have an idea, you build it, you ship it. Maybe it gets users, maybe it doesn't. Either way, you're a maker. You write code that does things for people. The feedback loop is tight and satisfying. You push to main, Vercel deploys it, someone signs up, you feel good.
I did this for years. Built SaaS tools, client projects, internal utilities, experiments. The repos accumulated. 10, 20, 30. Each one its own little world with its own deployment setup, its own database, its own quirks. This was fine. More than fine -- it was the whole point.
Phase 2: Scripts
Then you start noticing the repetition.
Every project needs a deploy command. Every project needs database backups. Every project needs environment variable management. So you write a bash script. Just a quick helper. "I'll clean this up later." You never clean it up. It works, so you copy it to the next project. And the next one.
Now you have a /scripts folder in every repo. Some of them are identical. Some of them diverged three projects ago and you're not sure which version is current. One of them has a hardcoded path to a directory that only exists on your old laptop. You find this out at 11pm on a Sunday when a deploy fails.
Phase 3: Shared tools
The logical next step is extraction. You realize the same patterns keep appearing across projects, so you pull them out. A deploy recipe that works for all your Cloudflare Workers sites. A monitoring script that pings your endpoints. A cost tracking spreadsheet that pulls from Vercel, Supabase, and Cloudflare billing.
This feels productive. You're reducing duplication. You're being a responsible engineer. You maintain these shared tools across projects manually -- updating the deploy recipe when Cloudflare changes something, adding rows to the spreadsheet when you spin up a new service, keeping the monitoring list current.
The deploy cheat sheet grew to 865 lines of markdown. It covered Cloudflare Workers, Cloudflare Pages, Vercel, Fly.io, Netlify, and three different database setups. It was useful. It was also a maintenance burden that nobody was maintaining. Sections went stale. The Netlify instructions still referenced a CLI flag that had been deprecated six months ago. The Supabase section assumed you were on the free tier, which half my projects weren't anymore.
Phase 4: Infrastructure
Here's where it gets interesting. The shared tools start needing their own infrastructure.
The monitoring script needs to run on a schedule, so you set up a cron job. The cron job needs somewhere to run, so you deploy it. The cost tracker needs to store historical data, so you give it a database. The database needs backups. The deploy recipe needs version control and a way to propagate changes to all repos.
You've accidentally built a platform.
I remember the exact moment I realized what had happened. I was writing a healthcheck for my healthcheck system. A cron job that verified my other cron jobs were still running. Monitoring for the monitor. I sat back and thought: I am a platform engineer now. I didn't apply for this job. I didn't want this job. But here I am, and the alternative is letting things break silently.
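The monitor-for-the-monitor pattern is simpler than it sounds. A minimal sketch, assuming a design where each cron job records a heartbeat timestamp when it finishes and a separate checker flags anything stale (the names `EXPECTED_INTERVALS` and `check_heartbeats` are illustrative, not from any real system):

```python
# Hypothetical heartbeat checker: each cron job writes a timestamp when it
# completes; this function flags any job whose last heartbeat is older than
# a grace multiple of its expected interval.

EXPECTED_INTERVALS = {
    "healthcheck": 15 * 60,      # assumed: runs every 15 minutes
    "ssl-scan": 24 * 60 * 60,    # assumed: runs daily
}

def check_heartbeats(last_seen: dict[str, float], now: float,
                     grace: float = 2.0) -> list[str]:
    """Return jobs whose heartbeat is missing or older than
    grace * expected interval -- the monitor for the monitors."""
    stale = []
    for job, interval in EXPECTED_INTERVALS.items():
        beat = last_seen.get(job)
        if beat is None or now - beat > grace * interval:
            stale.append(job)
    return stale
```

The grace multiplier is the design decision worth noting: alerting the instant a job is one second late just creates noise, so you wait for a missed cycle or two before declaring it dead.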
The realization
I was spending more time maintaining my maintenance tools than maintaining my actual projects.
The list of cron jobs needed its own monitoring -- because if the cron scheduler went down, nothing else worked and I wouldn't know. The cost tracking spreadsheet needed its own automation -- because manually updating 30 line items every month meant it was always two months stale. Each project needed the same operational knowledge -- how I deploy, how I handle secrets, what my coding standards are -- but I was copy-pasting it across repos and watching it drift.
The maintenance overhead was compounding faster than the project count. Every new project added a row to the monitoring list, a section to the deploy notes, entries to the cost tracker, and another repo where the CLAUDE.md file would go stale within a week. Linear project growth, quadratic maintenance growth.
What infrastructure actually means for a solo builder
I want to be clear about what I don't mean. I don't mean Kubernetes. I don't mean microservices. I don't mean the kind of infrastructure that requires a team of six and a YAML file longer than your actual application code.
For a solo builder running 50 projects, infrastructure means: a single system that does the work you've been doing manually, badly, and inconsistently.
Sixteen cron jobs that run while you sleep. A healthcheck pings every site every 15 minutes. An SSL scanner checks certificate expiry across the fleet. A DNS verifier catches the CNAME that's still pointing to your old Netlify deploy. A dependency scanner flags CVEs across all repos instead of sending you 47 individual GitHub emails you'll ignore. A Lighthouse runner catches performance regressions before users do.
A spending tracker that knows about every subscription. Not a spreadsheet you update when you remember -- a system that tracks $452/month across 30+ services and flags when something changes unexpectedly.
A knowledge pipeline that captures your decisions and makes them available everywhere. When you figure out why the rate limiter fails under load at 2am, that insight gets captured and routed to the right project's context files automatically. You don't have to stop debugging to open a notes app and write it down. You never would anyway.
Auto-generated CLAUDE.md files so your AI coding agents know what they're working on. Not hand-maintained instructions that go stale after the first week -- generated files derived from structured knowledge that update when the truth changes. Change your deployment target once, and every project's agent context reflects it.
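Generation is what keeps those files from going stale: the markdown is a render of structured data, never edited by hand. A hedged sketch -- the schema (name, deploy target, conventions) is an assumption for illustration, not a real format:

```python
def render_claude_md(project: dict) -> str:
    """Render a project's context file from a structured record.
    Change the record once and every regenerated file reflects it."""
    lines = [
        f"# {project['name']}",
        "",
        f"Deploys to: {project['deploy_target']}",
        f"Database: {project['database']}",
        "",
        "## Conventions",
    ]
    lines += [f"- {rule}" for rule in project["conventions"]]
    return "\n".join(lines) + "\n"
```

Run it from a cron job over every project record and the drift problem disappears: the file is only ever as old as the last generation pass.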
The paradox
Building infrastructure for yourself feels like yak-shaving until it works. Your friends ask what you're building and you say "a system that monitors my other systems" and they look at you the way you'd look at someone who built a robot to push the button on their coffee maker.
Then it works. And the feeling is different from anything else I've built.
You go from "I should check if that site is still up" to "the system already checked 15 minutes ago and it's fine." You go from "did my SSL cert expire?" to "the scanner flagged it 28 days before expiry and auto-renewed it." You go from "let me re-explain my entire architecture to Claude Code for the fourth time this week" to "the agent already read the context files and knows everything."
The cognitive shift is profound. You stop carrying the mental inventory of everything that might be broken. You stop doing the morning rounds of checking dashboards and billing pages. The background anxiety of "is something silently failing?" -- that hum you got so used to that you forgot it was there -- goes quiet.
And then you're back in Phase 1. Building things. Shipping them. The fun part. Except now, when you ship project number 51, the infrastructure catches it automatically. Monitoring starts. Context files are generated. Cost tracking picks it up. You don't add a row to a spreadsheet. You don't update a bash script. You don't copy-paste a CLAUDE.md from another repo. You just build.
The takeaway
You don't choose to become a platform engineer. You just keep solving the same problems until you realize you've built a platform. The question is whether you do it deliberately or accidentally.
I did it accidentally for years. Scripts scattered across repos, a deploy cheat sheet nobody maintained, a spreadsheet that was always stale, monitoring that monitored nothing. Each tool solved one problem and created half a new one. The total overhead grew faster than the benefit.
Then I did it deliberately. One system. One database. One set of cron jobs. One knowledge pipeline that captures everything, classifies it, and routes it where it needs to go. The deliberately-built version replaced dozens of scripts, spreadsheets, and manual processes with something that actually works at portfolio scale.
The second version is much better.
It's also, I should note, its own project now. With its own deploy pipeline, its own monitoring, its own cost line item. The platform engineer's work is never truly done. But at least now when I open a new repo and start building something, I'm building the thing I actually want to build. The infrastructure handles the rest.
That's the whole point. You build the infrastructure so you can stop being an infrastructure engineer and go back to being a maker. The irony is that building it is some of the most satisfying engineering I've ever done.