16 Cron Jobs That Quietly Run My Entire Developer Life
Every 15 minutes, a bot checks whether my entire portfolio is still alive. I used to do this manually. Now I sleep.
I run 50+ projects and 10+ deployed sites. They are spread across Vercel, Cloudflare Workers, Cloudflare Pages, and Fly.io. Different databases, different domains, different deployment pipelines. Checking them manually isn't hard -- it's impossible. So I stopped trying and built 16 cron jobs that do it for me.
They run on Vercel Cron. They are boring. They work.
The infrastructure layer
These jobs keep the lights on. They don't do anything clever. They just catch things that would otherwise break silently.
1. Healthcheck (every 15 minutes) -- Hits every site's health endpoint, checks for a 200 response, measures latency. If a site is down, I get a Telegram notification within 15 minutes. Before this existed, I found out about downtime from users. An SSL cert expired on one of my sites and nobody told me for 11 days. Never again.
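The core of that job is a loop over site URLs that classifies each response. Here's a minimal sketch; the slow-latency threshold, timeout, and function names are my placeholders, not the actual implementation, and the Telegram notification is left out:

```typescript
// Hypothetical healthcheck core. SLOW_THRESHOLD_MS is an assumed budget.
type CheckResult = { url: string; status: "up" | "slow" | "down"; latencyMs: number };

const SLOW_THRESHOLD_MS = 2000;

function classify(url: string, httpStatus: number, latencyMs: number): CheckResult {
  if (httpStatus !== 200) return { url, status: "down", latencyMs };
  if (latencyMs > SLOW_THRESHOLD_MS) return { url, status: "slow", latencyMs };
  return { url, status: "up", latencyMs };
}

async function runHealthcheck(sites: string[]): Promise<CheckResult[]> {
  const results: CheckResult[] = [];
  for (const url of sites) {
    const started = Date.now();
    try {
      // 10s timeout so one hung site can't stall the whole cron run
      const res = await fetch(url, { signal: AbortSignal.timeout(10_000) });
      results.push(classify(url, res.status, Date.now() - started));
    } catch {
      // DNS failure, TLS failure, or timeout all count as down
      results.push({ url, status: "down", latencyMs: Date.now() - started });
    }
  }
  return results;
}
```

Anything that comes back `down` or `slow` is what would feed the Telegram alert.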
2. SSL expiry scanner (daily) -- Checks certificate expiry dates across every domain. Flags anything within 30 days, escalates within 14. Most certificates auto-renew, but "most" is not "all." Let's Encrypt renewals fail silently when DNS isn't configured right. This catches the ones that slip through.
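The 30-day and 14-day thresholds come straight from the description above; the rest of this sketch (fetching `valid_to` over a raw TLS handshake, the function names) is an assumption about how such a scanner could work:

```typescript
import tls from "node:tls";

type CertAlert = "ok" | "warn" | "escalate";

// Flag within 30 days, escalate within 14 (per the thresholds above).
function certAlertLevel(expiresAt: Date, now: Date = new Date()): CertAlert {
  const days = (expiresAt.getTime() - now.getTime()) / 86_400_000;
  if (days <= 14) return "escalate";
  if (days <= 30) return "warn";
  return "ok";
}

// Open a TLS connection just to read the peer certificate's expiry date.
function fetchExpiry(host: string): Promise<Date> {
  return new Promise((resolve, reject) => {
    const socket = tls.connect({ host, port: 443, servername: host }, () => {
      const cert = socket.getPeerCertificate();
      socket.end();
      resolve(new Date(cert.valid_to));
    });
    socket.on("error", reject);
  });
}
```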
3. DNS verification (every 12 hours) -- Resolves DNS records and compares them against expected configuration. Catches the CNAME still pointing to old Netlify after you moved to Vercel, the A record you forgot to update, the MX record that vanished when you transferred registrars. It has caught three real misconfigurations so far.
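Comparing resolved records against expected configuration is a set diff. A sketch, assuming Node's `dns.promises` resolver; the type and function names are mine:

```typescript
import { promises as dns } from "node:dns";

type DnsExpectation = { host: string; type: "CNAME" | "A"; expected: string[] };

// Order-insensitive diff: anything expected-but-absent or present-but-unexpected.
function diffRecords(expected: string[], actual: string[]): string[] {
  const want = new Set(expected);
  const got = new Set(actual);
  const issues: string[] = [];
  for (const r of want) if (!got.has(r)) issues.push(`missing ${r}`);
  for (const r of got) if (!want.has(r)) issues.push(`unexpected ${r}`);
  return issues;
}

async function verify(exp: DnsExpectation): Promise<string[]> {
  const actual =
    exp.type === "CNAME" ? await dns.resolveCname(exp.host) : await dns.resolve4(exp.host);
  return diffRecords(exp.expected, actual);
}
```

A stale CNAME shows up as one `missing` entry and one `unexpected` entry, which pinpoints the fix.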
4. Dependency scanner (daily) -- Audits dependencies across all repos for known CVEs. Instead of the 47th Dependabot email I was going to ignore anyway, I get a single prioritized list classified by risk level. Results feed directly into the campaign executor.
5. Security audit (monthly) -- Checks security headers (CSP, X-Frame-Options, HSTS), scans for exposed paths (/.env, /.git/config, /wp-admin), verifies nothing sensitive is publicly accessible. Runs monthly because these things don't change often.
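The header portion of that audit reduces to checking a response's headers against a required set. A sketch under the assumption that these three headers are the required baseline:

```typescript
// The three headers named above; header lookups are case-insensitive.
const REQUIRED_HEADERS = [
  "content-security-policy",
  "x-frame-options",
  "strict-transport-security",
];

function missingSecurityHeaders(headers: Record<string, string>): string[] {
  const present = new Set(Object.keys(headers).map((h) => h.toLowerCase()));
  return REQUIRED_HEADERS.filter((h) => !present.has(h));
}
```

The exposed-paths half is the same loop as the healthcheck, except a 200 on `/.env` is the failure.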
6. Lighthouse audit (weekly) -- Runs performance audits against every site. The key metric is regression, not absolute score. If a site drops more than 10 points between runs, something broke. It has caught two real regressions that would have gone unnoticed for weeks.
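The regression rule is small enough to state as code. The 10-point threshold is from the text; the function name is mine:

```typescript
// A drop strictly greater than the threshold counts as a regression.
function lighthouseRegressed(prevScore: number, currScore: number, thresholdPoints = 10): boolean {
  return prevScore - currScore > thresholdPoints;
}
```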
7. Cloudflare analytics sync (daily) -- Pulls 7-day traffic data from the Cloudflare GraphQL API: page views, unique visitors, bandwidth, cache ratio. Not about alerting -- about awareness. A site going from 200 daily visitors to 20 means something changed. A side project jumping to 2,000 means it hit a nerve.
8. Sentry sync (daily) -- Scans for error spikes across all projects. Catches the scenario where a deploy introduces a bug that throws errors for 10% of users but doesn't bring the site down. The healthcheck passes, but users are suffering.
The knowledge layer
These jobs manage the system's memory. Every piece of knowledge I capture -- voice notes, text messages, webhook events, CLI pastes -- flows through this pipeline.
9. Process queue (every 2 minutes) -- The workhorse. Picks up queued ingestions and runs them through: secret scan (22 regex patterns, BEFORE any LLM call), Gemini Flash segmentation, classification, deduplication (hash + fuzzy + semantic), and OpenAI embedding. Processes up to 5 items per batch. No persistent worker, no message queue -- just a function that runs every 2 minutes.
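The critical ordering property, secret scan before any LLM call, and the batch cap of 5 can be sketched like this. The two regexes stand in for the 22 actual patterns, and `llm` stands in for the real segmentation/classification/embedding stages:

```typescript
// Stand-ins for the real 22-pattern secret scan (an AWS access key id
// and a PEM private key header are shown here as examples).
const SECRET_PATTERNS = [/AKIA[0-9A-Z]{16}/, /-----BEGIN (RSA )?PRIVATE KEY-----/];

function containsSecret(text: string): boolean {
  return SECRET_PATTERNS.some((re) => re.test(text));
}

// Process up to 5 queued items; nothing containing a secret ever
// reaches the LLM stage.
async function processBatch(
  queue: string[],
  llm: (text: string) => Promise<string>,
): Promise<string[]> {
  const out: string[] = [];
  for (const item of queue.slice(0, 5)) {
    if (containsSecret(item)) continue; // secret scan BEFORE any LLM call
    out.push(await llm(item));
  }
  return out;
}
```

Because the function is stateless and re-runs every 2 minutes, a backlog simply drains 5 items at a time.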
10. Nightly consolidation (daily) -- Merges related fragments into canonical knowledge items. Five fragments about "how I deploy to Cloudflare Workers" become one authoritative document. This is where Claude Sonnet earns its keep -- consolidation requires reasoning that Gemini Flash can't do.
11. Weekly patterns (Mondays) -- Analyzes captured knowledge for recurring themes. If I keep debugging the same error type across projects, that's a pattern worth extracting into a shared procedure. Speculative work, but the ones that land save real time.
12. Stale cleanup (daily) -- The garbage collector. Every piece of captured data has a TTL: promote_now is permanent, hold_for_review gets 90 days, short_ttl_exhaust gets 30 days, discard_after_processing gets 7 days. Not promoted within the window? It disappears. This prevents the failure mode of every note-taking system: the pile grows until it's unsearchable.
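The TTL table above maps directly to code. A sketch; the tier names and windows come from the text, the `null`-means-permanent convention is mine:

```typescript
// TTL in days per capture tier; null means never expires.
const TTL_DAYS: Record<string, number | null> = {
  promote_now: null,
  hold_for_review: 90,
  short_ttl_exhaust: 30,
  discard_after_processing: 7,
};

function isExpired(tier: string, capturedAt: Date, now: Date): boolean {
  const ttl = TTL_DAYS[tier];
  if (ttl === null || ttl === undefined) return false;
  const ageDays = (now.getTime() - capturedAt.getTime()) / 86_400_000;
  return ageDays > ttl;
}
```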
The intelligence layer
13. Signal scraper (daily) -- Monitors HN, Product Hunt, and Google Trends for signals relevant to my portfolio and interests. Ingested through the same pipeline as everything else.
14. Opportunity scorer (daily) -- Takes scraped signals and scores them against my skills and current portfolio. The venture studio layer -- still early, still rough, but it surfaces opportunities instead of letting them disappear into feeds I'll never read.
15. Morning briefing (daily) -- Generates a daily summary via Telegram. Portfolio status, open incidents, overnight activity, items needing review. I read it with coffee. It replaced logging into dashboards.
16. Weekly digest (Fridays) -- The full rollup: incidents resolved, PRs merged, knowledge promoted, patterns detected, performance regressions, dependency updates. The "state of the fleet" report.
The severity system
When the dependency scanner finds a CVE across 8 repos, it creates a campaign. The campaign executor works through each affected repo -- creates a branch, bumps the package, runs tests, creates a PR. What happens next depends on severity:
- P0 (critical): Auto-fix and deploy immediately. Alert me after. A remote code execution vulnerability sitting in production doesn't need me in the loop.
- P1 (high): Create a PR and notify me via Telegram. I review and merge.
- P2 (medium): Create a PR. If tests pass, auto-merge. I find out in the weekly digest. Most routine bumps land here.
- P3 (low): Batch into the monthly report.
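The severity table is really a pure function from severity to action. A sketch of that mapping, with my own field names; only P2's auto-merge depends on whether tests pass:

```typescript
type Severity = "P0" | "P1" | "P2" | "P3";
type Action = {
  autoFix: boolean;   // apply and deploy without waiting for review
  autoMerge: boolean; // merge the PR without a human
  notify: "immediate" | "pr" | "digest" | "monthly";
};

function actionFor(sev: Severity, testsPass: boolean): Action {
  switch (sev) {
    case "P0": return { autoFix: true, autoMerge: true, notify: "immediate" };
    case "P1": return { autoFix: false, autoMerge: false, notify: "pr" };
    case "P2": return { autoFix: false, autoMerge: testsPass, notify: "digest" };
    case "P3": return { autoFix: false, autoMerge: false, notify: "monthly" };
  }
}
```

Encoding the policy as one function means the campaign executor never improvises: every repo in a campaign gets exactly the escalation its severity dictates.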
This is the most important design decision in the whole setup. Without severity levels, every issue demands equal attention, which means nothing gets proper attention.
The readiness contract
Early on I pointed this automation at every repo. Mistake. Half had no tests, no healthcheck endpoint, no documented rollback. Auto-creating PRs for projects that can't verify the fix is worse than doing nothing.
Now there's a 16-point readiness scorecard: tests exist, healthcheck exists, CI passes, rollback is documented, environment parity is verified. A project needs 11/16 to enter the automation system. Below that, it stays manual.
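The gate itself is trivial, which is the point: a hard numeric threshold, not a judgment call per repo. A sketch with assumed names; 16 checks and the 11/16 bar are from the text:

```typescript
const READINESS_CHECKS = 16;
const READINESS_THRESHOLD = 11;

// checks[i] is true if the i-th scorecard item (tests exist, healthcheck
// exists, CI passes, rollback documented, ...) holds for this repo.
function isAutomationReady(checks: boolean[]): boolean {
  if (checks.length !== READINESS_CHECKS) {
    throw new Error(`expected ${READINESS_CHECKS} checks, got ${checks.length}`);
  }
  return checks.filter(Boolean).length >= READINESS_THRESHOLD;
}
```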
Automation without readiness contracts is just chaos that moves faster.
What it costs
All 16 cron jobs run on Vercel Pro ($20/month). LLM calls are capped at $15/day, $200/month -- Gemini Flash for volume work, Claude Sonnet for synthesis, OpenAI for embeddings. If the budget runs hot, deferrable tasks get delayed; critical tasks never do. Total infrastructure is under $100/month including Supabase. That's less than one seat on most monitoring platforms.
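The budget rule ("deferrable tasks get delayed, critical tasks never do") is another one-function policy. A sketch; the $15 daily cap is from the text, the shape of the task object is my assumption:

```typescript
const DAILY_CAP_USD = 15;

// Critical tasks always run; everything else runs only if its estimated
// cost still fits under today's remaining budget.
function shouldRun(task: { critical: boolean; estCostUsd: number }, spentTodayUsd: number): boolean {
  if (task.critical) return true;
  return spentTodayUsd + task.estCostUsd <= DAILY_CAP_USD;
}
```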
The point
The dream isn't zero maintenance. That doesn't exist.
The dream is automated maintenance with human judgment at the edges. The system handles the 95% that is repetitive and predictable. I handle the 5% that needs context and taste.
Sixteen cron jobs, a severity system, and a readiness contract. It's a Next.js app with functions that run on a schedule. No event sourcing, no Kafka, no service mesh.
But every morning I check Telegram and know the state of everything I've built. Nothing is silently broken. Nothing is silently expiring. Nothing is silently vulnerable.
I just build.