Real situations
These aren't hypotheticals. They're incidents from a real portfolio of 50+ repos, 27 domains, and 30 SaaS subscriptions.
Situation
You launched a small project months ago. It lives on Fly.io with a Supabase backend. You haven't touched it in weeks. Traffic is small but steady: about 300 visits/day.
At 03:12 UTC something breaks: a database connection error. The site starts returning 500 errors. No alerts. No users emailing yet.
You wake up at 09:17 and open analytics. Traffic graph: 300 to 0. The site has been down for six hours. You only noticed because you happened to check.
Now you're asking yourself: "How many of my other projects are broken right now?"
System response
Health checks run every 15 minutes across the fleet.
At 03:15 UTC the system sees: HTTP 500 on /health. Severity: P0.
Response: retry check, confirm failure, check deployment logs, restart instance, verify recovery.
If recovery succeeds: incident resolved automatically. Downtime: 3 minutes.
If not: alert sent with logs attached and deploy history linked.
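The retry-confirm-restart loop above can be sketched roughly like this. It's a minimal sketch: `probe` and `restart` are hypothetical stand-ins for the real HTTP health check and platform restart call, not Omnus APIs.

```python
# Minimal sketch of the P0 flow: retry the check to confirm failure,
# restart, then verify recovery. `probe` returns an HTTP status code
# and `restart` performs the platform call (e.g. restarting the
# Fly.io machine); both are hypothetical stand-ins, not Omnus APIs.

def handle_p0(probe, restart) -> str:
    if probe() == 200:
        return "healthy"
    if probe() == 200:          # retry: rule out a transient blip
        return "transient"
    restart()                   # attempt automatic recovery
    if probe() == 200:
        return "auto-resolved"  # incident closed without waking anyone
    return "alert"              # escalate with logs and deploy history
```

With a probe that fails twice and then recovers after the restart, the incident resolves automatically in a single pass.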
Why it matters
When you run many projects, you stop asking "Is this site working?" and start asking "Which one is broken today?" Most indie hackers don't lose projects because of bad ideas. They lose them because nobody noticed when they broke.
Situation
You're asleep. The SSL certificates on three of your domains expire tonight. One site runs on Cloudflare Pages, one on Vercel, one on Netlify.
Normally you'd notice when users start DMing: "Your site shows a security warning." But nobody has reported anything yet, because it's 2am.
System response
The SSL check runs every 15 minutes. Omnus sees the expiry window. Severity: P0.
Actions: attempt auto-renew, verify the DNS challenge, re-check the certificate, alert on failure.
Two renew successfully. The third fails due to a DNS mismatch.
You wake up to: P0 resolved automatically: 2. P1 needs attention: 1.
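A sketch of how the expiry window could map to a severity, assuming the certificate's notAfter timestamp has already been fetched (e.g. via the ssl module). The 1-day and 7-day thresholds are illustrative assumptions, not Omnus's documented policy.

```python
from datetime import datetime

# Map days-until-expiry to a severity. Thresholds are illustrative
# assumptions, not Omnus's documented policy.

def ssl_severity(not_after: datetime, now: datetime) -> str:
    days_left = (not_after - now).days
    if days_left < 1:
        return "P0"   # expires before the next few check cycles
    if days_left < 7:
        return "P1"
    return "ok"
```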
Why it matters
SSL expiry doesn't just break a site. It kills SEO trust, API clients, payment callbacks, and user logins. And it always happens at the worst time.
Situation
A project you launched months ago suddenly feels slow. But you run 50 repos, 27 domains, and deployments across four platforms.
Debugging starts with a question: "Which one broke?"
System response
Every night Omnus runs Lighthouse audits across your fleet. You open the dashboard.
One project shows: Performance 94 to 61. Largest Contentful Paint +1.8s. Bundle size +280 KB.
Last deploy added a dependency. A PR already exists: optimize bundle splitting, remove unused dependency.
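The nightly diff behind that alert could look something like this sketch, where `prev` and `curr` are hypothetical maps of project name to Lighthouse performance score; the 10-point threshold is an assumption, not Omnus's report format.

```python
# Compare each project's latest Lighthouse performance score against
# the previous run and surface regressions. The dict shapes and the
# 10-point threshold are illustrative assumptions.

def find_regressions(prev: dict, curr: dict, threshold: int = 10) -> list:
    """Return (project, old_score, new_score) for drops >= threshold."""
    drops = []
    for name, score in curr.items():
        if prev.get(name, score) - score >= threshold:
            drops.append((name, prev[name], score))
    return drops
```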
Why it matters
Performance decay happens quietly. By the time you notice, rankings drop, bounce rate climbs, conversions disappear. Automated audits catch it before users feel it.
Situation
A vulnerability is announced in a package you use everywhere. You have 50 repos. Node projects, Python tools, CLI utilities.
Manual patching means: finding affected repos, updating dependencies, running tests, deploying fixes. This can easily consume an entire day.
System response
Dependency scan detects the CVE. Severity: P1.
Omnus: find affected repos, create patch branches, update lockfiles, run CI, open PRs.
By the time you check GitHub: 8 PRs waiting, 3 already auto-merged.
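The "find affected repos" step might be sketched like this. It assumes each repo's lockfile has already been parsed into a simple name-to-version map; version comparison is simplified to integer tuples, where a real scan would use full semver (pre-release tags, ranges, and so on).

```python
# Scan parsed lockfiles for repos still running a version of the
# vulnerable package below the patched release. Simplified version
# comparison; real scans use proper semver.

def affected_repos(lockfiles: dict, package: str, patched: tuple) -> list:
    hits = []
    for repo, deps in lockfiles.items():
        version = deps.get(package)
        if version and tuple(map(int, version.split("."))) < patched:
            hits.append(repo)   # still below the patched version
    return hits
```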
Why it matters
Security patches are rarely urgent until they suddenly are. Automation turns panic maintenance into routine hygiene.
Situation
Supabase, Vercel, Fly.io, Cloudflare, Claude, OpenAI, Sentry, ngrok, n8n, GitHub. One month you realize your stack costs $452/month.
But nobody shows you where it actually goes.
System response
Once a month you run a simple capture flow. Billing pages get scraped. The dashboard updates.
AI tools $281. Hosting $63. Dev tools $45. Domains $46. APIs $7. Personal $11.
Trend line shows which tools quietly grow.
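A rough sketch of the rollup behind that dashboard, assuming scraped line items arrive as (category, amount) pairs; the 20% growth threshold is an illustrative assumption.

```python
# Sum scraped billing line items by category, and flag categories
# that grew more than a set fraction month over month. The 20%
# default is illustrative, not a documented Omnus setting.

def rollup(items) -> dict:
    totals: dict = {}
    for category, amount in items:
        totals[category] = totals.get(category, 0) + amount
    return totals

def growing(prev: dict, curr: dict, pct: float = 0.2) -> list:
    return [c for c, v in curr.items() if v > prev.get(c, 0) * (1 + pct)]
```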
Why it matters
Infrastructure creep happens slowly. Until you suddenly realize you're paying startup-level bills for hobby projects.
Situation
Classic builder problem. You open a repo and think: "Why did I architect it like this?" Memory is gone.
AI coding agents have it even worse: they know nothing about your past decisions.
System response
You run: terso sync. The repo now contains .terso/generated/ with CURRENT_CONTEXT.md, RECENT_DECISIONS.md, SHARED_OPS.md, and an auto-generated CLAUDE.md.
Your AI coding agent reads them automatically. It knows architecture, dependencies, operational context, and decisions from past sessions.
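One plausible way an agent could assemble those files into a single context. The filenames come from the text; the loader itself is an assumption about how an agent might consume them, not terso's actual implementation.

```python
from pathlib import Path

# Concatenate the generated context files (when present) into one
# prompt-ready string. Illustrative loader, not terso's implementation.

CONTEXT_FILES = ["CURRENT_CONTEXT.md", "RECENT_DECISIONS.md", "SHARED_OPS.md"]

def load_context(repo: Path) -> str:
    parts = []
    for name in CONTEXT_FILES:
        path = repo / ".terso" / "generated" / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(parts)
```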
Why it matters
Context loss is the biggest productivity killer in long-running projects. This keeps both you and your AI tools aligned.
Under the hood
P0 auto-fixes. P1 creates PRs. P2 auto-merges if tests pass. P3 batches monthly.
Projects must score at least 11/16 before entering automation. Tests, a health check, and rollback docs are required.
$15/day LLM budget. Critical tasks are never deferred; non-critical tasks queue when the budget is spent.
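The routing and budget rules can be sketched together. The policy table restates the bullets above; the queueing path is an assumed detail of how non-critical work waits when the budget is spent.

```python
# Severity routing plus the daily LLM budget gate. POLICY restates
# the rules above; the "queued" path is an assumed implementation
# of deferral, not a documented Omnus behavior.

POLICY = {
    "P0": "auto-fix",
    "P1": "open-pr",
    "P2": "auto-merge-if-green",
    "P3": "monthly-batch",
}

def route(severity: str, spent_today: float, cost: float,
          budget: float = 15.0) -> str:
    if severity != "P0" and spent_today + cost > budget:
        return "queued"        # non-critical work waits for tomorrow
    return POLICY[severity]    # critical (P0) work is never deferred
```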
Ship projects. Don't babysit them.