Source: Riley Tomasek (@rileytomasek) | Date: March 2026
Core Thesis
Agents complete tasks. Daemons fulfill roles. The name borrows from Unix: a long-lived background process that maintains system health. AI daemons do the same for engineering teams, continuously watching for conditions and acting without a human prompt.
Agents help teams ship faster, which also means more operational debt, faster. Stale PRs, drifted issue metadata, untriaged errors, rotting documentation, aging dependencies. Each task is too small for an engineer to prioritize individually. The rational move for every engineer is to skip it. The result is a tragedy of the commons.
The Task vs. Role Distinction
This is the core framing that separates daemons from agents:
| Property | Task (Agent) | Role (Daemon) |
|---|---|---|
| Duration | Discrete | Continuous |
| Initiated by | A human prompt | The environment |
| Done when | Deliverable ships | Never |
| Example | “Fix this bug” | “Keep PRs mergeable” |
An agent needs a human to notice the problem, frame the task, and write the prompt, every single time. A daemon watches for conditions and acts on its own; the hundredth action costs the team no more attention than the first.
Why Agents Alone Are Not Enough
Agents multiply output. Nobody multiplies the maintenance. More code means more docs to keep current, more issues to triage, more PRs to keep mergeable, more dependencies to patch.
You can ask an agent to fix operational debt. But someone still has to:
- Notice the problem exists
- Frame it as a task
- Prompt the agent
- Repeat for every occurrence
The debt scales with the output. Human attention does not. This is the gap daemons fill.
Daemon Characteristics
Three properties define a daemon:
Persistent. Always running on cloud infrastructure. Accumulates context over weeks and months. Gets sharper the longer it runs because it builds a model of your team’s conventions.
Self-initiating. A PR is opened, CI fails, a Sentry alert fires. The daemon observes a condition and acts. No human prompted it. It reacts through configured watch conditions, not human instructions.
Role-based. Each daemon has one job with a clear boundary. Narrow scope means predictable behavior and output your team can trust. You define the role once (what it watches, what it does, what it can’t do) and the daemon handles it from there.
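The three properties can be sketched as a minimal event loop. This is a hypothetical illustration, not any product's actual API: the role, its watch conditions, and the `tick` pass are all invented names standing in for whatever a real daemon runtime provides.

```python
# Hypothetical watch conditions for a "PR helper" role. Each condition
# inspects the environment (here, a plain dict standing in for a PR)
# and returns True when the daemon should act.
ROLE = {
    "name": "pr-helper",
    "conditions": {
        "has_conflicts": lambda pr: pr["mergeable"] is False,
        "stale": lambda pr: pr["idle_hours"] > 48,
    },
}

def tick(open_prs):
    """One pass of the daemon loop: observe, match conditions, act."""
    actions = []
    for pr in open_prs:
        for name, matches in ROLE["conditions"].items():
            if matches(pr):
                # A real daemon would resolve the conflict or fix the
                # failing check; here we just record the decision.
                actions.append((pr["id"], name))
    return actions

# The daemon itself is just this pass, repeated forever:
#   while True: tick(fetch_open_prs()); sleep(60)
prs = [
    {"id": 101, "mergeable": False, "idle_hours": 3},
    {"id": 102, "mergeable": True, "idle_hours": 72},
]
print(tick(prs))  # [(101, 'has_conflicts'), (102, 'stale')]
```

Note that no human appears anywhere in the loop: the environment (the list of open PRs) is the only input, which is the self-initiating property in miniature.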
Concrete Daemon Roles
Each daemon works in existing tools (GitHub PRs, Linear issues, Slack threads). No new dashboard required.
PR Helper
Before: PRs sit for days. Descriptions drift from the diff. Merge conflicts pile up. CI fails on lint and formatting.
After: Conflicts get resolved. Failing checks get fixed. Descriptions match the diff. Humans review clean PRs.
Project Manager
Before: Issue statuses are wrong, labels are missing, priorities are stale. Planning starts with archaeology.
After: Metadata reflects reality. Blockers surface before standup. Planning becomes data-driven.
Bug Triage
Before: Error tracker alerts get ignored or dismissed. Same errors recur. No root cause analysis.
After: Every error triaged within minutes. Recurring patterns connected. Issues arrive with root cause analysis and reproduction context.
Codebase Maintainer
Before: Dependencies age. Security patches pile up. Minor bumps become major migrations.
After: Dependencies stay current. Security patches applied promptly. PRs arrive tested and ready.
Librarian
Before: Docs drift within weeks. READMEs describe APIs that no longer exist.
After: Docs stay current. Stale content gets caught and updated. New engineers read docs that match the code.
The Flywheel Effect
Daemons compound in four ways:
Zero marginal attention. The hundredth action costs the team nothing. No human noticed, framed, or prompted it.
Learn norms. Over weeks, the daemon builds a model of your team’s conventions: labeling schemes, review preferences, escalation patterns. It stops needing correction.
Earn trust. The daemon file is a spec in your repo. The team tunes it like any other config: tighten a threshold, add a deny rule, narrow the scope. Predictable behavior earns autonomy.
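What tuning the daemon file might look like in practice. A hypothetical spec, expressed here as plain data; the role name, the threshold, and the deny rules are all invented for illustration:

```python
# Hypothetical daemon file contents. The team tunes these values in the
# repo like any other config: tighten the threshold to act sooner, add
# a deny rule to carve an action out of scope.
SPEC = {
    "role": "codebase-maintainer",
    "watch": {"dependency_age_days": 30},
    "deny": ["major_version_bump", "force_push"],
}

def allowed(action, spec=SPEC):
    """Deny rules are checked before the daemon takes any action."""
    return action not in spec["deny"]

print(allowed("patch_bump"))          # True
print(allowed("major_version_bump"))  # False
```

The point of keeping the spec this boring is predictability: a reviewer can read the deny list and know exactly what the daemon will never do, which is where the trust comes from.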
Reinforce each other. Clean PRs mean accurate project data. Triaged bugs mean targeted fixes. More daemons, better signal. Each daemon’s output is input for the others.
Eventually, you forget they’re running. That is the daemon working.
Where Daemons Fit in the Landscape
Daemons are not a replacement for coding agents. They fill the gap agents leave behind.
| Type | Scope | Initiated by |
|---|---|---|
| Local agent (Claude Code, Codex CLI) | Isolated | Human prompt |
| Cloud agent (Devin, Codex) | Shared | Human assignment |
| Daemon | Shared | Environment event |
Agents excel at novel work: building features, designing APIs, refactoring modules. The problem is different every time, so a human frames it. Daemons excel at ongoing work: the judgment is real, but the pattern is known and a daemon file can encode it once.
Use agents to build. Use daemons to maintain what you’ve built.
Connection to Existing Patterns
Daemons connect to several patterns in this knowledge base:
- 24/7 Development Strategy covers agents working during off-hours, but still human-initiated via ticket queues. Daemons remove the initiation step entirely.
- CI/CD Agent Patterns describes agents triggered by CI events. Daemons generalize this: any environment event, not just CI.
- Long-Running Agent Patterns covers shell, skills, and compaction for extended runs. Daemons are the logical next step: agents that never stop running.
- Quality Gates as Information Filters describes gates that filter signal from noise. Daemons can own and enforce those gates continuously.
- Learning Loops encodes problems into prevention. Daemons are the runtime that executes those prevention loops perpetually.
Key Takeaway
Keeping a codebase healthy is not a task you finish. It is an ongoing role. On most teams, nobody fills it. Daemons do, without adding to anyone’s workload. Agents create work. Daemons maintain it.

