Last login: Mon Apr 13 16:41:12 on ttys002
belief system ready.
type 'ls' to inspect the directory.
type 'cat belief.01' to open the first belief.
type 'help' for the supported commands.
commands: ls, cat belief.01, cat FINAL_DIRECTIVE.md, cat known_limitations.md, next, prev, clear
shortcuts: up/down history, left/right paginate, ? help, esc clear
belief.01
belief.02
belief.03
belief.04
belief.05
FINAL_DIRECTIVE.md
known_limitations.md
Supported commands:
ls
cat belief.01 ... cat belief.05
cat FINAL_DIRECTIVE.md
cat known_limitations.md
next / prev
help
clear
[ BOSS BELIEF / THE META-TRUTH ]
AI development is an optimisation problem under constraints. Not a programming problem.
every belief in this directory is that one sentence, applied five different ways.
[ SIDE QUEST / WHERE THE HARNESS LEAKS ]
limit.01
Scoring functions can be wrong
When a loop optimises against the wrong metric, the agent gets better at the wrong thing. I don't always catch it on the first run.
limit.02
The harness can become its own rabbit hole
Scaffolding is load-bearing. It also compounds complexity. Knowing when to stop building the factory and start building the product is a judgment call I keep recalibrating.
limit.03
Constraints catch bugs, not bad decisions
Types and lint rules prevent a class of errors. They do nothing for a flawed product bet, a wrong abstraction, or a misread of what the user actually needs. Taste still matters.
limit.04
The model keeps moving
Patterns that worked on a previous model iteration don't always transfer. Everything here is a snapshot of what I currently believe, not a fixed doctrine.
[ BELIEF 01 / STOCHASTIC ]
AI is stochastic, not intelligent
> It doesn't know. It samples. Quality is inconsistent across runs.
without the constraint
- Trust a single pass
- Ship what the first prompt returns
- Variance you can't see
with the constraint
+ Assume outputs are wrong by default
+ Wrap every generation in tests, invariants, and evaluators
+ Re-run and diff until the harness is green
how this shows up in my work
· Named invariants like INV-BILLING-008 that every run must satisfy
· Property-based tests on generated code
· Pre-commit gates that block regressions before they reach main
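The "assume outputs are wrong by default" stance can be sketched as a retry loop around a stochastic generator. This is a minimal illustration, not the actual harness: `Invariant`, `generateChecked`, and the invariant names are hypothetical stand-ins.

```typescript
// Sketch: wrap a stochastic generator in named invariant checks and
// re-run until every check passes or the attempt budget runs out.
type Invariant<T> = { name: string; check: (out: T) => boolean };

function generateChecked<T>(
  generate: () => T,            // stochastic producer (e.g. a model call)
  invariants: Invariant<T>[],   // named checks every run must satisfy
  maxAttempts = 5,
): T {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const out = generate();
    const failed = invariants.filter((inv) => !inv.check(out));
    if (failed.length === 0) return out; // harness is green
    // in a real harness, failed.map(f => f.name) would be fed back as signal
  }
  throw new Error("no output satisfied all invariants within budget");
}
```

The point is structural: trust comes from the green harness, never from any single pass.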
use 'next' or 'prev' to paginate.
[ BELIEF 02 / CONSTRAINTS ]
Constraints create reliability
> The tighter the constraints, the smaller the solution space, the more consistent the output.
without the constraint
- Free-form prompts
- Unbounded search space
- Works once, breaks twice
with the constraint
+ Type systems, schemas, lint rules, invariants
+ Shrink the space the model can explore
+ Consistency by construction
how this shows up in my work
· TypeScript strict mode and Zod schemas at every boundary
· Custom ESLint rules that encode project conventions
· Declarative constraints over imperative instructions
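"Consistency by construction" at a boundary looks roughly like this. In the real setup it would be a Zod schema; this sketch is hand-rolled to stay dependency-free, and `Invoice` and its fields are purely illustrative.

```typescript
// Sketch: a boundary parser that rejects anything outside the expected
// shape and drops unknown fields, shrinking the space downstream code
// (or a model) can explore.
type Invoice = { id: string; amountCents: number };

function parseInvoice(input: unknown): Invoice {
  if (typeof input !== "object" || input === null) throw new Error("not an object");
  const rec = input as Record<string, unknown>;
  if (typeof rec.id !== "string" || rec.id.length === 0) throw new Error("bad id");
  if (typeof rec.amountCents !== "number" || !Number.isInteger(rec.amountCents) || rec.amountCents < 0)
    throw new Error("bad amountCents");
  return { id: rec.id, amountCents: rec.amountCents }; // only known fields survive
}
```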
use 'next' or 'prev' to paginate.
[ BELIEF 03 / FEEDBACK ]
Feedback loops beat one-shot prompting
> Quality comes from iteration, not from a better first prompt.
without the constraint
- Generate once, hope
- No way to score the output
- Debugging is vibes
with the constraint
+ Generate, evaluate, correct, repeat
+ A closed-loop harness with a real scoring function
+ Every run produces signal you can read
how this shows up in my work
· The RALPH loop running specs end-to-end against named invariants
· Actor and critic pairs. One agent writes, one reviews
· Scheduled loops against invariants so bugs get fixed overnight
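The generate-evaluate-correct cycle reduces to a small closed loop. This is a schematic, not the RALPH loop itself: `score`, `revise`, and `target` are hypothetical parameters standing in for a real scoring function and a real correction step.

```typescript
// Sketch: iterate draft -> score -> revise until the score clears a bar.
// Every iteration produces a readable number, not vibes.
function refine<T>(
  initial: T,
  score: (draft: T) => number,        // real scoring function, higher is better
  revise: (draft: T, s: number) => T, // correction step fed by the score
  target: number,
  maxIters = 10,
): { draft: T; score: number; iters: number } {
  let draft = initial;
  for (let i = 1; i <= maxIters; i++) {
    const s = score(draft);
    if (s >= target) return { draft, score: s, iters: i };
    draft = revise(draft, s); // signal from this run shapes the next one
  }
  return { draft, score: score(draft), iters: maxIters };
}
```

An actor/critic pair is the same loop with `revise` and `score` owned by two different agents.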
use 'next' or 'prev' to paginate.
[ BELIEF 04 / ENVIRONMENT ]
Environment design beats prompt engineering
> The system around the model determines the outcome. Not the prompt.
without the constraint
- Tuning wording forever
- Flaky state, flaky results
- Every run starts from zero
with the constraint
+ Build the harness
+ Isolate state
+ Define inputs and outputs
+ Control execution
how this shows up in my work
· Custom git worktree scripts that spin up a fresh API worker and web UI per branch with md5-hashed ports
· Per-worktree Postgres schemas on a single shared container. No ten-container local stack
· Simple bcrypt auth so the test harness has no external dependencies
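The "md5-hashed ports" trick can be sketched in a few lines: hash the branch name and map it into a port range, so every worktree gets a stable port without coordination. `portForBranch` and the `base`/`span` values are illustrative assumptions, not the actual scripts.

```typescript
import { createHash } from "node:crypto";

// Sketch: derive a stable per-branch dev-server port from an md5 hash.
// Same branch always yields the same port; different branches rarely collide.
function portForBranch(branch: string, base = 3000, span = 2000): number {
  const digest = createHash("md5").update(branch).digest("hex");
  const n = parseInt(digest.slice(0, 4), 16); // first 16 bits: 0..65535
  return base + (n % span);                   // e.g. 3000..4999
}
```

The design choice is determinism over registration: no port file, no lock, no shared state between worktrees.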
use 'next' or 'prev' to paginate.
[ BELIEF 05 / THROUGHPUT ]
Throughput scales through parallel, isolated systems
> You don't scale AI by making it smarter. You scale it by running more of it safely.
without the constraint
- One agent, one branch, one database
- Serial work, shared state, flaky tests
with the constraint
+ Git worktrees per task
+ Isolated schemas per worktree
+ Parallel agents that can't collide
how this shows up in my work
· Factory functions (createUser, createOrg, signInUser, createData) make every test self-contained
· Global teardown per run. Zero flake, infinite tests
· A new Postgres table triggers a new factory, so parallelism stays cheap
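The factory pattern above can be sketched as follows. The names mirror the factories mentioned (`createOrg`, `createUser`), but the in-memory records and field shapes are hypothetical; the real versions write to per-worktree Postgres schemas.

```typescript
// Sketch: test-data factories that mint unique, self-contained records,
// so parallel tests never collide on shared fixtures.
let seq = 0;
const nextId = (prefix: string) => `${prefix}-${++seq}`;

type Org = { id: string; name: string };
type User = { id: string; email: string; orgId: string };

function createOrg(overrides: Partial<Org> = {}): Org {
  return { id: nextId("org"), name: "Test Org", ...overrides };
}

function createUser(org: Org = createOrg(), overrides: Partial<User> = {}): User {
  return {
    id: nextId("user"),
    email: `${nextId("u")}@example.test`,
    orgId: org.id,
    ...overrides,
  };
}
```

Each test builds exactly the data it needs and nothing it shares, which is what keeps parallelism cheap.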
use 'next' or 'prev' to paginate.