Field Notes
Observations on how AI systems behave, where they break, and what to build instead.
What Breaks
Why Your AI Works in Dev and Breaks in Production
Why systems that seem stable during development fail under real-world conditions
When AI Gets It Wrong—and Sounds Right
How confident errors create invisible risk
What That Means
What Do You Mean by “Done”?
Why unclear completion criteria lead to unreliable outcomes
How AI Speeds the Spread of Zombie Ideas and Orphan Decisions
How incorrect or unsupported ideas persist and propagate
Why It Happens
The Invisible Work Holding AI Together
The hidden human effort required to keep AI outputs usable
You Don’t Have a Prompt Problem—You Have a Memory Problem
Why missing continuity—not prompting—is the real constraint
What to Build
Not Smarter — More Stable
Why stability—not intelligence—is the real goal for AI systems
What to Build—Applied
How to Audit Reasoning in AI Responses (A Practical Checklist)
What to look for when AI reasoning is not externally stored