A principle emerging from agentic coding: verification steps that pass should produce no output; failures should dump everything. Simple idea, but it matters more than you'd expect when an LLM is the one reading the output.
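A minimal sketch of that principle in Python (the wrapper and commands are illustrative, not from any particular tool):

```python
import subprocess
import sys

def run_check(cmd: list[str]) -> bool:
    """Run a verification step: silent on success, dump everything on failure."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode == 0:
        return True  # no output at all: nothing for the LLM to (mis)read
    # On failure, surface everything the model might need to diagnose it.
    print(f"FAILED: {' '.join(cmd)} (exit {result.returncode})", file=sys.stderr)
    print(result.stdout, file=sys.stderr)
    print(result.stderr, file=sys.stderr)
    return False

# A passing check prints nothing; a failing one dumps its full logs.
run_check([sys.executable, "-c", "print('build log noise'); raise SystemExit(1)"])
```

The payoff is that a clean run contributes zero tokens to the agent's context, while a failure hands over the complete evidence in one place.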
Jamon Holmgren's Night Shift workflow separates human thinking from agent execution — specs by day, autonomous implementation by night. The core insight: your time is the expensive resource, tokens are cheap. Start from a draft PR, not from zero.
Tried VoiceBox, a free open-source voice cloning app that runs locally. Cloned my voice in 30 seconds from a short audio sample — surprisingly good, even with a Scottish accent.
Claude Code has official LSP plugins for code intelligence. Easy to miss if you haven't gone looking.
Claude assumes my codebase and dependencies are always up to date. I'm experimenting with a new CLAUDE.md decision-making heuristic to nudge it to check versions first, before burning time on debugging cycles.
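A sketch of what such a heuristic could look like in CLAUDE.md (the wording is illustrative, not the actual rule):

```markdown
## Before debugging
- Do not assume dependencies are current. If behavior doesn't match the docs,
  first compare the installed version (`npm ls <pkg>`, `pip show <pkg>`)
  against the version the docs describe.
- Only start a debugging cycle once version drift has been ruled out.
```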
Daniel Griesser's custom sub-agent workflow for context management was ahead of its time — what he hand-rolled is now being shipped as first-class tooling. The core insight still stands: context is precious, and managing it intentionally is everything.
Matt Pocock argues your codebase is the biggest influence on AI output, not your prompt. His solution is deep modules from John Ousterhout's A Philosophy of Software Design. It maps perfectly to how CQRS already works — each service boundary is a deep module with commands and queries as its interface.
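A hedged sketch of that mapping in Python, using a hypothetical `OrderService` as the deep module: the interface is narrow (one command handler, one query handler), while validation and storage stay hidden behind it.

```python
from dataclasses import dataclass

@dataclass
class PlaceOrder:      # command: asks the module to change state
    order_id: str
    items: list[str]

@dataclass
class GetOrderStatus:  # query: asks the module a question
    order_id: str

class OrderService:
    """A 'deep module': small surface area, substantial hidden logic."""

    def __init__(self) -> None:
        self._orders: dict[str, list[str]] = {}  # storage detail, invisible to callers

    def handle(self, cmd: PlaceOrder) -> None:
        # Validation, persistence, and any event publishing live behind
        # this single entry point; callers never see the internals.
        if not cmd.items:
            raise ValueError("order must contain at least one item")
        self._orders[cmd.order_id] = cmd.items

    def query(self, q: GetOrderStatus) -> str:
        return "placed" if q.order_id in self._orders else "unknown"

svc = OrderService()
svc.handle(PlaceOrder("o1", ["widget"]))
```

For an AI agent, the narrow command/query surface is the whole contract: it can reason about the boundary without loading the implementation into context.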
Matt Pocock explains why subagents are the dominant pattern for coding agents, keeping work in the first 40-50% of the context window where models perform best.