The Agentic AI Playbook: Complete Series Guide

Most teams that struggle with agentic AI are not struggling with the tool. They are struggling with the absence of preparation: no shared context, no spec, no agreed domain model, no review discipline. The AI reflects whatever is already true about your process — clearly or noisily, depending on how much thinking happened before the prompting started.

This series is about that preparation. Seven posts, written from an architect’s perspective, covering the full arc from understanding what agentic AI actually is through to running multi-agent teams on complex features. Each post is self-contained, but they build on each other. The foundation posts matter more, not less, as the work gets more complex.


The Series

Part 1: This Isn’t Autocomplete
What agentic AI actually does — and why conflating it with Copilot or ChatGPT is the reason most engineers walk away frustrated after two weeks. Covers the core mental model: context window, tools, memory, skills, MCPs, hooks. Start here if you are new to agentic AI or unconvinced it is meaningfully different from what you have already tried.

Part 2: What Your AI Doesn’t Know (And How to Fix That)
The briefing document your AI needs before it touches your codebase. Covers CLAUDE.md — what it is, what to put in it, how it differs for greenfield vs brownfield work — plus memory, ADRs, and the feedback loop that makes your setup progressively smarter over time.
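To make the idea concrete before Part 2 goes into detail, here is a minimal sketch of what a project CLAUDE.md might contain. Every section name, command, and detail below is illustrative, not a prescribed template; Part 2 covers what actually belongs in yours.

```markdown
# Project context

## What this service does
Order management API for the EU storefront. Python 3.12, FastAPI, Postgres.

## Conventions
- Money values are integer minor units (cents), never floats.
- Changes to the public API contract require an ADR.

## Commands
- Run tests: `make test`
- Lint before committing: `make lint`

## Known pitfalls
- The legacy `orders_v1` table is read-only; write to `orders` instead.
```

The point is not the specific sections but that the file captures the things a new senior hire would need on day one: purpose, conventions, commands, and the traps that are not visible from the code alone.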

Part 3: Define Before You Design
Before any spec, any prompt, or any API design: agree what your core business concepts mean. This post explains what an ontology is, why agentic AI makes ambiguous domain models significantly more dangerous, and how to produce one in a few hours rather than a workshop series. The companion piece “Ontology in the Age of AI” goes deeper if you want it.

Part 4: Spec First, Always
The seven-step lifecycle that consistently produces good AI output: requirements, brainstorm, spec, review, plan, implement, verify. Covers the devil’s advocate pattern (making the AI argue against its own design before you commit to it), trade-off analysis as a standing rule, and when the spec overhead is and is not justified.

Part 5: How to Work With It Daily
The practical daily workflow: how to structure prompts as delegation rather than queries, when to iterate vs when to restart, the 80/20 review model that keeps your attention on the work that actually requires human judgement, and the memory habits that stop you repeating the same corrections every session.

Part 6: When One Agent Isn’t Enough
Multi-agent workflows for features that span multiple layers. Covers the two-phase model — a design team (architect, security agent, devil’s advocates) followed by an implementation team (backend, frontend, QA, pentesting, DevOps) — and what it actually means to have a full cross-functional team assembled and working on your feature simultaneously.

Part 7: What Goes Wrong
The predictable failures: starting without context or spec, the speed trap, rubber-stamp review, context window degradation, security as a preparation problem. Includes a practical symptom guide for the most common things that go visibly wrong mid-session. Almost every failure traces back to the same place.


Companion piece

Ontology in the Age of AI: The Foundation Most Architecture Teams Skip
A deeper treatment of domain ontology for architects and senior engineers. Covers the full rationale, a detailed worked example across Customer, Account, User, and Identifier, the five concrete benefits of ontology in AI-assisted delivery, and the failure modes that appear without it. Referenced in Part 3 but written to stand alone.


Where to start

Read in order if you are new to agentic AI or setting up a team from scratch. The foundation builds sequentially: context before spec, spec before implementation, single-agent discipline before multi-agent coordination.

If you are already using agentic AI and hitting specific problems, Parts 5 and 7 are the most immediately useful. Part 7 in particular is worth reading if results have been inconsistent or quality has dropped mid-session.

If you are an architect working on brownfield modernisation, Part 3 and the ontology companion piece are where the most leverage is.