The Six Pillars
The six architectural shifts that separate an AI-first platform from a domain system with AI features bolted on. Not invented — noticed, after enough repetitions that the pattern was no longer deniable.
The framing that took me too long
For a while I thought the difference between the platforms that kept improving and the ones that plateaued was execution — better engineering, cleaner code, tighter loops. It wasn't. Or at least, that wasn't what made the difference when I compared the platforms honestly.
The difference was architectural. The platforms that compounded had all made the same six shifts — usually quietly, often by accident, sometimes because the first thing I'd built stopped scaling and I'd had to rewrite it. The platforms that plateaued were the ones where I'd kept reaching for familiar patterns from web application work and trying to make them fit.
The shortest version of what I now believe: most AI platforms today are domain systems with AI features. The target is an AI system with domain projections. The six pillars are what changes when you take that sentence seriously.
Pillar 1 — Event sourcing replaces CRUD
Instead of storing the current state and overwriting it on every change, store every action as an immutable event. State becomes a projection over the stream.
I wrote about this at length in the Event Sourcing module — the short version is that CRUD erases the delta between what the AI proposed and what the human chose, which is exactly the signal an AI platform needs to learn from. Without events, you end up rebuilding the training corpus from logs that weren't designed to be a training corpus. I've tried that. It's awful.
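To make the contrast concrete, here is a minimal sketch of the pattern in Python — the store, event kinds, and payload shapes are all illustrative, not a real API. The point is the shape: the log is append-only, and "current state" is just a fold over it, so the proposed-versus-chosen delta survives.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass(frozen=True)
class Event:
    kind: str                  # e.g. "proposal.made", "proposal.modified"
    payload: dict[str, Any]
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class EventStore:
    """Append-only log; state is always derived, never overwritten."""
    def __init__(self) -> None:
        self.events: list[Event] = []

    def append(self, event: Event) -> None:
        self.events.append(event)

    def project(self, reducer, initial):
        """Fold the full stream into a current-state view."""
        state = initial
        for e in self.events:
            state = reducer(state, e)
        return state

# The delta CRUD erases: the AI proposed 100, the human settled on 90.
store = EventStore()
store.append(Event("proposal.made", {"field": "price", "value": 100}))
store.append(Event("proposal.modified", {"field": "price", "value": 90}))

def latest_values(state, e):
    return {**state, e.payload["field"]: e.payload["value"]}

current = store.project(latest_values, {})
```

A CRUD table would hold only the final 90; the stream holds both values, which is the training signal.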
Pillar 2 — Feedback as the primary interaction
The primary user interaction is not data entry. It's deciding on AI proposals — accept, modify, or reject. This isn't a UX choice; it's a thermodynamic necessity. I wrote about the necessity in the Demon Principle and about the shape of the loop in Triple Output.
The practical consequence is that the UI budget shifts. Forms and CRUD screens shrink. The feedback surface — where the human reviews what the AI proposed — is where the design attention now sits. On the platforms where I got this right, the feedback surface is the most polished thing on the screen. On the ones where I got it wrong, it's an afterthought bolted onto a traditional admin UI.
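A sketch of what the feedback surface actually records — event names and payloads are hypothetical. On accept, the proposal and the final value coincide; on modify, the human's correction sits next to what the AI proposed, which is the pair worth learning from.

```python
from enum import Enum
from typing import Any

class Decision(str, Enum):
    ACCEPT = "accept"
    MODIFY = "modify"
    REJECT = "reject"

def record_decision(log: list[dict], proposal: dict[str, Any],
                    decision: Decision, final: Any = None) -> dict:
    """Every review becomes a training example: what was proposed,
    what was decided, and (on modify) the correction."""
    event = {
        "kind": f"proposal.{decision.value}",
        "proposed": proposal["value"],
        # Accept keeps the proposal; modify substitutes the human's edit.
        "final": proposal["value"] if decision is Decision.ACCEPT else final,
    }
    log.append(event)
    return event

log: list[dict] = []
record_decision(log, {"value": "Net 30"}, Decision.MODIFY, final="Net 45")
```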
Pillar 3 — Skills over controllers
Agent capabilities live in markdown, not in code. Adding a capability is writing a recipe, not shipping a service. The meta-agent can read and edit skills the way a developer reads and edits code. I wrote about this in the Skills over Controllers module; the line I've learned to hold is that if a skill needs code to work, it isn't a skill — it's a harness primitive, and it should live there.
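As a sketch of what "capabilities live in markdown" can look like in practice — the frontmatter fields, skill name, and recipe are all invented for illustration. The harness does the small amount of code (loading and routing); the capability itself is prose a meta-agent can read and edit.

```python
import re

SKILL = """\
---
name: triage-invoice
triggers: [email.received]
---
1. Extract vendor, amount, due date from the attachment.
2. Match against open purchase orders.
3. Propose a posting; never post without human review.
"""

def load_skill(text: str) -> dict:
    """Split a markdown skill file into frontmatter metadata and recipe body."""
    m = re.match(r"---\n(.*?)\n---\n(.*)", text, re.S)
    meta = {}
    for line in m.group(1).splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return {"meta": meta, "recipe": m.group(2).strip()}

skill = load_skill(SKILL)
```

Adding a capability here means writing another such file — no endpoint, no deploy.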
Pillar 4 — Context engineering
The primary design discipline isn't data modelling. It's context assembly. For any decision the AI has to make, the real question is: what information should be in the context window, how should it be ranked, how should it be compressed to fit? I arrived at this the hard way, through a period where I was spending as much time on prompt engineering as on anything else and the results were still unreliable — because the model wasn't the problem, the context it had been given was.
Three tiers turn out to be the shape that works: hot (session state, <10ms), warm (vectorised history, summaries, <100ms), and cold (full event store, <500ms). Context assembly pulls from all three, ranks, and compresses. I'll write the full module on this when I get to it — it deserves its own room.
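The assembly step can be sketched as a global rank-and-pack over candidates from all three tiers — relevance scores, token counts, and snippet text here are made up, and real systems would compress rather than simply drop what doesn't fit.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    relevance: float   # from each tier's own ranking (hypothetical scores)
    tokens: int

def assemble_context(hot: list[Snippet], warm: list[Snippet],
                     cold: list[Snippet], budget_tokens: int) -> list[Snippet]:
    """Pull candidates from all three tiers, rank globally, pack to budget."""
    candidates = sorted(hot + warm + cold,
                        key=lambda s: s.relevance, reverse=True)
    packed, used = [], 0
    for s in candidates:
        if used + s.tokens <= budget_tokens:
            packed.append(s)
            used += s.tokens
    return packed

hot  = [Snippet("current session state", 0.9, 50)]
warm = [Snippet("summary of a similar past case", 0.7, 80)]
cold = [Snippet("full historical event detail", 0.4, 200)]
context = assemble_context(hot, warm, cold, budget_tokens=150)
```

The cold snippet loses here not because it's wrong but because it's expensive — which is exactly the trade the budget forces you to make explicit.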
Pillar 5 — Projections over features
Features have linear cost — each one is a new endpoint, a new table, a new deploy cycle. Projections have logarithmic cost — each new capability is a new query over the same event stream, and every new event type enriches every existing projection for free.
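A minimal illustration of the cost difference — event shapes and skill names are invented. Both "capabilities" below are just functions over the same stream; the second one arrives with no new table, endpoint, or deploy.

```python
# One stream, two capabilities. Adding the second is a query, not a feature.
events = [
    {"kind": "proposal.accepted", "skill": "triage-invoice"},
    {"kind": "proposal.rejected", "skill": "triage-invoice"},
    {"kind": "proposal.accepted", "skill": "draft-reply"},
]

def acceptance_rate_by_skill(events: list[dict]) -> dict[str, float]:
    """Capability one: how often each skill's proposals survive review."""
    totals: dict[str, int] = {}
    accepted: dict[str, int] = {}
    for e in events:
        if e["kind"].startswith("proposal."):
            skill = e["skill"]
            totals[skill] = totals.get(skill, 0) + 1
            if e["kind"] == "proposal.accepted":
                accepted[skill] = accepted.get(skill, 0) + 1
    return {s: accepted.get(s, 0) / n for s, n in totals.items()}

def activity_feed(events: list[dict]) -> list[str]:
    """Capability two: same stream, zero new infrastructure."""
    return [f'{e["skill"]}: {e["kind"].split(".")[1]}' for e in events]
```

And because both read the raw stream, a new event kind (say, `proposal.escalated`) enriches the feed immediately and the rate calculation as soon as you decide how to count it.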
This took me the longest to trust, and I think it's the single most valuable shift. The first time I replaced what would have been a four-week feature with a projection I wrote in an afternoon — and it was better because it used data the feature couldn't have seen — I stopped arguing with the pattern. A separate module for this one too, in due course.
Pillar 6 — Boundary skills
The platform's reach extends beyond its own walls. Intelligence flows in (email arrives, forms get submitted, webhooks land) and out (documents generated, notifications sent, reports filed). Every boundary crossing is an event, which means every boundary crossing is training data.
Most of my earlier platforms treated the boundary as something to shove data through as quickly as possible. The pattern that worked better was treating each boundary as a skill in its own right — a recipe for how intelligence transforms when it crosses the edge — with the same feedback loop as any other skill. Another module to follow.
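The mechanical core of "every boundary crossing is an event" is small — the channel names and payloads below are illustrative. What matters is that inbound and outbound crossings land in the same log as everything else, so they get the same feedback loop.

```python
def cross_boundary(store: list, direction: str, channel: str,
                   payload: dict) -> dict:
    """Log every crossing as an event, so every crossing is training data."""
    event = {
        "kind": f"boundary.{direction}",   # "in" or "out"
        "channel": channel,                # e.g. "email", "webhook", "report"
        "payload": payload,
    }
    store.append(event)
    return event

store: list = []
cross_boundary(store, "in", "email", {"subject": "incoming invoice"})
cross_boundary(store, "out", "notification", {"to": "ops", "text": "posted"})
```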
What the pillars change economically
The pillars are not aesthetic preferences. They change the cost structure of the platform. The cost to add a capability falls from days to hours. The cost to add a whole vertical — a new domain, a new tenant type — falls from weeks to configuration. The marginal cost of the Nth capability goes from linear to logarithmic. The value of the Nth user interaction goes from constant to increasing, because traces compound.
None of the pillars are interesting in isolation. What makes them load-bearing is that they compose — event sourcing is what makes feedback-as-interaction tractable, feedback is what makes projections meaningful, skills are what make context engineering addressable, and so on. Remove any one and the others degrade.
The rest of this playbook is the detail of each pillar. If you only read one other module, start with whichever one describes the thing you've been trying to force into a traditional shape and wondering why it keeps fighting you.
Want to think through how this lands in your project? Tell kr8 what you’re working with.