The Third Pass
Two agents and a human. Three independent reviews. This is how code ships when the domain is too complex for any single brain.
Not every project needs three pairs of eyes. Most don’t. But when the business domain is complex enough that experienced humans miss edge cases — when the cost of a subtle bug is measured in compliance violations or financial discrepancies — you want coverage. Real coverage.
This is how I ship code when it matters.
Two Subscriptions, One Workflow
I run two AI subscriptions: Claude Max and ChatGPT Pro.
Claude Max gives me Claude Code — Opus 4.6 with maximum effort, my primary engineering weapon. It reads the repo, understands the architecture, builds features, creates pull requests, connects to Linear and GitHub via CLI. This is the builder.
ChatGPT Pro started as my go-to for things Claude doesn’t do yet: image generation, image editing, deep research on life topics — medical questions, travel planning, things where broad world knowledge matters more than code. But now it serves a second purpose: Codex with GPT-5.4 at maximum reasoning. The independent reviewer.
Two subscriptions. Two agents. Both fully utilized.
Claude Code: The Builder
My terminal alias:
alias c="claude --dangerously-skip-permissions --effort max"
One letter. Maximum firepower. No friction.
Global settings in ~/.claude/settings.json:
{
  "permissions": { "allow": ["*"], "deny": [] },
  "alwaysThinkingEnabled": true,
  "effortLevel": "max",
  "skipDangerousModePermissionPrompt": true
}
This is Claude Opus 4.6 with every guardrail removed. Maximum thinking. Maximum effort. Full filesystem access. No permission prompts interrupting the flow.
Every repo has a CLAUDE.md that gives Claude full context — conventions, architecture, API access, PR format, the works. One file, and the agent knows how to operate in that codebase. I wrote about this approach in Spec-Driven Agentic Development.
The CLAUDE.md also contains instructions for creating pull requests with full descriptions, using the gh CLI for all GitHub operations, and connecting to Linear via API for ticket context. When Claude Code finishes implementing, the PR is ready for review the moment it’s created — complete with what changed, why, and a link to the ticket.
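What that section of a CLAUDE.md might look like — a hypothetical excerpt, not my actual file (the section name and ticket prefix are invented for illustration):

```markdown
## Pull requests
- Use the `gh` CLI for all GitHub operations: `gh pr create`, `gh pr view`, `gh pr comment`.
- Every PR body states what changed, why, and links the Linear ticket (e.g. ENG-123).
- Pull ticket context from the Linear API before writing the description.
- Run the test suite before opening the PR; note any skipped tests in the body.
```

The point is that the agent never has to ask how to ship: the conventions travel with the repo.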
Codex: The Reviewer
Configuration in ~/.codex/config.toml:
model = "gpt-5.4"
model_reasoning_effort = "xhigh"
approval_policy = "never"
sandbox_mode = "danger-full-access"
GPT-5.4 at maximum reasoning effort. Full access. No approval prompts. A completely different model architecture reviewing the same code.
The only additional setup per repo: a thin AGENTS.md file:
Read `./CLAUDE.md` before doing any work in this repository.
`CLAUDE.md` is the canonical source of truth for all repository instructions.
Two lines. That’s it. Codex reads AGENTS.md, which redirects it to CLAUDE.md, and now it has the same context as Claude Code — repo conventions, PR format, API connections, everything. Zero additional configuration. The spec that runs one agent runs them all.
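Bootstrapping a new repo is a one-liner. A sketch, run from the repo root:

```shell
# Write the two-line AGENTS.md that redirects Codex to CLAUDE.md.
cat > AGENTS.md <<'EOF'
Read `./CLAUDE.md` before doing any work in this repository.
`CLAUDE.md` is the canonical source of truth for all repository instructions.
EOF
```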
The Workflow

The process, step by step:
1. I provide the requirement. A Linear ticket, a prompt, a problem description. The human language is the programming language.
2. Claude Code implements. Reads the spec, writes the code, runs the tests, creates the PR with full description and ticket link.
3. I ask Claude Code to generate a review prompt for Codex — a complete handoff that includes the PR description, ticket context, branch name, and what to look for.
4. Codex receives this prompt and reviews independently. Different model, different architecture, different blind spots. It reads the PR, the commits, the ticket, and leaves comments directly on GitHub.
5. I do the third pass. Two AI reviews are already done. I read both sets of comments, check the code myself, and make the final call.
6. We iterate until it ships.
The key detail in step 3: Claude Code generates the prompt for Codex. This means Codex receives full context — the PR description, the ticket requirements, the branch with all commits, and Claude’s own assessment of the work. Codex doesn’t start cold. It starts with everything.
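A handoff prompt generated in step 3 might look something like this — a hypothetical sketch, where the PR number, branch, ticket, and feature are invented placeholders:

```markdown
# Review request for Codex

- PR: #214 — "Round ledger amounts at the aggregation boundary"
- Branch: `fix/ledger-rounding`
- Ticket: ENG-482

## What changed and why
Rounding moved from per-line-item to the aggregation step to avoid
cumulative drift in invoice totals.

## What to scrutinize
- Rounding-mode consistency across currencies.
- Edge cases: zero-amount lines, refunds, multi-currency invoices.

Leave findings as review comments on the PR via `gh`.
```

Because Claude Code writes this after finishing the work, it can flag exactly the areas it is least certain about — which is where a second architecture pays for itself.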
The Orchestrator’s Job
This is the part nobody talks about when they demo AI coding tools.
The hard work isn’t “hey Claude, build this feature.” That’s the easy part. The hard work is orchestration. Knowing which projects need dual-agent review and which don’t. Crafting prompts that transfer full context between agents without losing signal. Understanding when Codex’s review contradicts Claude’s implementation — and deciding who’s right. Recognizing blind spots that both agents share.
This isn’t “AI writes my code for me.” This is a fundamentally different way of working — one where the architect’s role shifts from writing every line to orchestrating multiple intelligences, each seeing the problem from a different angle. The human becomes the conductor, not the soloist.
And the workflow isn’t fixed. Sometimes Codex reviews Claude’s PR. Sometimes Claude reviews Codex’s work. Sometimes I ask both agents to independently solve the same problem and compare approaches. The structure adapts to what the work demands. The only constant is that no single brain — human or AI — gets the final word alone.
Not Every Project
I don’t run this workflow on every repo. A simple CRUD feature doesn’t need three pairs of eyes. A config change doesn’t need independent review from two different AI architectures.
But when the business domain is complex — financial transactions, compliance rules, healthcare logic, intricate scheduling systems — I want the coverage. I want one agent building with full context and another agent reviewing with fresh eyes and a completely different cognitive architecture. And then I want my own pass on top.
When the domain is clear and the implementation is straightforward, Claude Code ships solo just fine. Eight interpreters across eight paradigms from a single spec. No second opinion needed.
But when it matters? Three passes. Three perspectives. Then it ships.