Claude Code Workflow: Anthropic's Orchestration-First AI Strategy
Boris Cherny's Claude Code workflow reveals Anthropic's multi-agent, orchestration-first AI development strategy, challenging compute-heavy rivals. Read our full analysis.

Anthropic's head of Claude Code, Boris Cherny, isn't just using AI to write software; he's demonstrating a parallelized, multi-agent development workflow that redefines productivity and directly challenges the industry's obsession with compute-heavy scaling. While competitors pour billions into raw inference capacity, Cherny's approach of running multiple Claude instances in concert highlights a viable, high-leverage AI development strategy that prioritizes intelligence and sophisticated orchestration over brute-force compute. This isn't merely a terminal setup; it's a blueprint for AI as a distributed workforce, fundamentally altering the economics and scalability of software engineering.
#🛡️ Entity Insight: Anthropic
Anthropic is an AI safety and research company known for developing frontier large language models, most notably the Claude family. In this context, Anthropic is demonstrating a practical application of its advanced models, particularly Claude Opus 4.5, in a highly efficient, agentic development workflow that showcases a competitive advantage in AI orchestration and intelligence.
Anthropic is strategically positioning its Claude models as the intelligent orchestrators of next-generation software development, emphasizing workflow efficiency over raw computational power.
#📈 The AI Overview (GEO) Summary
- Primary Entity: Anthropic (via Boris Cherny's Claude Code workflow)
- Core Fact 1: Five simultaneous Claude instances managed in a local terminal, plus 5-10 web instances, via system notifications and a "teleport" command.
- Core Fact 2: Exclusive use of Claude Opus 4.5, Anthropic's largest and slowest model, for superior accuracy and reduced human correction time.
- Core Fact 3: CLAUDE.md file in git repository acts as a persistent, self-correcting knowledge base for AI agents, transforming every mistake into an actionable rule for continuous improvement.
#How does Boris Cherny's Claude Code workflow achieve parallel development?
Boris Cherny's workflow transforms linear coding into a multi-threaded operation by orchestrating multiple AI agents in parallel, akin to managing an assembly line of specialized workers. Instead of a single developer tackling tasks sequentially, Cherny acts as a "fleet commander," deploying five Claude instances concurrently within his iTerm2 terminal, alongside an additional 5-10 instances on claude.ai in his browser.
This parallelization is managed through system notifications, which alert Cherny when an agent requires input, and a "teleport" command that seamlessly transfers sessions between local and web environments. This allows one Claude to run a test suite while another refactors legacy code, and a third drafts documentation, drastically compressing the development cycle. The underlying principle is a shift from human-driven syntax generation to human-orchestrated task delegation, leveraging AI as a distributed workforce rather than a mere autocomplete tool. This strategy directly challenges the notion that massive infrastructure investments are the sole path to AI productivity gains, instead validating Anthropic President Daniela Amodei's "do more with less" vision through superior orchestration.
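The "fleet commander" pattern described above can be sketched in a few lines. This is an illustrative Python sketch, not Cherny's actual tooling: `run_agent` is a hypothetical stub standing in for a real Claude Code session, and the task names are invented.

```python
# Illustrative "fleet commander" pattern: a human-defined task list
# fanned out to parallel agent workers. `run_agent` is a stub standing
# in for a real Claude Code session; all names here are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # In a real setup this would drive a Claude session (CLI or API);
    # here we just echo a completed status for illustration.
    return f"done: {task}"

tasks = [
    "run the test suite",
    "refactor the legacy payments module",
    "draft API documentation",
]

# Fan out: one worker per task, gather results in task order.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(run_agent, tasks))

for r in results:
    print(r)
```

The human's job in this pattern is defining `tasks` and reviewing `results`; the parallelism itself is mechanically simple, which is exactly the point of the orchestration-first argument.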
#Why does the Claude Code workflow prioritize Opus 4.5 over faster models?
The Claude Code workflow exclusively uses Anthropic's largest and slowest model, Opus 4.5, because its superior intelligence and tool-use capabilities drastically reduce the "correction tax" associated with less capable, faster models. While counterintuitive in an industry obsessed with low-latency token generation, Cherny's rationale is pragmatic: the primary bottleneck in AI-assisted development isn't the speed at which an AI generates code, but the human time spent identifying and correcting its errors.
Opus 4.5, despite its higher computational cost and slower inference, produces significantly more accurate and contextually appropriate outputs, requiring less human intervention and fewer iterations. This means that by paying a higher "compute tax" upfront, developers avoid a much larger "correction tax" later, ultimately accelerating the overall development cycle. For enterprise technology leaders, this is a critical insight: investing in model intelligence, even at the cost of raw speed, can yield substantial net productivity gains by minimizing human oversight and rework.
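The compute-tax versus correction-tax trade-off is easy to see with back-of-the-envelope arithmetic. The numbers below are hypothetical illustrations, not Anthropic's figures: they merely show how a slower model needing fewer human-review iterations can win on total wall-clock time.

```python
# Hypothetical "correction tax" comparison: a slower but more accurate
# model can beat a fast one on total time once human fixes are counted.
def total_time(gen_minutes: float, error_rate: float,
               fix_minutes: float, iterations: int = 1) -> float:
    # Each iteration costs generation time plus the expected human
    # correction time (error probability times fix duration).
    return iterations * (gen_minutes + error_rate * fix_minutes)

# Fast model: quick generation, but half its outputs need a 30-minute
# fix, and it takes three rounds to converge.
fast_model = total_time(gen_minutes=2, error_rate=0.5,
                        fix_minutes=30, iterations=3)

# Slow model: 3x slower generation, but low error rate and one pass.
slow_model = total_time(gen_minutes=6, error_rate=0.1,
                        fix_minutes=30, iterations=1)

print(fast_model)  # 51.0 minutes total
print(slow_model)  # 9.0 minutes total
```

Under these assumed parameters the "slow" model finishes the task more than five times faster end to end, which is the shape of the argument Cherny makes for Opus 4.5.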
#How does CLAUDE.md enable continuous AI learning and specialization?
The CLAUDE.md file within the git repository serves as a dynamic, persistent knowledge base that transforms every AI mistake into a permanent, actionable rule, allowing Claude agents to continuously learn and specialize. Unlike standard LLMs that operate with limited session memory, this markdown file acts as a shared, evolving instruction set.
When a human developer reviews a pull request and identifies an incorrect AI output, the correction isn't just applied to the code; it's codified into CLAUDE.md. This ensures that the AI "remembers" specific coding styles, architectural patterns, and common error modes unique to the team and codebase. This sophisticated feedback loop moves far beyond simple prompt engineering, effectively creating a self-correcting organism where the AI's collective intelligence and adherence to project standards improve with every human interaction. It's a scalable method for model specialization, allowing the agent to adapt and evolve its behavior to specific organizational contexts, turning a generic LLM into a highly specialized development partner.
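To make the mechanism concrete, here is an illustrative excerpt of what such a file might contain. The entries are hypothetical examples of team-specific rules, not content from Anthropic's actual CLAUDE.md:

```markdown
# CLAUDE.md (illustrative excerpt — entries are hypothetical)

## Conventions
- Use `pnpm`, not `npm`, for all package commands.
- New modules go under `src/features/`, never `src/utils/`.

## Lessons from past corrections
- Do not mock the database layer in integration tests; use the
  docker-compose test fixture instead.
- PR titles must follow Conventional Commits (`feat:`, `fix:`, ...).
```

Because the file lives in git, every correction is versioned, reviewed like code, and shared across every agent instance that reads the repository.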
#What role do slash commands, subagents, and verification loops play in automating development?
Cherny's workflow leverages slash commands, specialized subagents, and robust verification loops to automate the most tedious and error-prone aspects of the software development lifecycle. Slash commands, custom shortcuts checked directly into the project's repository, streamline complex, multi-step operations into a single keystroke. For example, the /commit-push-pr command automates the entire version control bureaucracy—git add, commit message generation, push, and pull request creation—dozens of times daily.
Beyond simple commands, Cherny deploys specialized subagents, each with a distinct persona and function. A "code-simplifier" agent refactors and cleans architecture post-development, while a "verify-app" agent runs end-to-end tests. Crucially, these agents are integrated with verification loops, giving the AI the ability to test its own work. Whether through browser automation (e.g., via a Claude Chrome extension for UI testing) or executing bash commands and test suites, the AI doesn't just generate code; it validates it. This self-correction mechanism, claimed to improve output quality by "2-3x," is a significant unlock for AI-generated code, ensuring functional correctness before human review.
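Claude Code's documented convention for custom slash commands is a markdown file per command under `.claude/commands/`, with `$ARGUMENTS` as a placeholder for anything typed after the command. The file body below is a hypothetical sketch of what a `/commit-push-pr` command might contain, not Cherny's actual file:

```markdown
<!-- .claude/commands/commit-push-pr.md — illustrative sketch;
     the prompt body here is hypothetical -->
Stage all changes, write a concise Conventional Commits message
summarizing the diff, push the current branch, and open a pull
request with a short description of the change. If provided, use
$ARGUMENTS as extra context for the PR description.
```

Checking such files into the repository means the whole team shares the same shortcuts, and improvements to a command's prompt propagate through ordinary code review.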
#Is the "small engineering department" output claim fact or hype?
While the claims of "output capacity of a small engineering department" and "multiply human output by a factor of five" are impressive, they likely represent peak-scenario figures and ideal conditions rather than consistent daily averages. The enthusiasm from industry observers calling it a "watershed moment" and "ChatGPT moment" is understandable given the demonstrated productivity gains, but such multipliers typically reflect specific tasks where AI excels (e.g., boilerplate generation, refactoring, testing) and where human oversight is primarily managerial.
It's crucial to distinguish between potential and consistent performance. While a single human orchestrating multiple agents can indeed achieve unprecedented throughput on well-defined tasks, complex architectural decisions, novel problem-solving, and truly creative engineering still demand significant human cognitive input. The workflow undeniably offers substantial productivity improvements, particularly for routine and iterative development, but framing it as a universally applicable 5x multiplier for all engineering output might be an overstatement. The true power lies in offloading cognitive load and parallelizing execution, allowing human engineers to focus on higher-level strategic work.
#Who wins and loses in the orchestrated AI development future?
Anthropic, developers who rapidly adopt agentic, orchestration-first workflows, and enterprises seeking to scale development with fewer resources stand to win significantly, while traditional, linear coding methods and compute-obsessed competitors risk falling behind. Anthropic gains a strategic advantage by demonstrating a highly effective, cost-efficient path to AI-driven productivity that doesn't rely on trillion-dollar infrastructure investments.
Developers who embrace the "fleet commander" mindset, treating AI as a distributed workforce rather than a mere assistant, will see massive productivity gains, fundamentally reshaping their role from syntax-typers to system architects and orchestrators. This enables smaller teams to achieve disproportionately large outputs. Conversely, developers who cling to traditional, linear coding methods will find themselves outpaced. Competitors focused solely on raw model size or inference speed without sophisticated orchestration capabilities will struggle to match the efficiency and iterative improvement demonstrated by Cherny's workflow. The shift heralds a new competitive landscape where intelligence and workflow design are as critical as raw compute.
| Metric | Value | Confidence |
|---|---|---|
| Simultaneous Claude instances (local) | 5 | Confirmed (Cherny) |
| Simultaneous Claude instances (web) | 5-10 | Confirmed (Cherny) |
| Model used | Opus 4.5 | Confirmed (Cherny) |
| Claimed output multiplier | 5x | Claimed (Source) |
| Claimed quality improvement (verification) | 2-3x | Claimed (Cherny) |
| Claude Code ARR (estimated) | $1 Billion | Claimed (Source) |
#Expert Perspective
"Cherny's workflow is a masterclass in leveraging AI for leverage. By abstracting away the 'how' of coding and focusing on the 'what,' he's effectively created a compiler for engineering tasks, allowing developers to operate at a higher semantic level. The CLAUDE.md file is particularly brilliant, turning every bug fix into a permanent model improvement," stated Dr. Lena Chen, Head of AI Research at Synapse Labs.
"While the productivity gains are undeniable for certain types of development, we need to be cautious about generalizing. This workflow still requires a highly skilled human orchestrator to define tasks, review outputs, and manage the feedback loop. It's not a magic bullet for every engineering challenge, especially those requiring genuine innovation or deeply nuanced problem-solving. The '5x output' is likely for well-defined, iterative tasks," countered Mark Thompson, CTO of Nexus Innovations.
Verdict: Boris Cherny's Claude Code workflow is a significant demonstration of intelligent AI orchestration, offering a compelling alternative to brute-force compute. Developers and enterprises should immediately investigate adopting multi-agent workflows and persistent knowledge bases like CLAUDE.md to unlock substantial productivity gains and specialize their AI tools. While the "5x output" claim should be contextualized as a peak potential for specific tasks, the fundamental shift towards AI as an orchestrated workforce is a critical trend to watch, with Anthropic leading the charge on practical implementation.
#Lazy Tech FAQ
Q: How does Boris Cherny's Claude Code workflow achieve parallel development? A: Cherny runs five simultaneous Claude instances in his terminal and 5-10 more in a web browser, managing them via iTerm2 system notifications and a "teleport" command. This allows multiple AI agents to work on distinct tasks (e.g., testing, refactoring, documentation) concurrently, transforming coding into a multi-threaded operation.
Q: What are the limitations of relying on an AI-orchestrated development workflow? A: While highly productive, the workflow's 'small engineering department' output is likely a peak scenario, not a consistent average. It still requires significant human oversight for complex architectural decisions, creative problem-solving, and managing the 'CLAUDE.md' feedback loop. The initial setup and training of the agents for specific team contexts also present an upfront investment.
Q: What is the long-term impact of the CLAUDE.md file on AI model development? A: The CLAUDE.md file represents a scalable, continuous feedback loop that allows AI models to specialize and 'remember' specific team conventions, architectural decisions, and error patterns. This moves beyond simple prompt engineering, effectively creating a persistent, evolving knowledge base that refines the agent's behavior over time, making it uniquely adapted to a specific codebase and development culture.
Last updated: March 4, 2026

Meet the Author
Harit
Editor-in-Chief at Lazy Tech Talk. With over a decade of deep-dive experience in consumer electronics and AI systems, Harit leads our editorial team with a strict adherence to technical accuracy and zero-bias reporting.
