When the creator of one of the most advanced AI coding agents shares how he works, the developer ecosystem pays close attention. Over the past week, a detailed thread published on X by Boris Cherny, the creator and head of Claude Code at Anthropic, has sparked intense discussion across engineering circles. What started as a casual look into his terminal setup quickly turned into a widely discussed example of how AI-driven workflows are redefining how software is built, reviewed, and shipped at scale.
Cherny’s post outlined a workflow that is notably minimal in tooling but heavy on orchestration. Instead of working through code line by line, he operates multiple AI agents in parallel, effectively managing them like coordinated workers. He revealed that he typically runs five Claude agents simultaneously within his terminal, assigning each a dedicated task such as running test suites, refactoring older modules, or preparing documentation. System notifications alert him whenever an agent needs guidance, allowing him to shift attention without breaking focus. Alongside this, he also runs several Claude sessions in a browser environment and seamlessly moves tasks between web and local setups. The result is a single developer producing output comparable to a small engineering team, an approach that aligns with Anthropic’s broader focus on efficiency through orchestration rather than sheer infrastructure scale. More context on Cherny’s work can be found on his X profile at https://x.com/bcherny.
Another detail that drew attention was Cherny’s preference for Anthropic’s largest and slowest model, Opus 4.5. While much of the industry prioritizes faster response times, Cherny explained that the smarter model ultimately saves time by requiring fewer corrections. In his experience, the real bottleneck in AI-assisted development is not token generation speed but the human effort spent fixing subtle mistakes. By using a model that reasons more deeply and handles tools more reliably, he reduces the need for repeated steering and rework. This perspective resonated with enterprise technology leaders who see rising value in accuracy and reliability over raw speed, especially as AI systems become more embedded in production environments.
Cherny also addressed one of the most common limitations of large language models: lack of long-term memory. His team solves this by maintaining a single shared instruction file named CLAUDE.md within their code repository. Whenever the AI makes an incorrect assumption or violates a project-specific rule, the correction is added to this file. Over time, this creates a living rulebook that continuously improves the agent’s behavior. Instead of fixing the same mistake repeatedly, every error becomes a documented instruction, making the system progressively more aligned with the team’s standards and architecture. Developers observing this approach noted that it effectively turns the codebase into a self-improving system where human feedback directly shapes future outputs.
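A CLAUDE.md file is plain markdown that Claude Code reads at the start of each session in a repository. The source does not show the contents of Cherny’s file, so the sketch below is illustrative only, with entirely hypothetical project rules:

```markdown
# CLAUDE.md — project instructions for Claude Code

## Conventions (hypothetical examples)
- Run the full test suite before committing; never commit with failing tests.
- Use the project's logging module instead of print statements.

## Corrections learned from past sessions (hypothetical examples)
- The `users` table is soft-deleted; always filter rows on `deleted_at IS NULL`.
- Do not edit generated files under `build/`; change the templates instead.
```

Each time the agent repeats a mistake, a one-line correction appended here is picked up by every future session, which is what makes the file behave like a living rulebook.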
Automation plays a central role in reducing repetitive work across the development lifecycle. Cherny relies on custom slash commands stored in the repository to handle tasks that traditionally consume developer time, such as committing code, pushing changes, and opening pull requests. These commands allow the AI to manage version control processes autonomously. He also uses specialized subagents focused on tasks like simplifying code structure or verifying application behavior before release. One of the most discussed aspects of his workflow is the verification loop, where the AI tests every change by running commands, executing test suites, and even interacting with the user interface through a browser. This self-testing capability significantly improves output quality and reduces the risk of broken releases.
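In Claude Code, custom slash commands are markdown prompt files checked into the repository under `.claude/commands/`, where the filename becomes the command name and `$ARGUMENTS` receives any text typed after the command. The source does not name Cherny’s actual commands, so the `/ship` command below is a hypothetical sketch of the kind of commit-push-PR automation he describes:

```markdown
<!-- .claude/commands/ship.md — invoked in a session as /ship (hypothetical) -->
Stage the current changes and write a concise, conventional commit message.
Push the branch, then open a pull request with `gh pr create`, summarizing
what changed and how it was verified (tests run, commands executed).

Additional context from the user, if any: $ARGUMENTS
```

Because the file lives in the repository, every teammate’s agent gains the same command automatically, which is how a personal shortcut becomes shared team automation.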
The reaction to Cherny’s shared workflow highlights a broader shift in how developers view AI. Rather than treating it as an advanced autocomplete tool, many now see it as a coordinated system capable of handling large portions of engineering work. The tools and techniques described are already available, but they require a different mindset that focuses on delegation, verification, and continuous feedback. As developers continue to experiment with these approaches, the discussion sparked by Cherny’s post signals a growing interest in AI not just as an assistant, but as an integrated layer of modern software production.
Follow the SPIN IDG WhatsApp Channel for updates across the Smart Pakistan Insights Network covering all of Pakistan’s technology ecosystem.