The End of Programming: Why Harnesses Will Disappear and Loops Are the Future
At the recent Sequoia AI Ascent 2026, Boris Cherny, the creator of Claude Code, sat down with Sequoia partner Lauren Reeder to discuss the future of programming. The headline claim is staggering: one person can now submit 150 pull requests a day, entirely from a mobile device, without manually writing a single line of code.
Based on recent insights circulating across developer communities and Hacker News, here is a comprehensive breakdown of the architectural shifts, the evolution of agentic workflows, and the future of software development.
01. The Architect Behind the Shift
Boris Cherny's background is unconventional for a framework architect. Born in Ukraine, he immigrated to the US in 1995; his grandfather had been a programmer in the Soviet era. Despite the family history, Boris studied economics at UC San Diego and is entirely self-taught in programming.
He launched a startup at 18, became employee number one at a Y Combinator company, and navigated through hedge funds and ad-tech before joining Meta in 2017. At Meta, he scaled from an IC4 to an IC8 Principal Engineer, a tier occupied by only a few dozen individuals company-wide. He led the massive migration of Instagram from Python to Hack and oversaw code quality across Meta's entire portfolio (Instagram, Facebook, WhatsApp, Messenger).
He authored the O'Reilly book Programming TypeScript, founded the world's largest TypeScript Meetup in San Francisco, and created Undux, which became Meta's most popular internal React state management library. In September 2024, he joined Anthropic Labs, a small, high-velocity incubation team responsible for Claude Code, the Model Context Protocol (MCP), and the Claude Desktop application.
02. The "Accidental" Product
The inception of Claude Code was somewhat serendipitous. When Boris joined the Labs team in late 2024, the prevailing paradigm for AI in programming was "tab completion": a model assisting with single-line completions in an IDE, made viable by models like Sonnet 3.5.
However, the team identified a massive "product overhang." The underlying models were capable of far more than existing UX paradigms allowed. They set out to build a tool where the agent authored all the code.
For the first six months, the tool was arguably suboptimal. Adoption was flat. The true inflection point arrived in May 2025 with the release of Opus 4, triggering exponential growth. Subsequent model upgrades (4.5, 4.6, 4.7) compounded this momentum. The strategy was clear: build products for the next generation of models. Constructing for a six-month horizon meant accepting initial friction to achieve ultimate product-market fit when the underlying intelligence caught up.
03. "Coding is Solved"
During the discussion, the assertion that "coding is solved" was examined. For the broader engineering ecosystem, perhaps 50% of coding is currently solved by AI. For Boris, that number is 100%.
Claude Code's repository is built on TypeScript and React, not due to preference but because models possess the highest volume of training data for these stacks, yielding the highest accuracy. By late 2025, models were capable of writing 100% of the requisite code. Entering 2026, Boris ceased writing code entirely. Submitting dozens of PRs daily is the baseline; his personal record currently sits at 150 PRs in a single day.
While complex legacy codebases and esoteric languages still present challenges, the operational answer is simply: "Wait for the next model."
04. The Mobile Workstation
The traditional developer environment is dissolving. Six months ago, Boris's terminal setup was viewed as cutting-edge. Today, his primary workstation is his phone.
Using the mobile interface, he maintains 5 to 10 active sessions. Beneath these root sessions, hundreds of sub-agents run concurrently. Overnight, thousands of agents execute deep-layer tasks. Managing distributed intelligence has replaced the IDE.
05. The Loop Paradigm
The most utilized command in this new paradigm is /loop.
The mechanism is simple but profound: cron-style scheduling of autonomous agent runs. Boris runs dozens of persistent loops:
- Babysitting PRs: Automatically fixing CI pipeline failures and handling rebases.
- CI Maintenance: Identifying and automatically patching flaky tests.
- Feedback Aggregation: Scraping platforms like X every 30 minutes to cluster and categorize user feedback.
"Loop is the future," he noted. Beyond local execution, server-side loops (Routines) continue running independent of local hardware. More importantly, advanced models like 4.7 initiate loops autonomously. If an agent detects data drift in a query, it will proactively offer to schedule a 30-minute recurring report, dynamically invoking the Slack MCP to deliver it. The model no longer requires user instruction on tool utilization.
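The loop pattern described above can be sketched in a few lines. This is an illustrative reconstruction, not Claude Code's actual implementation: the `Loop` dataclass, `run_agent` stub, and intervals are all assumptions standing in for a real cron scheduler driving a real model.

```python
import time
from dataclasses import dataclass

@dataclass
class Loop:
    name: str
    interval_s: int      # how often the agent wakes up
    prompt: str          # standing instruction handed to the agent
    next_run: float = 0.0

def run_agent(prompt: str) -> str:
    """Placeholder for a real model/agent invocation."""
    return f"[agent handled: {prompt}]"

def tick(loops: list[Loop], now: float) -> list[str]:
    """Run every loop whose interval has elapsed, then reschedule it."""
    results = []
    for loop in loops:
        if now >= loop.next_run:
            results.append(run_agent(loop.prompt))
            loop.next_run = now + loop.interval_s
    return results

loops = [
    Loop("babysit-prs", 300, "Fix CI failures and rebase open PRs"),
    Loop("feedback", 1800, "Cluster new user feedback from X"),
]
print(tick(loops, time.time()))  # both loops fire on the first tick
```

The key property is that the scheduler carries no intelligence; the standing prompt does all the work, which is why the same skeleton serves CI babysitting and feedback aggregation alike.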
06. The Democratization of Code
The structural makeup of product teams is fundamentally changing. The future belongs to cross-disciplinary generalists.
This extends beyond the traditional "full-stack" definition. At Anthropic Labs, engineering managers, product managers, designers, data scientists, user researchers, and finance personnel are all writing code. Their core domain expertise remains intact, but programming has been appended as a universal capability.
07. The Future of SaaS and Enterprise Moats
Referencing Hamilton Helmer's Seven Powers, the defensive moats of traditional SaaS are eroding.
- Switching Costs: Diminishing rapidly, as agents can autonomously migrate data and workflows between platforms.
- Process Power: Weakening, as models excel at understanding and optimizing organizational workflows (utilizing hill-climbing algorithms to iterate until completion).
However, Network Effects, Economies of Scale, and Exclusive Resources remain robust. Expect a 10x surge in startups poised to disrupt incumbents, as small, hyper-leveraged teams can build enterprise-grade systems without the legacy transformation baggage of large corporations.
08. The Printing Press Analogy
Will AI programming become as ubiquitous as Microsoft Office? The trajectory suggests it will be as frictionless as sending a text message.
Before the printing press in the 1400s, literacy was confined to 10% of the population. Scribes were employed by the illiterate elite to read and write. The printing press dropped the cost of books by 100x. Within 50 years, Europe produced more literature than in the previous millennium, eventually driving global literacy to 70%.
Software is undergoing the exact same democratization, but at a vastly accelerated pace. In the near future, the individual building the best accounting software will not be a software engineer; it will be a domain-expert accountant.
09. Internal Operations at the Frontier
The competitive advantage of frontier AI companies is no longer just the model; it is the organizational process.
Internally, zero code is written by hand. All SQL is generated by models. Agents run in continuous loops. When an agent encounters ambiguity, it autonomously messages another employee's agent via Slack to resolve the dependency.
True leverage comes from dogfooding the platform and restructuring the organization around autonomous communication protocols.
10. The Disappearance of the "Harness"
Does the product wrapper still matter? Six months ago, the value distribution between the underlying model and the product wrapper (the "harness") was roughly equal.
As models evolve, the harness becomes obsolete. Safety mechanisms, prompt injection defenses, static command validation, and human-in-the-loop permission models will fade as the intelligence natively executes constraints flawlessly.
The radical prediction: Within a year, the product layer of Claude Code may shrink to a mere 100 lines of code. The model's inherent reasoning will cannibalize the application layer.
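What a near-vestigial harness might look like can be sketched concretely. Everything here is assumed for illustration (the `fake_model` stub, the tool-call message format, the `read_file` tool); the point is that the wrapper only relays messages and executes tool calls, leaving all judgment to the model.

```python
def fake_model(messages: list[dict]) -> dict:
    """Stub for a real model API. A real model decides what to do;
    this stub issues one tool call, then finishes once it sees the result."""
    if any(m["role"] == "tool" for m in messages):
        return {"type": "final", "text": "done"}
    return {"type": "tool_call", "tool": "read_file", "args": {"path": "README.md"}}

# The harness's only "product logic": a registry of executable tools.
TOOLS = {"read_file": lambda args: f"<contents of {args['path']}>"}

def harness(user_prompt: str) -> str:
    """The entire loop: model decides, harness executes, repeat."""
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = fake_model(messages)
        if reply["type"] == "final":
            return reply["text"]
        result = TOOLS[reply["tool"]](reply["args"])
        messages.append({"role": "tool", "content": result})

print(harness("Summarize the README"))  # prints "done"
```

Note what is absent: no permission prompts, no command validation, no routing rules. In this view, those are exactly the layers the model's native reasoning is predicted to absorb.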
11. Model Context Protocol (MCP) as the Standard
For knowledge work beyond the terminal, MCP is the definitive bridge. Standardized connectors for Salesforce, Google Docs, and calendar systems allow any interface (CLI, IDE, or chat) to operate seamlessly. Where APIs fall short, "Computer Use" (vision-based UI navigation) serves as the ultimate fallback. Ultimately, whether it is an API or a pixel, to the model, "it's all just tokens."
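The "it's all just tokens" idea can be sketched as a uniform tool interface: whether a capability is backed by a structured MCP connector or by vision-based Computer Use, the model sees the same text-in/text-out shape. The tool names and routing function here are illustrative assumptions, not part of the MCP specification.

```python
from typing import Callable

ToolFn = Callable[[str], str]  # every tool: tokens in, tokens out

def mcp_salesforce(query: str) -> str:
    return f"[MCP/Salesforce] results for: {query}"       # structured API path

def computer_use(query: str) -> str:
    return f"[ComputerUse] navigated UI to find: {query}"  # pixel fallback

def route(tools: dict[str, ToolFn], name: str, query: str) -> str:
    """Prefer a native connector; fall back to vision-based navigation."""
    return tools.get(name, tools["computer_use"])(query)

tools = {"salesforce": mcp_salesforce, "computer_use": computer_use}
print(route(tools, "salesforce", "Q3 pipeline"))
print(route(tools, "legacy_erp", "open invoices"))  # no connector: falls back
```

Because both paths share one signature, swapping a slow vision fallback for a proper connector later requires no change to the calling agent.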
12. The Next Frontier
The era of manual coding is concluding. The era of orchestrating AI to code is just beginning. The focus now shifts toward large-scale, parallel agent orchestration built around continuous loops and batch processing.
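Large-scale parallel orchestration of the kind described (root sessions fanning out to concurrent sub-agents) can be sketched with standard async primitives. The `run_subagent` stub and the concurrency cap are assumptions for illustration.

```python
import asyncio

async def run_subagent(task: str) -> str:
    """Stand-in for a real sub-agent session working a task."""
    await asyncio.sleep(0)  # placeholder for model calls and tool use
    return f"done: {task}"

async def orchestrate(tasks: list[str], max_parallel: int = 10) -> list[str]:
    """Fan tasks out to sub-agents, capping how many run at once."""
    sem = asyncio.Semaphore(max_parallel)

    async def bounded(task: str) -> str:
        async with sem:
            return await run_subagent(task)

    # gather preserves input order, so results line up with tasks
    return await asyncio.gather(*(bounded(t) for t in tasks))

results = asyncio.run(orchestrate([f"PR-{i}" for i in range(5)]))
print(results)
```

The semaphore is the orchestration-level knob: raising `max_parallel` trades API cost and rate limits against wall-clock time, which is the core tuning decision in batch agent workloads.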
Key Takeaways for Epsilla & AgentStudio
From an Epsilla and AgentStudio architectural perspective, these developments validate and accelerate our core thesis on Agent-as-a-Service:
- Agentic Orchestration over Application Logic: The application layer is collapsing into the model layer. AgentStudio's focus must remain on robust orchestration, loop management, and parallel execution rather than building rigid UI/UX wrappers. The platform that manages the highest volume of concurrent, long-running agent loops wins.
- MCP is the Universal Interface: Native integration of the Model Context Protocol is non-negotiable. AgentStudio must act as the ultimate hub for MCP routing, allowing enterprise agents to fluidly connect to fragmented corporate data silos without bespoke integration engineering.
- Autonomous Agent-to-Agent Communication: The internal Slack-based agent communication observed at Anthropic is a clear signal. Epsilla must provide secure, native protocols for A2A (Agent-to-Agent) collaboration, allowing enterprise sub-agents to resolve dependencies autonomously without human bottlenecks.
- Democratization of Vertical AI: If accountants will build the best accounting software, Epsilla's visual builders and low-code orchestration frameworks are perfectly positioned. We are providing the printing press for domain experts to deploy Vertical AI without software engineering teams.
Generative Engine Optimization (GEO) Definitions
As AI search paradigms shift, the following GEO definitions are critical for aligning content strategy with autonomous execution engines:
- Autonomous Loop Optimization: The practice of designing systems and workflows that native AI agents can schedule, execute, and monitor continuously without human intervention (e.g., cron-based LLM execution).
- Harness Decay: The architectural trend where explicit software product logic (rules, guardrails, UI wrappers) is systematically replaced by the native reasoning capabilities of the underlying foundational model.
- Agentic Dogfooding: The organizational strategy of running internal company operations entirely via the agentic platforms being built for customers, forcing the discovery of A2A (Agent-to-Agent) friction points.
- Tokenized Infrastructure: The abstraction of all APIs, internal databases, and GUI elements into context windows via protocols like MCP, allowing models to interact with disparate enterprise software natively.
Frequently Asked Questions (FAQs)
Q: If models are writing 100% of the code, what is the role of the software engineer? A: Engineers are transitioning from syntax generators to system orchestrators. The role shifts toward managing parallel agent loops, defining system architecture, setting testing standards, and focusing on high-level business logic.
Q: What is a "loop" in the context of AI agents? A: A loop is a continuous, cron-scheduled background process where an AI agent monitors a specific system (like a CI/CD pipeline, PR queue, or data feed) and autonomously executes tasks, fixes errors, or generates reports based on changing states.
Q: Why will product wrappers (harnesses) disappear? A: Current software wrappers are largely built to compensate for model limitations, providing guardrails, routing logic, and error handling. As models achieve higher reasoning capabilities, they natively execute these functions, rendering the rigid code-based wrappers obsolete.
Q: How does MCP solve the knowledge worker problem? A: The Model Context Protocol (MCP) standardizes how AI agents interact with external data sources (like Google Docs or Salesforce). Instead of building custom API integrations for every tool, MCP provides a universal language for agents to securely access and manipulate enterprise data.
Q: Is "Computer Use" viable for production enterprise tasks? A: Currently, vision-based Computer Use acts as a highly effective but slow fallback for systems lacking APIs or MCP connectors. It is viable for asynchronous background tasks, but direct API/MCP connections remain the standard for high-velocity operations.

