Rafael de Carvalho

Writing Code Was Never the Bottleneck

Agentic development can deliver code faster than before, but it does not remove the need for process, context, expectations, and engineering judgment.

Writing code was never the bottleneck. It is the visible part of software delivery, and often the easiest part to count, estimate, celebrate, and blame. But most teams do not struggle because engineers cannot type fast enough.

They struggle because the process around the code is unclear. Scope moves. Context is scattered. Expectations are implicit. Timing is misread. Feedback arrives too late. The path from idea to production crosses too many handoffs, hidden dependencies, and ownership gaps.

Agentic development changes the speed of implementation. Coding agents can generate code, inspect repositories, propose changes, write tests, and automate parts of the delivery flow faster than we could before. That matters. In one controlled GitHub Copilot study, developers completed a programming task 55.8% faster with an AI pair programmer than without one. That kind of improvement is real, and it will change how engineering teams work.

But faster delivery of code does not automatically mean better delivery of value.

If a team already had trouble turning requirements into clear scope, an agent will not fix that by producing code earlier. If context was missing before, the agent will work with the same missing context. If expectations were unclear before, faster pull requests may only make the mismatch visible sooner. If the delivery process was weak before, agentic development may accelerate the weak process instead of improving it.

```mermaid
flowchart LR
  idea[Idea] --> scope[Scope]
  scope --> context[Context]
  context --> agent[Coding agent]
  agent --> code[Code faster]
  code --> review[Review and validation]
  review --> release[Release]
  release --> value[Customer value]

  scope -. unclear .-> rework[Rework]
  context -. missing .-> defects[Wrong solution]
  review -. weak .-> risk[Production risk]
  rework --> delay[Delayed value]
  defects --> delay
  risk --> delay
```
That is why I see process as the key. Not process as ceremony, but process as shared clarity: what problem are we solving, what trade-offs are acceptable, what context matters, what standard is expected, how do we know the change works, and when is the work actually done?

Coding agents will do more of the implementation work. I believe that is the direction of travel. But like every major shift in technology, they will introduce new challenges, new requirements, and new standards. Teams will need better ways to provide context, constrain scope, review generated changes, validate behavior, trace decisions, and connect implementation back to production outcomes.
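One way to make those requirements concrete is to treat each agent task as a structured brief rather than a free-form prompt. The sketch below is illustrative, not a standard; the field names are invented, but they map to the needs above: context, constrained scope, and explicit acceptance checks.

```python
from dataclasses import dataclass


@dataclass
class AgentTaskBrief:
    """Hypothetical structure a team might require before delegating work to a coding agent."""

    problem: str                       # what we are solving, in one sentence
    in_scope: list[str]                # files or areas the agent may change
    out_of_scope: list[str]            # areas explicitly off limits
    acceptance_checks: list[str]       # how we know the change works
    requires_human_review: bool = True  # default to a human in the loop

    def is_ready(self) -> bool:
        # A brief is actionable only when problem, scope, and checks are all present.
        return bool(self.problem and self.in_scope and self.acceptance_checks)


brief = AgentTaskBrief(
    problem="Retry failed webhook deliveries with backoff",
    in_scope=["webhooks/delivery.py"],
    out_of_scope=["billing/"],
    acceptance_checks=["existing tests pass", "new retry test added"],
)
print(brief.is_ready())  # True: problem, scope, and checks are all defined
```

The point is not this particular schema but the discipline it forces: an agent cannot invent missing context, so the team has to supply it up front.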

The research points in the same direction. The SPACE framework argues that developer productivity cannot be reduced to one metric, such as activity or output. DORA’s work on software delivery performance points to capabilities around delivery, reliability, cloud practices, observability, and organizational performance. METR’s research on long-horizon tasks shows agent capability is improving quickly, but also that longer, messier work remains harder to complete reliably. Their later productivity experiment update is a useful reminder that the impact of AI tools depends heavily on task shape, developer behavior, and workflow design.

In other words: code generation is only one part of productivity.

```mermaid
flowchart TB
  agent[Agentic development] --> speed[More implementation speed]
  speed --> pressure[More change entering the system]

  pressure --> context[Higher context demands]
  pressure --> review[Higher review demands]
  pressure --> standards[Higher engineering standards]
  pressure --> observability[Higher need for feedback signals]

  context --> outcome[Better delivery process]
  review --> outcome
  standards --> outcome
  observability --> outcome

  outcome --> value[Useful, safe, maintainable value]
```

That is where engineering still matters. Architecture still matters. Product thinking still matters. Testing, observability, security, reliability, and operational judgment still matter. The agent can help produce the change, but engineering defines whether the change is useful, safe, maintainable, and aligned with the system around it.

This is also where platform engineering becomes more important, not less. A good platform gives both humans and agents a clearer path through the system. It makes the common path obvious, the safe path easy, and the exceptional path understandable. It encodes standards, feedback loops, deployment patterns, runtime signals, and guardrails so that speed has somewhere productive to go.

NIST’s generative AI risk guidance is useful here because it frames AI adoption as something that needs governance, measurement, and management, not just enthusiasm. For engineering teams, that means defining how agents are allowed to operate, what context they should receive, what changes require human review, how generated code is tested, and how production feedback reaches the next iteration.
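That kind of governance can be expressed as a simple, testable gate rather than a policy document nobody reads. The sketch below is a toy under assumed rules (the path prefixes and size threshold are invented); it decides whether an agent-generated change may merge without a human reviewer.

```python
def needs_human_review(changed_paths: list[str], tests_passed: bool, lines_changed: int) -> bool:
    """Return True when an agent-generated change must go to a human reviewer.

    The rules are illustrative, not a standard: failing tests, large diffs,
    or changes touching sensitive areas all force human review.
    """
    SENSITIVE_PREFIXES = ("auth/", "payments/", "infra/")  # assumed sensitive areas
    MAX_AUTO_MERGE_LINES = 50                              # assumed size threshold

    if not tests_passed:
        return True
    if lines_changed > MAX_AUTO_MERGE_LINES:
        return True
    if any(path.startswith(SENSITIVE_PREFIXES) for path in changed_paths):
        return True
    return False


# A small docs fix with passing tests could auto-merge; a payments change cannot.
print(needs_human_review(["docs/readme.md"], tests_passed=True, lines_changed=3))   # False
print(needs_human_review(["payments/api.py"], tests_passed=True, lines_changed=3))  # True
```

Encoding the policy this way gives the team something it can version, review, and tighten over time, which is exactly the measurement-and-management posture the NIST guidance asks for.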

Agentic development is not a silver bullet. It will not replace the need to understand the problem, shape the work, make trade-offs, and own the outcome.

It is not here simply to take your place. It is here to push you forward.

References and supporting material