
Your next hire isn't human. But you still need someone to manage them.



AI & the future of dev teams


AI hasn't replaced the developer. It's become their coworker. The question every CTO needs to answer is whether they're actively managing that relationship or just hoping for the best.


Between February 1 and March 24 this year, Anthropic shipped over 50 product releases. Voice mode. Auto-memory. Claude for Excel. Computer use. Scheduled tasks. Multiple model upgrades. A new context window. The list goes on. Most people caught five of them.

What makes that number remarkable isn't just the pace; it's what it signals about how Anthropic itself now works. The team building Claude is being transformed by Claude.


Engineers who used to write code are now orchestrating it. Claude is, in a very real sense, writing the next version of Claude. And that shift, from writer to conductor, is the same shift playing out inside every serious development team right now, whether they've named it yet or not.


What the numbers actually look like on the ground


I work with senior developer teams in Serbia - people I've placed with UK and US companies for years, some for close to a decade. One of my lead developers runs a team that has been operating fully AI-native workflows in production for months. These aren't pilot numbers. They're delivery numbers.


- 76 tickets in 4 months, single lead engineer

- <2% revert rate on production code

- team average velocity, in an already AI-augmented team


The pipeline is not complicated. Claude generates production code. GPT provides a second-opinion review layer. The senior engineer makes the architectural decisions, maintains quality standards, and orchestrates the whole thing. Three entities working in sequence, two of them AI, one of them irreplaceably human.
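That generate-then-review-then-decide sequence can be sketched in a few lines. This is a minimal illustration of the shape of the pipeline, not the team's actual tooling: the function names are assumptions, and the two model calls are stubbed where real API calls to Claude and GPT would go.

```python
# Sketch of a generate -> second-opinion -> human-gate pipeline.
# Model calls are stubbed; all names here are illustrative assumptions.

def generate_code(ticket: str) -> str:
    """First AI pass: draft an implementation for the ticket."""
    return f"# implementation for: {ticket}"  # stand-in for a Claude call

def second_opinion_review(code: str) -> list[str]:
    """Second AI pass: an independent model flags concerns."""
    return []  # stand-in for a GPT review; empty means no concerns raised

def human_gate(code: str, concerns: list[str]) -> bool:
    """The senior engineer decides: merge, rework, or override the AI."""
    return len(concerns) == 0  # stand-in for real human judgment

def run_pipeline(ticket: str) -> str:
    draft = generate_code(ticket)
    concerns = second_opinion_review(draft)
    if not human_gate(draft, concerns):
        raise ValueError(f"Rework needed: {concerns}")
    return draft
```

The point of the structure is the final gate: both AI passes feed into a decision that only the human makes, which is where the architectural judgment described above lives.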


That last part is the point. The velocity is real, but it runs on human judgment. The senior engineer isn't being replaced; they've been promoted. They're leading a team that never stops, never gets tired, and doesn't need a standup. What that team does need is someone capable of knowing when the AI is confidently wrong. That skill is not optional. It's the engine the whole operation runs on.


"The senior engineer isn't being replaced. They've been promoted. They're leading a team that never stops."


The fear is real but the pattern is old


The anxiety is understandable. On X right now, conversations about AI and jobs run hot. Developers worried about their value. The word "replacement" appearing constantly. But before we accept that framing, it's worth looking at what technology has actually done to work historically, because the pattern is not what the fear suggests.


I came into tech in the early 2000s as a business analyst. That role didn't exist before. It emerged because systems became complex enough to need someone fluent in both business language and technical language, someone who could stand in the gap between the people building and the people deciding. Technology didn't just displace jobs. It created an entirely new category of human work that nobody had named yet.


We're at the same inflection point. AI is not eliminating the need for human judgment. It's making the gap between strong human judgment and absent human judgment vastly more consequential. The teams that will struggle are not the ones with too many developers; they're the ones where nobody is managing the AI coworker, nobody is checking the output, and nobody has built a framework for when to trust the machine and when to override it.


The junior developer question no one is asking


Which brings us to the issue that should genuinely concern CTOs building teams today. The next generation of developers is entering the field already running AI. They're not learning to code and then adding AI on top; they're starting with AI and working backwards. They can ship fast. They look productive. And when something breaks in production at 2am, in a system they've never fully understood from first principles, the question of what happens next is not yet answered.


This isn't an argument against junior developers who use AI. It's an argument for the senior engineers around them, the ones who can read AI-generated code and spot the elegant-looking mistake, who have intuition for where the model will hallucinate confidence, who know when the AI's pattern-matching is drawing from a training corpus that doesn't include your specific edge case. That human layer is not a nice-to-have. It's what determines whether your velocity is an asset or a liability waiting to surface.


The roles that don't have names yet


Every technology wave creates new categories of work. "Business analyst" didn't exist in 1995. "DevOps engineer" didn't exist in 2005. The following roles are forming right now; some have early names, most are still being defined on the job.


Emerging roles in AI-native teams


AI workflow architect (forming) - designs how humans and AI divide work inside a delivery team. Owns the pipeline, not just the output.


Human-AI team lead (early) - manages a blended team where some members are agents. An entirely new management discipline with no playbook yet.


AI output auditor (forming) - QA specialist for AI-generated code and content at scale. The person whose job is catching what the model doesn't know it's getting wrong.


Trust layer specialist (emerging) - ensures AI adoption in an organisation has the human scaffolding to make it stick. The difference between a working rollout and an expensive ghost town.


AI integration lead (early) - bridges the model and the organisation. Part translator, part change manager, part systems thinker.


What this means if you're building a team right now


The question is no longer how much AI to use. That decision has been made for you by the market. The question is whether your human layer is strong enough to lead it, and whether the people making your architectural decisions understand that leading AI is now a core part of their job description.


Anthropic shipped 50 products in 52 days. The people doing that work are not writing code the way they did two years ago. But someone is still deciding what to build, what to ship, and what matters. Someone is still reading the output and knowing when it's wrong. That person, the one who can work fluently with AI and still exercise independent judgment, has never been more valuable, or harder to find.


The teams winning right now are not the ones who've replaced humans with AI. They're the ones who have senior engineers who know how to orchestrate AI with precision, and the wisdom to know that the human layer isn't a legacy cost. It's the competitive advantage.


Gail Weiner is founder of Simpatico Studios and a Trust Architect specialising in the human layer of AI adoption. She works with mid-market organisations navigating AI rollout, and has spent a decade placing senior European development teams with UK and US tech companies, including AI-native Serbian teams delivering measurable velocity gains in production.


If you're building an AI-native team, managing an AI rollout, or want to talk about what the human layer actually looks like in practice - connect on LinkedIn or reach out via gailweiner.com.
