Jaypore Labs

AI team scaling: 1, 3, and 10 engineers

Team shapes for AI products change at predictable scale thresholds. Knowing what changes, and when, helps avoid the worst hiring mistakes.

Yash Shah · January 12, 2026 · 5 min read

A common conversation with founders who've shipped one AI feature: "We need a head of AI." Sometimes they do. Usually they need different things at different scales.

Here's what changes at 1, 3, and 10 engineers on AI work.

1 engineer

The team is one full-stack engineer who handles the AI feature alongside everything else.

What works:

  • One engineer owns the prompt, the evals, the integration, and the deployment.
  • The PM writes evals; the engineer builds against them.
  • Tooling is light: a prompts folder, a basic eval script, a dashboard.
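The "basic eval script" at this stage really can be a loop over fixtures. A minimal sketch, where `call_model` is a hypothetical stub standing in for the real LLM API call:

```python
def call_model(prompt_name: str, user_input: str) -> str:
    """Hypothetical stand-in for the real model call; replace with your API."""
    return "refund processed" if "refund" in user_input else "escalate"

def run_evals(cases):
    """Run each eval case, print failures, and return pass/total counts."""
    passed = 0
    for case in cases:
        output = call_model(case["prompt"], case["input"])
        if case["expect"] in output:
            passed += 1
        else:
            print(f"FAIL: {case['input']!r} -> {output!r}")
    return passed, len(cases)

# Eval cases live next to the prompts folder; names here are illustrative.
cases = [
    {"prompt": "support-v1", "input": "I want a refund", "expect": "refund"},
    {"prompt": "support-v1", "input": "My app crashed", "expect": "escalate"},
]
passed, total = run_evals(cases)
print(f"{passed}/{total} passed")
```

Thin as it looks, this is enough for the PM to write cases and the engineer to build against them.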

What doesn't:

  • The engineer can't both ship features and own AI ops. Something gets neglected.
  • Eval coverage tends to be thin, so bugs surface in production.
  • The engineer becomes a single point of failure for AI knowledge.

Hiring at this stage:

Hire a generalist who's done AI work, not an AI specialist. The specialist will be bored; the generalist will compound.

3 engineers

The team has a lead, a junior, and a half-time platform engineer for infrastructure.

What works:

  • One person owns evals. Eval discipline emerges.
  • One person owns deployment, monitoring, cost.
  • The third does feature work.
  • Prompt reviews, eval reviews, and deploys become processes.

What doesn't:

  • No specialization yet. Everyone touches everything, which breeds inconsistency.
  • Hiring at this stage is hard because the role is undifferentiated.
  • The 3-engineer team scales up to about 4-5 features before strain shows.

Hiring at this stage:

The first hire after the founding engineer should be a platform/infra-minded engineer who can build the cost attribution, the deploy pipeline, and the eval CI. Not another feature-builder.
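Cost attribution at this stage can start as little more than tagging each model call with a feature and summing. A sketch, with hypothetical call records in place of real billing data:

```python
from collections import defaultdict

def attribute_costs(calls):
    """Sum per-feature spend from tagged model-call records."""
    totals = defaultdict(float)
    for call in calls:
        totals[call["feature"]] += call["cost_usd"]
    return dict(totals)

# Illustrative records; in practice these come from request logs.
calls = [
    {"feature": "summarize", "cost_usd": 0.004},
    {"feature": "classify", "cost_usd": 0.001},
    {"feature": "summarize", "cost_usd": 0.006},
]
print(attribute_costs(calls))
```

The platform hire's job is to turn this spreadsheet-grade version into a dashboard the whole team trusts.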

10 engineers

The team has split into 2-3 sub-teams with explicit ownership.

What works:

  • One sub-team owns the eval and ops platform (eval CI, deployment, cost attribution, model routing).
  • One or two sub-teams own product surfaces (one per feature area).
  • A clear PM-engineer-eval workflow exists per feature.
  • Cross-cutting decisions (model choice, prompt patterns, observability) have an explicit owner.

What doesn't:

  • The "platform vs. product" tension is real. Engineers can't decide which team to join.
  • Without a strong tech lead or director, decisions drift. Pace slows.
  • Knowledge transfer is hard. New engineers spin up slowly.

Hiring at this stage:

You actually need an AI director or senior tech lead with prior production experience. Now is when "head of AI" makes sense. Earlier, the role is too small.

What stays constant across scales

  • Evals are everyone's responsibility, regardless of role.
  • Prompt review is engineer-level work. A prompt change should get a code review like any other change.
  • Cost is a real budget line; treat it like AWS spend.
  • Model upgrades are scheduled events, not background drift.
  • One model per use case at a time. No A/B testing two models without a flag.
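The "one model per use case" rule can be enforced with a single routing table plus a flag gating any swap. A sketch, with model and flag names that are purely illustrative:

```python
# One model serves each use case; a flag gates any swap, so two models
# never serve the same use case silently. All names are illustrative.
MODEL_BY_USE_CASE = {
    "summarize": "model-a",
    "classify": "model-b",
}

# Flag-gated override for an in-progress upgrade.
OVERRIDES = {
    "summarize": {"flag": "summarize_model_v2", "model": "model-a-v2"},
}

def pick_model(use_case: str, flags: set) -> str:
    """Return the single model serving this use case, honoring active flags."""
    override = OVERRIDES.get(use_case)
    if override and override["flag"] in flags:
        return override["model"]
    return MODEL_BY_USE_CASE[use_case]

print(pick_model("summarize", flags=set()))
print(pick_model("summarize", flags={"summarize_model_v2"}))
```

With this shape, an A/B test is an explicit flag rollout rather than two code paths drifting apart.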

What changes at each scale

| Concern           | 1 eng       | 3 eng              | 10 eng                      |
| ----------------- | ----------- | ------------------ | --------------------------- |
| Eval ownership    | engineer    | dedicated person   | dedicated sub-team          |
| Cost tracking     | spreadsheet | dashboard          | attribution system          |
| Deployments       | manual      | flag-gated         | canary + auto-rollback      |
| Prompt versioning | git folder  | structured library | typed prompt SDK            |
| Model upgrades    | ad hoc      | quarterly process  | quarterly with eval CI gate |

The patterns aren't arbitrary. Each scale forces the next discipline.
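The "eval CI gate" in the last row can start as a threshold check: the upgrade candidate must match or beat the current model's eval pass rate before it ships. A minimal sketch, with pass rates supplied by whatever eval harness you already run:

```python
def gate_model_upgrade(current_pass_rate: float,
                       candidate_pass_rate: float,
                       tolerance: float = 0.0) -> bool:
    """Allow the upgrade only if the candidate's eval pass rate
    does not regress beyond the tolerance."""
    return candidate_pass_rate >= current_pass_rate - tolerance

# Illustrative numbers: a one-point regression is blocked at zero
# tolerance but allowed under a two-point tolerance.
print(gate_model_upgrade(0.94, 0.93))
print(gate_model_upgrade(0.94, 0.93, tolerance=0.02))
```

Wiring this into CI is what turns "quarterly process" into "quarterly with eval CI gate."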

The mistake to avoid

The biggest mistake is hiring for the scale you'll be at in 6 months instead of the scale you are now. The senior AI director who lands at a 3-engineer team has nothing to direct. The generalist hire who lands at a 10-engineer team is buried in process they can't influence.

Hire for the team shape now and one scale up. Beyond that, you'll learn what you actually need.

The org-chart question

Should the AI engineering team report to the CTO, the CPO, or its own VP? Three patterns:

  • CTO. Works when AI is foundational to the product. Most common at scale.
  • CPO. Works when AI is a feature, not the product. Common at < 10 engineers.
  • Own VP. Works when AI represents > 30% of the company's engineering work.

There's no right answer. There are answers that fit the company's reality.

Close

AI team scaling is more predictable than founders fear. The transitions happen at 1→3 and 3→10. Each transition demands one specific new discipline. Build the discipline as you cross the threshold and you avoid the chaos of trying to do it later under pressure.

We advise engineering leaders on AI team structure and hiring. Get in touch.

Tagged
Engineering ManagementTeam ScalingAI EngineeringHiringLeadership