Cédric Rittié

12 min read

Claude Code sub-agents: deliver a discovery in 30 minutes

A Product Manager's full walkthrough: how to brief three Claude Code sub-agents in parallel (customer voice, product data, competitors) to turn a morning of research into forty minutes of synthesis.

claude code · sub-agents · agents · anthropic · product management · discovery · productivity · AI agents · delegation
Phase 3 · Automate · Article 3 of 4

The impossible Monday morning brief

Monday, 9am. The founder pings you: "Customers keep bringing up onboarding. Can we tackle it? I need a brief by Wednesday."

You know that "customers keep bringing it up" means everything and nothing. Before scoping a solution, you have to scope the problem. That's the Product Manager's job: not delivering a roadmap, but understanding what we're actually trying to solve.

Before sub-agents, your day looked like this:

  • 9:30am. You open Confluence, dig through the product space for old specs and user research notes.
  • 10:30am. You jump to Jira, tour the customer tickets on the topic, copy the most telling ones into a doc.
  • 11:30am. You scan the Teams notes from sales calls over the last two months.
  • 2pm. You run a query in product data, export to CSV. You roll into the competitive scan.
  • Tomorrow morning, you write the brief.

A day and a half, spent collecting more than thinking.

With Claude Code sub-agents, the same day fits into forty minutes. You brief three agents in parallel, each on one source. You head into your 1:1, a customer call, lunch. You come back at 2pm, three sourced summaries are sitting in front of you. No more searching. You read, you cross-reference, you decide.

Before: 8 hours of collecting · After: 40 minutes
Three agents, three contexts, one synthesis
Diagram: the main conversation briefs 3 sub-agents in parallel. The first digs into Confluence, Jira and calls. The second analyzes product data. The third looks at competitors. Each works in its own isolated context. At the end, 3 summaries arrive: customer verbatims, product data, market state. The brief gets written in 40 minutes.

That's what we're going to walk through, step by step.

This article builds on MCP servers and The quota wall. The fourth move of context management was isolate. We formalize it here.

What a Claude Code sub-agent actually is

A sub-agent is a fresh Claude Code instance you launch from your main conversation. It knows nothing of what you've discussed before. It receives a brief, does its job in its own context, hands you back a summary, disappears. It's the equivalent of the Task tool in Anthropic's docs.

Three properties to know:

  1. It has its own context. Everything it reads (50 files, 30 Jira tickets, 15 web pages) stays with it. Your main conversation only sees the final summary.
  2. It can run in parallel. You launch three sub-agents in the same message, they work simultaneously. Wall time gets divided.
  3. You choose its tools. A sub-agent that should read files doesn't need to write. A sub-agent doing web research only needs WebFetch. You restrict, which avoids surprises.

In Claude Code, you trigger a sub-agent by asking explicitly, in natural language:

Launch a sub-agent that [does X], with read-only access to [Y]. Bring me back a 10-line summary.
Sub-agent launched · isolated context · tools restricted to Read
Summary ready.

You can also type /agents to see the list of your custom agents and pick one. Or @agent-name mid-message to invoke a specific agent. Same logic as your Skills: Claude loads what it has on hand and triggers it when the context fits. More on that below.

The walkthrough: your three sub-agents on a discovery

Here's how the founder's discovery actually plays out, in forty minutes.

Sub-agent 1: customer voice

You start with what matters most: what are customers actually saying? You open Claude Code and type:

Launch a read-only sub-agent (Read, Grep). Mission: explore three sources from the last 6 months. The Confluence export in ./confluence/, the call notes in ./teams-notes/, and the Jira export ./jira-export.csv. Find anything that mentions onboarding.

Give me back: (1) the 3 most frequent recurring patterns, (2) two customer verbatims per pattern with the exact source, (3) a criticality estimate based on tone and frequency. Markdown format. No invention: if you don't find it, say so.
Sub-agent launched · 0 tokens in main conv · WebFetch disabled

Three minutes later, here's what comes back:

Pattern 1: onboarding too long · 47 mentions, 12 customers
"We dropped the onboarding after 3 days, the team disengaged." · Sales call Acme, 2026-03-15
"My sales reps only use 30% of the platform." · Jira ticket #4521
Criticality: high (mentioned in 4 out of 7 churns this quarter).

Pattern 2: missing auto-config · 28 mentions, 8 customers
"Every new admin has to redo the same 40 clicks." · User research note, 2026-02-20
"We want to be able to clone our setup." · Confluence/feedback-q1.md
Criticality: medium (recurring frustration, not blocking).

Pattern 3: reporting too technical · 15 mentions, 5 customers
Verbatims and sources provided.
Criticality: low (request, not complaint).

You haven't read a single ticket. You've got a clean, sourced summary, with verbatims and a criticality estimate you can directly challenge if it feels off.

Sub-agent 2: product data

While the first one runs, you launch the second right away. This time, the agent goes after data through the MCP you've connected to your analytics tool.

Launch another sub-agent in parallel, with access to the analytics MCP only. Mission: over the last 90 days, give me (1) the adoption rate of onboarding on active accounts, (2) drop-offs in the activation funnel, (3) the comparison between cohorts 0-6 months and > 12 months. Format: table + 3 major insights in prose. If a metric is missing, say so, don't make it up.
Sub-agent 2 in parallel with the previous · scope: analytics MCP

Five minutes:

Adoption: 38% of active accounts (internal benchmark 65% for core modules).
Activation funnel: 41% drop-off at step 3 (initial config).
Cohorts: < 6 months → 22% adoption · > 12 months → 51% adoption.

Insight 1: friction is concentrated in the activation phase, not in recurring usage. Accounts that pass step 3 stick around.
Insight 2: the gap between recent and older cohorts suggests an onboarding problem more than a product one.
Insight 3: the 38% who do adopt use it on average 4.2 times per week, which is solid. The problem is crossing the threshold, not perceived value.

This lines up with what sub-agent 1 said: customer voice talks about onboarding being too long, the data confirms a drop-off at activation step 3. You already have your problem statement without touching a single Excel sheet.

Sub-agent 3: competition

The third one tackles the external scan. This time, you give it web access.

Third sub-agent in parallel, with WebFetch only. Mission: for the 4 main competitors I'll list (Acme, Bolt, Cetra, Drift), look at how they handle onboarding. For each: (1) functional approach, (2) pricing/positioning, (3) gaps perceived in public user feedback (G2, Reddit, their blog). Format: comparative table + a "differentiating opportunity" note at the bottom.
Sub-agent 3 · WebFetch only · no repo access

Seven minutes later:

Acme: all-in-one, premium tier ($199/seat). Onboarding guided over video by a CSM. Gap from feedback: too expensive for mid-market.
Bolt: generous free tier, monetization on integrations. Self-serve onboarding but lightly guided. Gap: manual initial config, like ours.
Cetra: pure self-serve focus, claims "first value in 5 minutes". Gap from feedback: limited feature set beyond onboarding.
Drift: industry-templated approach. Gap: no fine-tuning possible.

Differentiating opportunity: none of them offers auto-config based on team profile. Bolt and Drift partly tried, didn't go all the way. Could be a positioning angle.

You have the three pieces. Customer voice, product data, competition. You haven't opened a tab, read a PDF, exported a CSV.

The brief, now

With three summaries in front of you, the brief writes itself in forty minutes because there's nothing left to look up. You read, you cross-reference (the onboarding pattern from sub-agent 1 and the step 3 drop-off from sub-agent 2 say the same thing), you spot the angle (the auto-config no competitor holds), you write.

You're no longer the PM who collects. You're the PM who decides.

Why Claude Code sub-agents work

What we just did rests on the three properties from above. If you've understood them, you know when to launch a sub-agent and when to stay in the main conversation.

The isolated context. Sub-agent 1 probably processed 80,000 tokens (verbatims, tickets, call notes). All of that stayed with it. In your main conversation, you only received the final summary, around 600 tokens. Your quota is untouched, and your conversation stays readable.

Parallelism. The three sub-agents ran simultaneously. Seven minutes total instead of fifteen in series. That's what turns a morning into thirty minutes when you have several independent sources.

Restricted tools. Sub-agent 1 only had Read and Grep (no risk of modifying a file). Sub-agent 2 only had the analytics MCP. Sub-agent 3, only WebFetch. No one can mess up outside their scope. Same logic as access rights in a company: by default, the minimum.
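Codified as custom agent files (the format Claude Code reads from .claude/agents/), these scopes become a single frontmatter line each. A sketch of what sub-agent 3's profile could look like; the name and wording are illustrative, not from the walkthrough above:

```markdown
---
name: competitive-scan
description: Web-only competitive research. No access to local files.
tools: WebFetch
model: sonnet
---

For each competitor you are given: functional approach,
pricing/positioning, gaps in public user feedback.
Output a comparative table and cite every source.
```

Anything absent from the tools line simply doesn't exist for that agent: no Read, no repo access; no Write, no risk.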

What if you're not a PM?

Discovery is just one case. Any role that goes through "I collect three sources, I synthesize, I decide" can apply the same mechanics. Three examples to help you adapt it to your own role.

Head of Growth · Retro
You're prepping the quarterly acquisition retro. Three sub-agents in parallel: one on qualitative signals (interviews, NPS), one on data channel by channel, one on what competitors push in paid.
"Launch a sub-agent that audits LinkedIn Ads performance for Q1, give me CTR, CPL and CPA per audience and identify the winning patterns."
Solo founder · Ideation
You're validating a product hunch before coding. Three sub-agents: one on Reddit and forum signals about the pain, one on direct and indirect competitors, one on early-user conversations in your DMs.
"Launch a sub-agent that scans r/sysadmin and r/devops over the last 90 days, give me the recurring patterns of complaints touching [domain]."
Head of Ops / RevOps · Audit
You're investigating a churn signal or process friction. Three sub-agents: one on support tickets, one on CRM data, one on sales meeting notes.
"Launch a sub-agent that crosses Q1 'cancel' support tickets with the latest sales interactions, identify 3 early signals."

The pattern is always the same: three heterogeneous sources, three sub-agents in parallel, one synthesis that lets you decide without having collected yourself.

From ad hoc to custom agent: codify what works

Discovery is something you probably do twice a month. Rather than rewriting the same brief every time, you codify it in a custom agent. It's a markdown file you drop in .claude/agents/ in your project (or ~/.claude/agents/ to share it across projects), and Claude Code picks it up on its own next time you say "do me a discovery". Same logic as your Skills library, but for full roles, not just workflows.

Here's what the discovery-pm.md file looks like:

~/.claude/agents/discovery-pm.md
---
name: discovery-pm
description: Use to scope a product discovery. Aggregates customer voice, data, and competition on a given topic.
tools: Read, Grep, Glob, WebFetch
model: sonnet
---

You are a discovery analyst. When given a topic:

1. Customer voice: explore Confluence exports, Teams notes, Jira tickets. Output 3 patterns with verbatims and criticality.
2. Product data: if an analytics MCP is available, output adoption, funnel, cohort comparison.
3. Competition: web search across 4 competitors, comparative table, opportunity.

Always cite the source. Never invent a metric. If you can't find it, say so.

From there, the next discovery looks like this:

@discovery-pm do me a discovery on [new topic], sources Confluence/Teams/Jira and competitors Acme/Bolt/Cetra/Drift.
Custom agent invoked · 3 sources, 4 competitors, standard format
Discovery in progress · summary in 8 minutes.

You just turned an hour of verbal briefing into a one-liner. And the agent will keep the same rigor on the tenth time as on the first.

The classic traps

Everyone goes through at least one of these three pitfalls when getting started.

The vague brief
Too vague

Check if there's anything to fix in the document.

The agent doesn't know what you're looking for. It comes back with a generic list of "points of attention" that's useless.

Explicit brief

Read this memo. Play the role of a skeptical investor. Output the 3 weakest arguments with a rewording suggestion for each.

Clear objective, defined method, scoped output format.

Over-recruitment
20 agents

One agent for review, one for docs, one for security, one for migration, one for release notes, one for... Claude doesn't know which one to pick anymore and ends up doing it itself.

2 or 3 agents

You add a new custom agent only when you've redone the same ad hoc brief twice in a week.

A role that fits in one clear sentence, no overlap.

Tools too broad
All tools

You create a reviewer agent and let it have all default tools. The day you ask it to review a doc, it rewrites it.

Minimum tools

A reviewer agent should only have Read and Grep. A web research agent only needs WebFetch.

What you don't give, it can't use.
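Codified as a custom agent, the reviewer from this last trap fits in a few lines, using the same frontmatter format as discovery-pm.md above. The file name and wording here are illustrative:

```markdown
---
name: doc-reviewer
description: Use to review a memo or document. Read-only critic.
tools: Read, Grep
model: sonnet
---

You are a skeptical reviewer. Read the document you are pointed at.
Output the 3 weakest arguments with a rewording suggestion for each.
You have no write access, so you can critique but never rewrite.
```

The restriction isn't in the prose, it's in the tools line: even if the agent decides to "fix" the doc, it has nothing to do it with.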

Three levels of delegation, to keep in mind

Three numbered cards: 1. Main conversation (you who pilot, the full context, all tools, session memory). 2. Ad hoc sub-agent (brief for a task, fresh context, scoped output, disposable after use, perfect for an exploration). 3. Custom agent (the brief codified, a reusable .md file, Claude picks it on its own when the context fits). From pilot to delegated task to job description.

Three levels that don't replace each other, they stack.

You always start with the main conversation. You're the pilot, you decide what to delegate.

You launch an ad hoc sub-agent when you sense you're about to load your context for nothing (10 files to read, broad web research, an audit that needs a fresh look).

You codify into a custom agent when you realize you redo the same brief twice a week. Build one, not ten: start with one, see how it holds, then expand.

The reflex to keep

When you hesitate between doing it yourself and delegating, launch a sub-agent. What it costs you in tokens, it saves you in mental load.

The rule that works: if the task fits in a five-line written brief, and the output is a deliverable you can read, it's a job for a sub-agent. If you need quick back-and-forth or to build a shared understanding, you stay in the main conversation.

Sub-agents don't replace your main conversation. They protect it. The more you offload what can be offloaded, the more it stays clear, short, sharp. And the sharper it is, the better your decisions.

This week, one concrete action

Pick a task you do twice a month that eats up a morning (review of a deliverable, competitive scan, audit). Brief a sub-agent for that task, run it once.

If it works, encapsulate the brief in a custom agent and store it next to your Skills. You just hired a free team member, available next Monday at 9am.

Beyond productivity, this is a posture shift. The PM who spends two days collecting before every kick-off no longer does the same job as the one who spends two hours on it. Not because they're worse. Because they do by hand what others delegate. Sub-agents don't make you faster. They make you available again for the real questions: why are we tackling this problem, what do we decide, what do we own.

The right reflex isn't to delegate everything. It's to know what deserves to stay in your head, and what can leave.

Next step on the path: my full setup, where we assemble Skills, MCP and sub-agents into a system that follows you from prompt to product.

If this article saved you time, it'll save time for someone in your network.
