Claude Code sub-agents: deliver a discovery in forty minutes
A Product Manager's full walkthrough: how to brief three Claude Code sub-agents in parallel (customer voice, product data, competitors) to turn a morning of research into forty minutes of synthesis.
The impossible Monday morning brief
Monday, 9am. The founder pings you: "Customers keep bringing up onboarding. Can we tackle it? I need a brief by Wednesday."
You know that "customers keep bringing it up" means everything and nothing. Before scoping a solution, you have to scope the problem. That's the Product Manager's job: not delivering a roadmap, but understanding what we're actually trying to solve.
Before sub-agents, your day looked like this:
- 9:30am. You open Confluence, dig through the product space for old specs and user research notes.
- 10:30am. You jump to Jira, tour the customer tickets on the topic, copy the most telling ones into a doc.
- 11:30am. You scan the Teams notes from sales calls over the last two months.
- 2pm. You run a query in product data, export to CSV. You roll into the competitive scan.
- Tomorrow morning, you write the brief.
A day and a half, most of it spent collecting rather than thinking.
With Claude Code sub-agents, the same day fits into forty minutes. You brief three agents in parallel, each on one source. You head into your 1:1, a customer call, lunch. You come back at 2pm, three sourced summaries are sitting in front of you. No more searching. You read, you cross-reference, you decide.
That's what we're going to walk through, step by step.
What a Claude Code sub-agent actually is
A sub-agent is a fresh Claude Code instance you launch from your main conversation. It knows nothing of what you've discussed before. It receives a brief, does its job in its own context, hands you back a summary, and disappears. Under the hood, it's what Anthropic's docs call the Task tool.
Three properties to know:
- It has its own context. Everything it reads (50 files, 30 Jira tickets, 15 web pages) stays with it. Your main conversation only sees the final summary.
- It can run in parallel. You launch three sub-agents in the same message, they work simultaneously. Wall time gets divided.
- You choose its tools. A sub-agent that should read files doesn't need write access. A sub-agent doing web research only needs WebFetch. You restrict by default, which avoids surprises.
In Claude Code, you trigger a sub-agent by asking explicitly, in natural language:
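```
Launch a sub-agent to read everything in ./reports/ and bring me back a one-page summary. Read-only, markdown output.
```

The wording here is illustrative (and ./reports/ is a placeholder): any explicit "launch a sub-agent to..." in plain language is enough, and Claude Code maps it to the Task tool.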
You can also type /agents to see the list of your custom agents and pick one. Or @agent-name mid-message to invoke a specific agent. Same logic as your Skills: Claude loads what it has on hand and triggers it when the context fits. More on that below.
The walkthrough: your three sub-agents on a discovery
Here's how the founder's discovery actually plays out, in forty minutes.
Sub-agent 1: customer voice
You start with what matters most: what are customers actually saying? You open Claude Code and type:
Launch a sub-agent to go through the Confluence exports in ./confluence/, the call notes in ./teams-notes/, and the Jira export ./jira-export.csv. Find anything that mentions onboarding.
Give me back: (1) the 3 most frequent recurring patterns, (2) two customer verbatims per pattern with the exact source, (3) a criticality estimate based on tone and frequency. Markdown format. No invention: if you don't find it, say so.
Three minutes later, here's what comes back:
"We dropped the onboarding after 3 days, the team disengaged." · Sales call Acme, 2026-03-15
"My sales reps only use 30% of the platform." · Jira ticket #4521
Criticality: high (mentioned in 4 out of 7 churns this quarter).
Pattern 2: missing auto-config · 28 mentions, 8 customers
"Every new admin has to redo the same 40 clicks." · User research note, 2026-02-20
"We want to be able to clone our setup." · Confluence/feedback-q1.md
Criticality: medium (recurring frustration, not blocking).
Pattern 3: reporting too technical · 15 mentions, 5 customers
Verbatims and sources provided.
Criticality: low (request, not complaint).
You haven't read a single ticket. You've got a clean, sourced summary, with verbatims and a criticality estimate you can directly challenge if it feels off.
Sub-agent 2: product data
While the first one runs, you launch the second right away. This time, the agent goes after data through the MCP you've connected to your analytics tool.
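The brief, roughly (metric names are illustrative; the exact queries depend on what your analytics MCP exposes):

```
Launch a second sub-agent in parallel. Through the analytics MCP, pull: overall adoption of the module, the activation funnel step by step, and adoption by account age (less than 6 months vs more than 12). Give me back 3 insights max, each backed by a number. No speculation.
```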
Five minutes later:
Overall adoption: 38%.
Activation funnel: 41% drop-off at step 3 (initial config).
Cohorts: < 6 months → 22% adoption · > 12 months → 51% adoption.
Insight 1: friction is concentrated in the activation phase, not in recurring usage. Accounts that pass step 3 stick around.
Insight 2: the gap between recent and older cohorts suggests an onboarding problem more than a product one.
Insight 3: the 38% who do adopt use it on average 4.2 times per week, which is solid. The problem is crossing the threshold, not perceived value.
This lines up with what sub-agent 1 said: customer voice talks about onboarding being too long, the data confirms a drop-off at activation step 3. You already have your problem statement without touching a single Excel sheet.
Sub-agent 3: competition
The third one tackles the external scan. This one gets web access only.
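The brief, something like this (it mirrors the output below; scope and competitor list are yours to set):

```
Launch a third sub-agent, web access only. Scan how Bolt, Cetra and Drift handle onboarding: setup model, time to first value, gaps reported in public feedback. One short paragraph per competitor, then one differentiating opportunity for us. Cite your sources.
```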
Seven minutes later:
Bolt: generous free tier, monetization on integrations. Self-serve onboarding but lightly guided. Gap: manual initial config, like ours.
Cetra: pure self-serve focus, claims "first value in 5 minutes". Gap from feedback: limited feature set beyond onboarding.
Drift: industry-templated approach. Gap: no fine-tuning possible.
Differentiating opportunity: none of them offers auto-config based on team profile. Bolt and Drift partly tried, didn't go all the way. Could be a positioning angle.
You have the three pieces. Customer voice, product data, competition. You haven't opened a tab, read a PDF, exported a CSV.
The brief, now
With three summaries in front of you, the brief writes itself in forty minutes because there's nothing left to look up. You read, you cross-reference (the onboarding pattern from sub-agent 1 and the step 3 drop-off from sub-agent 2 say the same thing), you spot the angle (the auto-config no competitor holds), you write.
You're no longer the PM who collects. You're the PM who decides.
Why Claude Code sub-agents work
What we just did rests on the three properties from above. If you've understood them, you know when to launch a sub-agent and when to stay in the main conversation.
The isolated context. Sub-agent 1 probably processed 80,000 tokens (verbatims, tickets, call notes). All of that stayed with it. In your main conversation, you only received the final summary, around 600 tokens. Your quota is barely dented, and the thread stays readable.
Parallelism. The three sub-agents ran simultaneously. Seven minutes total instead of fifteen in series. That's what turns a morning into forty minutes when you have several independent sources.
Restricted tools. Sub-agent 1 only had Read and Grep (no risk of modifying a file). Sub-agent 2 only had the analytics MCP. Sub-agent 3, only WebFetch. No one can mess up outside their scope. Same logic as access rights in a company: by default, the minimum.
What if you're not a PM?
Discovery is just one case. Any role that runs through "I collect three sources, I synthesize, I decide" can apply the same mechanics.
The pattern is always the same: three heterogeneous sources, three sub-agents in parallel, one synthesis that lets you decide without having collected yourself.
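A launch message that transposes it, as a template (placeholders in brackets; adapt the deliverables to your role):

```
In parallel, launch three sub-agents:
1. One reads [internal source], read-only, and outputs the 3 most recurring patterns with quotes and sources.
2. One pulls [quantitative source] and outputs 3 insights, each backed by a number.
3. One scans the web on [external question] and outputs a comparison plus one opportunity.
Each reports back in markdown. I'll do the synthesis.
```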
From ad hoc to custom agent: codify what works
Discovery is something you probably do twice a month. Rather than rewriting the same brief every time, you codify it in a custom agent. It's a markdown file you drop in .claude/agents/, and Claude Code picks it up on its own next time you say "do me a discovery". Same logic as your Skills library, but for full roles, not just workflows.
Here's what the discovery-pm.md file looks like:
---
name: discovery-pm
description: Use to scope a product discovery. Aggregates customer voice, data, and competition on a given topic.
tools: Read, Grep, Glob, WebFetch
model: sonnet
---
You are a discovery analyst. When given a topic:
1. Customer voice: explore Confluence exports, Teams notes, Jira tickets. Output 3 patterns with verbatims and criticality.
2. Product data: if an analytics MCP is available, output adoption, funnel, cohort comparison.
3. Competition: web search across 4 competitors, comparative table, opportunity.
Always cite the source. Never invent a metric. If you can't find it, say so.
From there, the next discovery looks like this:
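```
@discovery-pm Do me a discovery on onboarding. Sources: ./confluence/, ./teams-notes/, ./jira-export.csv.
```

The @-mention is optional: the description field is enough for Claude to route a plain "do me a discovery" to the agent.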
You just turned an hour of verbal briefing into a one-liner. And the agent will keep the same rigor on the tenth time as on the first.
The classic traps
Everyone goes through at least one of these three pitfalls when getting started.
Trap 1: the vague brief. You type: "Check if there's anything to fix in the document." The agent doesn't know what you're looking for, so it comes back with a generic list of "points of attention" that's useless.
What works: "Read this memo. Play the role of a skeptical investor. Output the 3 weakest arguments with a rewording suggestion for each." Clear objective, defined method, scoped output format.
Trap 2: agent sprawl. One agent for review, one for docs, one for security, one for migration, one for release notes, one for... Claude no longer knows which one to pick and ends up doing the work itself. The rule: add a new custom agent only when you've redone the same ad hoc brief twice in a week. A good agent has a role that fits in one clear sentence, with no overlap.
Trap 3: default tools. You create a reviewer agent and leave it every tool by default. The day you ask it to review a doc, it rewrites it. A reviewer agent should only have Read and Grep; a web research agent only needs WebFetch. What you don't give, it can't use.
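Concretely, the restriction lives in the agent's frontmatter. A minimal sketch for the reviewer case (name and prompt are illustrative):

```
---
name: doc-reviewer
description: Reviews a written deliverable and returns numbered comments. Read-only.
tools: Read, Grep
model: sonnet
---

You review documents. Output numbered comments with file and line references. Never rewrite: quote, critique, suggest.
```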
Three levels of delegation, to keep in mind
Three levels that don't replace each other; they stack.
- The main conversation. You always start here. You're the pilot, you decide what to delegate.
- The ad hoc sub-agent. You launch one when you sense you're about to load your context for nothing (10 files to read, broad web research, an audit that needs a fresh look).
- The custom agent. You codify one when you realize you redo the same brief twice a week. Build one, not ten: start with one, see how it holds, then expand.
The reflex to keep
When you hesitate between doing it yourself and delegating, launch a sub-agent. What it costs you in tokens, it pays back in mental load.
The rule that works: if the task fits in a five-line written brief, and the output is a deliverable you can read, it's a job for a sub-agent. If you need quick back-and-forth or to build a shared understanding, you stay in the main conversation.
Sub-agents don't replace your main conversation. They protect it. The more you offload what can be offloaded, the more it stays clear, short, sharp. And the sharper it is, the better your decisions.
Pick a task you do twice a month that eats up a morning (review of a deliverable, competitive scan, audit). Brief a sub-agent for that task, run it once.
If it works, encapsulate the brief in a custom agent and store it next to your Skills. You just hired a free team member, available next Monday at 9am.
Beyond productivity, this is a posture shift. The PM who spends two days collecting before every kick-off no longer does the same job as the one who spends two hours on it. Not because they're worse. Because they do by hand what others delegate. Sub-agents don't make you faster. They make you available again for the real questions: why are we tackling this problem, what do we decide, what do we own.
The right reflex isn't to delegate everything. It's to know what deserves to stay in your head, and what can leave.
Next step on the path: my full setup, where we assemble Skills, MCP and sub-agents into a system that follows you from prompt to product.