March 29, 2026

Your Product Agent Keeps the User in the Room.

Features are costs until they deliver value. The Product agent knows the difference.


Your team is debating what to build next. Someone says "competitors have it." Someone else says "a customer asked for it." A third person has a great idea they thought of in the shower. Without a framework, the loudest voice wins — and the loudest voice isn't always right.

The Product subagent brings a user-first lens to every discussion. It doesn't build — it decides what's worth building, for whom, and in what order.

Start With the Problem, Not the Solution

Before anyone starts designing or coding, the Product agent asks the question nobody wants to answer: "What problem are we solving? Can we state it in one sentence without mentioning the solution?" If you can't, you're building a solution in search of a problem — and that almost never ends well.

It digs deeper: Who has this problem? How often do they hit it? How painful is it? What do they do today without this feature? What happens if we just don't build it? These questions feel annoying in the moment, but they prevent the team from spending weeks building something nobody uses.

Validation matters more than opinions. Five user conversations reveal 80% of the patterns. Usage data shows where people actually spend time — and where they drop off. Support tickets reveal the top complaints. The Product agent weighs evidence over intuition.

Cutting Scope Is the Superpower

Scope is the number one variable in shipping. Cut scope before cutting quality or extending timelines.

The approach: start from zero and add what's essential. Don't start from "everything" and try to cut. Phase 1 is the core loop, even if it's ugly. Phase 2 is quality and edge cases. Phase 3 is delight. And "V2" is not a plan — it's a graveyard. If a feature matters, it's in v1. If it doesn't, it's probably never.

When prioritization gets fuzzy, the Product agent uses RICE: Reach (how many users?), Impact (how much behavior change?), Confidence (how sure are we?), Effort (how many person-weeks?). It distinguishes "on fire" (broken, losing revenue — drop everything) from "high leverage" (small effort, big reach — do soon) from "trap" (large effort, uncertain impact, "but competitors have it" — almost always no).
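The RICE factors combine in the standard way: score = (Reach × Impact × Confidence) / Effort. A minimal sketch — the backlog items and factor values below are hypothetical, and the common scale conventions (Impact roughly 0.25–3, Confidence as a 0–1 fraction) are assumptions, not from this article:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Standard RICE formula: Reach x Impact x Confidence, divided by Effort (person-weeks)."""
    if effort <= 0:
        raise ValueError("effort must be positive")
    return reach * impact * confidence / effort

# Hypothetical backlog items, for illustration only.
backlog = {
    # "On fire": broken checkout, huge reach, high confidence, small effort.
    "fix checkout bug": rice_score(reach=5000, impact=2.0, confidence=0.9, effort=1),
    # "Trap": competitor parity, modest reach, low confidence, large effort.
    "competitor parity": rice_score(reach=800, impact=0.5, confidence=0.3, effort=8),
}

# Sort highest score first to get a working priority order.
ranked = sorted(backlog, key=backlog.get, reverse=True)
```

Even rough inputs are enough: the point of the score is to make the "on fire" item and the "trap" land far apart, so the argument shifts from opinions to the factor estimates themselves.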

Measure Outcomes, Not Outputs

Every feature needs a success metric before development starts. If you can't measure it, you can't prove it worked. Leading indicators (activation rate, feature adoption) predict success. Lagging indicators (retention, revenue) confirm it. Guardrail metrics (performance, error rate) ensure you don't break existing things.

The Product agent pushes the team to set a target before launch: "We'll know it's working if X increases by Y% within Z weeks." If the metric doesn't move after launch, the feature failed — and having the courage to revert or iterate is what separates good product teams from bad ones.
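The "X increases by Y% within Z weeks" target can be made mechanical, so the post-launch call isn't a judgment call. A minimal sketch — the metric names and numbers are illustrative, not from the article:

```python
def target_met(baseline: float, observed: float, target_pct: float) -> bool:
    """True if the metric rose by at least target_pct percent over its baseline."""
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    lift_pct = (observed - baseline) / baseline * 100
    return lift_pct >= target_pct

# Hypothetical target: activation rate up 10% within the review window.
# Baseline 40% -> observed 45% is a 12.5% relative lift: target met.
# Baseline 40% -> observed 41% is a 2.5% lift: the feature failed its target.
```

Writing the check down before launch is the whole trick: once the threshold is fixed, "revert or iterate" becomes the default answer to a miss rather than a debate.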


The teams that build what users want aren't better at guessing. They're the ones where user feedback has a seat at every table.