A Product Leader's Operating Philosophy

How I Think About Product

Thirteen convictions shaped by building products, leading teams, and learning, sometimes painfully, what actually works.

Not a universal framework. One product leader's operating philosophy. Adapt what's useful.

On Strategy
01
Goals are not strategy.
"We want to grow 40% this year" is a goal. Strategy is the diagnosis of what's blocking that growth, and a coherent approach to overcome it. Most companies have goals and call it strategy.

→ When the team doesn't agree on what's actually blocking growth, they can't make good prioritization decisions. Diagnosis is 80% of the work.

From experience

I led a complete market pivot. We diagnosed which market segment actually needed us and deliberately skipped the easiest growth path: scaling in a segment we'd already proven. The market window for larger customers set the pace, not our comfort level. The pivot succeeded in 11 months. It wouldn't have happened if we'd started with "grow 40%" instead of "why aren't we winning, and when does the window close?"

02
A strategy without "no" is not a strategy.
Every real strategy implies things you've explicitly chosen not to do. Write down what you're not working on and why. Non-goals are as important as goals.
From experience

During a compressed delivery timeline, maintaining an explicit "what we won't do" list was the difference between shipping and spiraling. In another role, I chose one strategic customer segment over another, both valuable, both with strong business cases. "Both" wasn't realistic given our capacity. Saying that out loud, and choosing, unlocked the team.

03
The CPO's job is strategy. The team's job is to solve the problem.
Product teams with real ownership thrive on strategic clarity: the destination and the constraints. Without it, even the best teams stall, waiting for direction they shouldn't need to ask for. But this isn't a monopoly on strategic thinking. Teams must think strategically too, about their domain, their users, their trade-offs. The difference is level: the CPO sets the direction, the team owns the strategy for getting there.

→ Measure on outcomes, not output. Give the team the problem, not the solution.

From experience

The best product work I've seen happened when I gave the team a clear strategic diagnosis and an outcome to own, then stayed out of the solution. The worst happened when I gave vague direction and the team spent weeks guessing what leadership actually wanted. Strategic clarity isn't micromanagement; it's the precondition for autonomy.

On Discovery
04
Discovery is a continuous habit, not a project phase.
Quarterly research sprints are too slow. By the time insights are synthesized and presented, the team has already built something. Continuous discovery, regular customer contact built into the trio's rhythm, is how good teams stay calibrated. The frequency depends on context: in B2B, access often runs through CS and Sales. What matters is fresh insight, not a rigid schedule.

→ Discovery is a trio discipline: PM, designer, and engineer. The designer often sees interaction patterns and hesitation that never get articulated in interviews.

From experience

Running seven design sprints on a single product area, speaking with 35 customers, I discovered something humbling: we weren't just wrong about specific answers; we were wrong about which questions mattered. Regular exposure doesn't just correct your answers. It corrects your questions.

05
Customers will tell you what they want. They won't tell you why.
"What do you want?" gives feature requests. "Walk me through the last time you tried to do X?" gives real understanding of context, friction, and workarounds. Always interview for behavior, not preferences.

→ Use JTBD method. Never ask about hypothetical future behavior, only past behavior.

From experience

We were about to retire a legacy information channel. The internal team was adamant: users depend on this. So we went to the locations. Observed actual behavior. Visited the target demographic directly. The physical artifacts sat unused in warehouses. Stated preferences and observed behavior told opposite stories. We retired the channel. Zero complaints.

At one company, customer success maintained a list of "blockers for adoption": what prevented customers from going all-in. Useful as a signal. Dangerous as a diagnosis. Our job in product was to look past the blocker and find the underlying problem. The list told us where to look. Never what to build.

06
The opportunity lives in the gap between importance and satisfaction.
An opportunity is not any pain point; it's one that matters deeply to a customer who is poorly served today. High importance + low satisfaction = opportunity. Low importance or high satisfaction = not worth prioritizing, regardless of how loudly it's requested.
From experience

We faced competing demands from two customer segments. One was loud and well-organized, but already reasonably well-served. The other was quieter but deeply underserved on capabilities that mattered to their daily work. We chose the underserved group. Zero churn on either side. We understood both segments well enough to know the deprioritized one could wait. The loudest customers aren't always the most underserved.

On Prioritization
07
Output is the wrong unit. Outcome is the only unit that matters.
Teams that ship a lot and don't move the needle optimize for output. The right question is never "how many features did we deliver?" It's "what changed for customers because we delivered?" This distinction separates a product team from a feature factory.

→ Most product teams can't answer a basic question: are we delivering more value than we cost? Seven highly paid people, no clear view on impact. If you can't connect your work to a customer outcome, you're guessing, and expensive guessing doesn't scale.

From experience

When we replaced a 30-year legacy system, the question was never "how many features have we shipped?" The question was: can this customer run their entire operation on our product, without keeping the old system as a parallel backup? That single criterion, operational viability, shaped every team's priorities for a year. Features that didn't move customers toward full adoption got deprioritized, regardless of how "ready" they were.

08
The roadmap is a hypothesis, not a commitment.
A roadmap that never changes isn't connected to learning. Now/Next/Later makes the uncertainty gradient visible: we're confident about Now, directional about Next, betting on Later. Internally, this is non-negotiable. Externally, you need two things: a version customers can plan against, and the discipline to only commit to what you actually control. That means concrete timelines for Now, honest direction for Next, and silence on Later until it moves up.

→ The internal roadmap is a learning tool. The external roadmap is a trust instrument. Build trust by committing to less and delivering it consistently, not by presenting an ambitious timeline you'll quietly revise later.

From experience

I once presented a roadmap with fuzzy dates to a key customer. The room went cold. They needed to plan their own implementation, train their staff, budget for the transition. "Directional" meant nothing to them; they heard "we don't know." That taught me: the product team's uncertainty is real, but it's yours to manage. The customer deserves a commitment you can keep, not a window into your internal planning process.

Earlier, I'd inherited a product where the backlog was a graveyard of sales promises: features committed to customers before engineering ever saw them. The roadmap had become a contract. The fix wasn't better estimation. It was changing who decides what gets built, and learning to say "this is what we're delivering this quarter" instead of either overpromising or hedging.

09
Opportunity cost beats ROI as a prioritization lens.
"Will this generate positive returns?" is a low bar. "Is this the best use of our limited attention?" is the right question. Every yes to one thing is implicitly a no to everything else. Make the trade-off explicit.

→ For every initiative added to the roadmap: what are we removing, and are we agreed this is more important?

From experience

At a 4-person startup running a platform across 5 Nordic markets, every customer had legitimate needs: different pricing models, different integration requirements. Positive ROI cases everywhere. But a team of four can't serve five markets equally. We had to choose which needs to build for now and which to defer, knowing that the format we used to gather needs (top-10 lists per customer) was itself creating false expectations. The real discipline wasn't saying no. It was redesigning how we collected input so that "no" didn't feel like betrayal.

On Execution & Leadership
10
Writing is thinking. The document is a byproduct.
The real value of a PRD, a strategy kernel, or an OKR document isn't the artifact. It's that writing forces you to think. You think better when you write. You distill to what matters. A PRD an engineer can't challenge isn't specific enough. A 20-page PRD has confused thoroughness with clarity. Write to think, then make that thinking readable.
From experience

I watched someone follow our PRD template meticulously: every section completed, every field filled. The document was flawless. The thinking was absent. That's when I understood: templates can become a substitute for the work they're supposed to capture.

OKRs are the same trap, and early in my career, I fell into it. Days in offsites debating objectives that became wishlists. Half abandoned by mid-quarter. The problem was never the OKR framework. It was setting objectives before doing the diagnosis. When you skip the strategic thinking and jump straight to "let's set an OKR for that," you're guessing with confidence. The teams that used OKR-writing as a way to surface weak strategic thinking got sharp goals in hours. The teams that used it as a planning ritual got polished documents and no direction.

But this doesn't mean strategy needs to be slow. I treat strategy like a product: ship an MVP, a one-pager on day one, then iterate. An 80% strategy you can test and strengthen beats a perfect document that arrives three months late. The writing surfaces where your thinking is weak. Stakeholders can react to something concrete instead of debating abstractions. Build, measure, learn, applied to strategy itself.

11
Slow delivery is almost never an engineering problem.
Slow delivery is usually a combination of unclear problem formulations, late scope changes, insufficient discovery, and poor PM-engineering collaboration. Fix what's upstream before optimizing what's downstream.
From experience

We pivoted into a new market and delivered a complete platform replacement in 11 months. The whole company understood why speed mattered. A new commercial leader made it brutally clear: time to market isn't a metric. It's survival. That understanding changed everything upstream. Problem definitions were written before solutions were discussed. Strategic context was shared company-wide. Every team knew exactly what they were solving and why it mattered now. Product idealism quietly stepped aside. The market window wouldn't wait. The product organization exists to deliver on business goals. That lesson changed how I prioritize permanently.

12
The team's strategic clarity is your product.
If you're CPO and only you understand the strategy, you've built a bottleneck. The job isn't to have the clearest strategic view. It's to create shared understanding alive enough for the team to make good decisions when you're not in the room.

→ Test regularly: can team members explain why we prioritize what we do? Can they say no to a stakeholder request with a good reason? If not, the strategy work isn't done.

From experience

The clearest test I know: sit in on a stakeholder meeting where the team is asked to take on something off-strategy. If they can explain why it doesn't fit, without checking with you first, the strategic clarity is real. If they hesitate, defer, or say yes to avoid conflict, the strategy only lives in your head. I've seen both. The difference is never the team's capability. It's whether the leader did the work of making the strategy shared.

On Reality
13
The tools are the map, not the territory.
Product work is messy. Insight comes from a support call, a Slack thread, and a half-finished note, not from a completed template. High-performing teams understand this: they use structure to make their thinking visible to others, not to replace it. A PRD is valuable because it forces you to articulate the problem, not because it's complete.

→ Never ask "did we fill out the template?" Ask "do we understand enough to make a good decision?" Process is scaffolding, not the goal.

From experience

I never insisted on perfect PRDs. Sometimes we didn't write one at all. The templates and processes I put in place were training wheels: scaffolding for PMs and teams who needed structure to develop their thinking, not a compliance checklist. The best teams outgrew the templates quickly. They still used the agreed formats, but loosely — one PM basically ran discovery documentation through Slack canvases, and it worked because the thinking was sharp. The teams that struggled weren't the ones who skipped the template. They were the ones who filled it out without thinking.

Glossary

Discovery

Understanding customer problems and testing potential solutions before committing to build. Runs parallel to Delivery, always. Answers: Are we building the right thing?

Delivery

Building and delivering validated solutions with high quality and predictable pace. Answers: Are we building it right?

Opportunity

A customer need, problem, or desire that, if addressed, would move a meaningful outcome. Framed as a problem, not a solution. "Users struggle to understand their own spending" is an opportunity. "Add a spending dashboard" is a solution.

Outcome

A measurable change in customer behavior or business results. Not a feature or deliverable. "30-day retention increases from 40% to 55%" is an outcome. "Launch onboarding wizard" is output.

Insight

A new understanding that changes what we believe we should build. Not a data point, but a conclusion. "Users churn" is not an insight. "Users who don't complete the core workflow in their first session have 4x higher churn" is an insight.

Struggle Moment

The specific, concrete episode where a customer experienced a problem with the current solution. The unit of insight in discovery. Named after a real person, in a real context, with real friction.

Opportunity Solution Tree (OST)

A structure linking outcome → opportunities → solutions → experiments. Keeps Discovery connected to strategy. Prevents insights from remaining loose anecdotes.

Appetite

How much time the team is willing to spend on a problem, not how long it will take. Constrains scope rather than growing with it. Ryan Singer's concept from Shape Up.

Avoid → Use instead
  • "MVP" → "Experiment" or "smallest bet that tests this assumption"
  • "Requirements" → "Opportunity" or "problem definition"
  • "We know our customers" → "We talked to N customers in the last 30 days"
  • "Best practice" → "What worked at [company] in [context]"
  • "Agile" → Name the specific practice: sprint, Shape Up, kanban

How the thinking connects

Product work isn't a collection of independent decisions. Each step shapes what's possible in the next. This is the sequence I follow, and the one this playbook is built around.

1
Diagnose the real challenge

What's actually blocking progress? Not goals, not aspirations. The specific obstacle. This diagnosis shapes every decision downstream.

Strategy Kernel

2
Set the strategic filter

Does this delight customers? Is it hard to copy? Does it improve the business? If it doesn't pass, it doesn't compete for attention, no matter how many people ask for it.

3
Find the real opportunities

Talk to customers. Understand what they actually do, not what they say they want. Map opportunities to outcomes. The struggle moment is where insight lives.

Discovery

4
Pick the best bets, and say no to the rest

Score what you've validated. Sequence in three horizons: committed, directional, exploratory. Protect the high-leverage work from the tyranny of what's urgent.

5
Validate before you build

Write the problem statement first. If you can't articulate why this matters, you're not ready. Map assumptions. Test the riskiest one. Kill what doesn't survive.

Toolkit phases

6
Build with a time budget, not a time estimate

Set how much time this is worth. Shape the scope to fit. If it doesn't ship in the budget, stop, don't extend. That's a signal, not a failure.

After launch: go back to step 1. Does the portfolio build durable advantage, or are we shipping things anyone can copy?

Decisions

How decisions get made

Four risks to manage and a strategy kernel to align on. Every product decision passes through this lens.

Four Product Risks · Strategy Kernel · Prioritization · Anti-patterns

Four Product Risks

All product decisions are risk management decisions. These four risks must be addressed early, not after something is built. Every checkpoint is fundamentally a check on the status of these four risks.

Value Risk
Will customers want this, use this, and pay for this?
  • User interviews (JTBD method): behavior, not preferences
  • Importance-satisfaction analysis: high importance + low satisfaction?
  • Willingness-to-switch test: would they leave what they use today?
Primary phase: Discover → Define
Usability Risk
Will users actually understand and use the solution, without our help?
  • User testing with lo-fi and hi-fi prototypes
  • Observation studies: watch users navigate without instructions
  • "First session" analysis: what do new users do in the first 5 minutes?
Primary phase: Design
Business Risk
Is the solution sustainable from a business perspective? Does it support strategy and deliver sufficient ROI?
  • DHM filter: Delight customers? Hard to copy? Improves margin?
  • Opportunity cost: what are we not building by choosing this?
  • GTM readiness: do Sales, CS, Support know what we're launching?
Primary phase: Define + Deliver
Technology Risk
Can we actually build this, with the capacity, technology, and timeframe we have?
  • Technical feasibility spike early in Design
  • Architecture review with engineering lead at Solution Review
  • Appetite assessment: is this worth 2 sprints? 5? 10?
Primary phase: Design → Deliver

Strategy Kernel

Diagnose the critical challenge the company faces and formulate a coherent strategic direction, before teams set OKRs. OKRs flow FROM the diagnosis, not the other way around.

  • Diagnosis — What is the critical challenge? Simplify reality into a manageable description of what's actually going on.
  • Guiding policy — What is our approach? One direction that excludes alternatives and explicitly says what we do NOT do.
  • Coherent actions — 2–4 reinforcing actions that make sense together, not just individually.

Inputs to the diagnosis:
  • Business metrics (retention, churn, revenue per segment)
  • Customer insight (interviews, NPS trends, support data)
  • Market data (competitor moves, regulatory changes)
  • Tech debt assessment from engineering
  • Board priorities and growth ambitions

Cadence:
  • 4–6 weeks before planning: CPO gathers inputs
  • 2–3 weeks before: CPO drafts the kernel
  • 1 week before: Leadership reviews diagnosis
  • Planning: CPO presents; teams derive OKRs
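The kernel is short enough to express as a structure, which also makes its constraints enforceable. A minimal sketch in Python; the field names are mine, not the template's:

```python
from dataclasses import dataclass

@dataclass
class StrategyKernel:
    diagnosis: str               # the critical challenge, simplified
    guiding_policy: str          # one direction that excludes alternatives
    non_goals: list[str]         # what we explicitly do NOT do
    coherent_actions: list[str]  # 2-4 reinforcing actions

    def __post_init__(self) -> None:
        if not self.non_goals:
            raise ValueError("a strategy without 'no' is not a strategy")
        if not 2 <= len(self.coherent_actions) <= 4:
            raise ValueError("2-4 coherent actions, not a wish list")
```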
Strategy Anti-Patterns

These feel like strategic work. They're not. I've fallen into most of these, the first one especially.

  • Goals as strategy: "Grow 40% this year" with nothing about how or why not. Fix: diagnosis first. What's actually blocking growth?
  • Vision without strategy: mission deck, values wall, no choices made. Fix: skip the vision deck and start with the diagnosis.
  • Procrastination as busyness: "I don't have time for strategy," a calendar full of status meetings, usually fear of formulating a bad one. Fix: block 2 days per quarter.
  • ROI-only prioritization: "Is this worth doing?" and the answer is always yes. Fix: ask "Is this the best use of our time?" The opportunity cost frame.

Prioritization

Prioritization isn't a scoring exercise; it's a series of filters. Each layer removes options that shouldn't compete for attention.

Layer 1: Strategic filter

Does this opportunity align with the strategic diagnosis? If the guiding policy says "focus on segment X," a feature that only serves segment Y doesn't pass, regardless of how many customers ask for it.

Classify what passes by the strategic value it creates:
  • Market access — Required for sales, pilots, or compliance
  • Customer differentiation — Makes us clearly better than alternatives
  • Scale value — Enables growth, efficiency, or operational leverage
  • Internal hygiene — Security, robustness, tech debt reduction

If an opportunity doesn't fit any category, or only fits "internal hygiene," question whether it belongs in this cycle.

Layer 2: Opportunity assessment

Before scoring solutions, assess the problem. Two dimensions matter:

How important is this job-to-be-done for users, and how well is it served today? High importance + low satisfaction = real opportunity. Low importance or high satisfaction = noise, regardless of volume.

The Kano model adds a second lens:
  • Must-have: Without this, the product doesn't work
  • Performance: Better execution = more satisfaction
  • Delighter: Unexpected value, competitive edge

Kano categories shift over time. Today's delighter is next year's must-have. Reassess each cycle.
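The importance-satisfaction gap has a common formalization: Ulwick's opportunity score, importance plus the unmet portion of satisfaction. A minimal sketch, assuming 1-10 survey scales; the jobs and numbers are invented for illustration:

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    # High importance + low satisfaction rises to the top;
    # well-served or unimportant jobs sink, however loudly requested.
    return importance + max(importance - satisfaction, 0)

jobs = {
    "reconcile monthly invoices": (9, 3),   # important, underserved
    "export data to CSV": (8, 8),           # important, already well served
    "customize dashboard colors": (3, 2),   # underserved, unimportant
}
for job, (imp, sat) in sorted(jobs.items(),
                              key=lambda kv: -opportunity_score(*kv[1])):
    print(f"{opportunity_score(imp, sat):>4.1f}  {job}")
```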

Layer 3: Solution scoring (ICE)

Once you've identified real opportunities, score the proposed solutions. ICE is simple enough to be useful, transparent enough to be challenged.

Impact: How much will this move the key result? Grounded in evidence, not hope. Before you score, write one sentence: what specific difference does this make for a real user? "Operations staff complete the core workflow in 2 clicks instead of 7" is impact. "Impact: 8" is not. If you can't describe it in plain language, you don't understand it well enough to score it.

Confidence: How strong is our evidence? Validated with users = high. Team assumption = low.

Effort: How much does it cost to build and ship? Include design, dev, QA, rollout.

ICE = (Impact × Confidence) / Effort. The score is a conversation starter, not a decision. If the top-scoring item doesn't feel right, interrogate the scores, don't ignore the feeling.

Layer 4: Explicit non-goals

Prioritization is incomplete without saying what you choose not to do. Every planning cycle, write down 2–3 things the team will explicitly not pursue, even if they're important. This protects focus and gives the team permission to say no.

Evidence maturity gate

Don't score what you haven't validated. Track where each opportunity sits:

  • Idea: Assumption only. Needs discovery before scoring.
  • Validated problem: Confirmed through user insight, data, or support patterns.
  • Validated solution: Solution tested positively with users.

Scoring an "idea" with ICE produces fiction. Move it to "validated problem" first, then score.

From experience

Early on, we tried scoring everything in a spreadsheet, impact, effort, reach, confidence, for 80+ items. Nobody trusted the scores. They were gut feel dressed up as math. What worked: first filtering by strategic fit (does this serve the pivot?), then assessing the problem (is this important and underserved?), and only then scoring the remaining solutions. The filtering removed 60% of items before we ever scored anything. The conversations got better because we were comparing 8 opportunities, not 80.

Prioritization anti-patterns

Score everything, decide nothing

80 items in a scoring spreadsheet where every item is 7/10. If the scores don't create clear separation, the framework isn't helping; it's hiding the hard conversation.

Loudest customer wins

Prioritizing based on who complains most. The loudest customers aren't the most underserved; they're the most organized. Importance vs. satisfaction reveals what volume doesn't.

Scoring without evidence

Impact 8, Confidence 7. Based on what? If the confidence score isn't tied to actual evidence (user interviews, data, experiments), ICE becomes a consensus exercise, not a prioritization tool.

No non-goals

A priority list without explicit non-goals is a wish list. If you can't name what you're choosing not to do, you haven't actually prioritized.

Discovery

Continuous, not periodic

Discovery is a weekly habit, like exercise. Quarterly research sprints are too slow. By the time insights are synthesized, the team has already built something based on assumptions.

Triage · Dual-track · Discovery rhythm · Opportunity Solution Tree · Anti-patterns

Triage

Not all work needs discovery. The first question is: Is the problem and solution already known?

Fast Track

Known problem, known solution

All three must be true:

  • Problem is known — no user research needed
  • Solution is obvious — no design ambiguity
  • Scope is limited — ≤ 5 working days

Requirements

  1. Trio approves scope (conversation)
  2. Build & ship
  3. Show result at Impact Review
Typical fast-track work: bugs, tech debt, compliance, integrations.
Full Process

Uncertainty exists. Discovery needed.

One or more of these are true:

  • Problem is unclear — needs user research
  • Solution is ambiguous — needs exploration
  • Scope is large or uncertain

Follow the six phases in the Toolkit with three checkpoints. No initiative moves forward without explicit go/no-go.
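The routing rule compresses to a single conjunction. A sketch, with field names of my own:

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    problem_known: bool     # no user research needed
    solution_obvious: bool  # no design ambiguity
    scope_days: int         # estimated working days

def triage(item: Initiative) -> str:
    # Fast track only when ALL three conditions hold; any uncertainty
    # routes the work through the six phases and three checkpoints.
    if item.problem_known and item.solution_obvious and item.scope_days <= 5:
        return "fast_track"   # trio approves scope, build & ship, Impact Review
    return "full_process"     # discovery needed; explicit go/no-go to proceed
```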

Dual-track: two lanes, always running in parallel

Discovery Track
Are we building the right thing?
  • Understand who customers are and what problems they have
  • Identify which problems are important enough to change behavior
  • Test solution directions quickly and cheaply: prototypes, experiments
  • Learn continuously, not once per quarter
Delivery Track
Are we building it right?
  • Translate validated design into working, production-quality code
  • Quality assurance throughout, not just at the end
  • Deliver with predictable pace
  • Make rollback easy if something goes wrong
While one initiative is in Delivery (sprint 1, 2, 3...), the team simultaneously works on discovery for the next opportunity. This gives predictable pace (Delivery never waits for Discovery), better decisions (Delivery always starts with validated insight), and continuous learning.

Discovery rhythm

The goal isn't a rigid weekly schedule. It's that the trio always has fresh customer insight to work from. Not stale research. Not assumptions. Recent contact with real users.

Regular customer contact

At least one customer interaction per week is the ambition. The trio (PM, designer, engineer) shares the exposure. In B2B, access often runs through CS and Sales. What matters is that someone on the team talked to a real user recently.

Synthesize immediately

10 minutes after every call. What surprised me? What quote do I want to remember? Was my hypothesis confirmed, challenged, or more complex than I thought? What changes in the opportunity tree? Insight decays fast. Capture it while it's sharp.

Share weekly

One key learning shared with the team each week: one paragraph, one insight. Update the OST. Note which assumptions were confirmed or challenged. This keeps discovery visible and accountable without ceremony.

Opportunity Solution Tree

The structure that keeps Discovery connected to strategy. Every customer insight lands here. Prevents findings from remaining loose anecdotes.

Outcome: Increase 30-day retention 40% → 55%
├── Opportunity: New users don't reach core value before they give up
│   ├── Solution: Guided setup wizard
│   │   └── Experiment: A/B test wizard vs. blank slate → measure day-1 core workflow completion
│   └── Solution: Pre-built templates at signup
│       └── Experiment: Offer 5 templates at registration; measure first-session completion rate
├── Opportunity: Users set up the product wrong for their use case
│   └── Solution: Use-case segmentation at signup (3-question quiz)
│       └── Experiment: Measure 14-day retention delta vs. control group
└── Opportunity: Product doesn't deliver visible value fast enough
    └── Solution: Real-time preview mode in empty states
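The tree is simple enough to keep as an explicit, shared structure instead of a diagram in one person's notebook. A minimal sketch of the node types, populated with the first branch above:

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    description: str
    metric: str                              # what we measure to learn

@dataclass
class Solution:
    description: str
    experiments: list[Experiment] = field(default_factory=list)

@dataclass
class Opportunity:
    statement: str                           # an unmet need, never a feature
    solutions: list[Solution] = field(default_factory=list)

@dataclass
class OutcomeTree:
    outcome: str                             # measurable behavior change
    opportunities: list[Opportunity] = field(default_factory=list)

tree = OutcomeTree(
    outcome="Increase 30-day retention 40% -> 55%",
    opportunities=[Opportunity(
        statement="New users don't reach core value before they give up",
        solutions=[Solution(
            description="Guided setup wizard",
            experiments=[Experiment(
                description="A/B test wizard vs. blank slate",
                metric="day-1 core workflow completion")],
        )],
    )],
)
```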

From opportunity to experiment

Each opportunity generates assumptions. The riskiest assumption gets tested first. The tree only works if it's visible to the trio and reflects what the team currently believes, not what they believed last quarter.

Opportunity: New users don't reach core value before they give up
├── Assumption: Users don't understand what core value looks like (Desirability, critical, low evidence)
│   └── riskiest → test first
│       Experiment: 5-second test: show empty state vs. pre-filled demo; measure "I understand what this does"
├── Assumption: Onboarding friction is the cause, not product complexity (Feasibility)
└── Assumption: Users who reach core value in session 1 retain 2x better (Viability)
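"Riskiest first" is just a sort: most critical, least evidence. A sketch; the 1-5 scales are an illustrative assumption, not part of the model:

```python
# (assumption, risk category, criticality 1-5, evidence strength 1-5)
assumptions = [
    ("Users don't understand what core value looks like", "desirability", 5, 1),
    ("Onboarding friction is the cause, not complexity",  "feasibility",  3, 2),
    ("Reaching core value in session 1 doubles retention", "viability",   4, 3),
]

# Most critical with the least evidence gets tested first.
riskiest = max(assumptions, key=lambda a: a[2] - a[3])
print("Test first:", riskiest[0])   # -> the desirability assumption
```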
Compare-and-contrast

Always test 2+ solutions per opportunity. Testing one solution invites confirmation bias. Comparing two forces you to understand which works better and why.

Anti-pattern

The tree lives in the PM's personal notebook. The trio has never seen it. Discovery insights go in, nothing comes out. A filing cabinet, not a thinking tool.

Discovery anti-patterns

These are the ways discovery goes wrong without anyone noticing. Each one feels like progress but produces no real insight. I've seen every one of these in practice, some more than once.

  • Validation theater: "run a quick study to validate," at the end, after the decision is made. Fix: falsify, don't validate. Design research to prove yourself wrong.
  • Middle-range research: interesting findings that don't change any decision. Fix: go macro (strategic) or micro (usability). Kill the middle.
  • Opinion interviewing: "Would you use this?" "Do you like this feature?" Fix: only past behavior counts. "Walk me through what you did last time."
  • Shallow interviews: "Great, tell me another story," breadth without depth. Fix: follow the thread. "What happened next?" "Why did you do it that way?"
  • Solution-as-opportunity: "Add a dashboard" appears as an opportunity in the OST. Fix: reframe as the unmet need: "Users can't see their progress."
  • Feature-request backlog: customers said they want X, so it goes on the roadmap. Fix: check whether anyone actually churned over this. Stated need ≠ real need.

Signs the system is working, or broken

Working
  • PMs can name 3+ customers they talked to in the last 2 weeks
  • OST is updated, not a static doc from last quarter
  • Roadmap decisions reference named customer episodes
  • Solutions in backlog can be traced to specific opportunities in the tree
  • Team knows which assumptions are still unvalidated
Broken
  • "We know our customers," but nobody interviewed one this month
  • Discovery is a sprint that happens right before a planning meeting
  • Customer quotes are always from the same 2–3 power users
  • OST lives in one person's head, not a shared, updated document
  • PRDs written without an opportunity brief
Toolkit

A library of tools for teams that need structure

Process phases, operating rhythm, and templates. Use what's relevant, skip the rest.

6-phase process · Operating rhythm · Templates

Process Reference

Six phases from Objective to Launch. Strategy lives in Decisions. Route work via triage first; not everything needs all six phases.

Phase 01

Objective

Define clear, measurable ambitions for the team for the coming period, derived from the strategic kernel, so everyone knows what they're working toward and why.

Questions to answer:
  • What are company ambitions this period, and what do they mean for our team?
  • What must we prove or achieve for the company to take the next step?
  • What are the critical assumptions we're operating under?
  • What are we not focusing on, and is everyone aligned on that?

Activities:
  • Review the strategic kernel
  • Define team Objectives and Key Results for the period
  • Fill out Team Overview (teamoversikt.md)
  • Identify dependencies to other teams
  • Book strategic planning meeting with stakeholders
OKR Format
Good Objective:
  • Qualitative: describes direction, not numbers
  • Tied to a real business problem
  • Achievable within the period
Example: "Make it easier for new customers to experience core value quickly"

Good Key Result:
  • Quantitative and measurable
  • Tied to outcomes, not output
  • Ambitious but realistic (~70% probability)
Example: "Increase % of new users who complete core workflow day 1 from 30% to 55%"
Checkpoint — Ready for Discover
  • Team OKRs defined and approved by CPO
  • Team Overview filled out and shared with relevant stakeholders
  • Shared understanding of what's in and out of scope for the period
  • Dependencies identified and communicated to other teams

Operating Rhythm

Five rhythms that keep the model alive. Three are checkpoints. Nothing moves forward without an explicit decision. For people development, see People.

Formal checkpoints work best as training wheels, not guardrails. Mature teams do informal alignment naturally; less experienced teams use the structure as scaffolding until they don't need it. If you still need formal gates after a year, the problem isn't process. It's trust.

Quarterly+

Strategic Planning

Rig the team for the coming period. Set OKRs. Clarify focus and non-focus.

Cadence: quarterly or every four months · Duration: 60 min
Attendees: PM (required), CPO (required), Designer, Engineering Lead, leadership team
  1. Company ambitions and strategy review (15 min)
  2. Team's proposed OKRs (20 min)
  3. Discussion: Are these the right bets? (20 min)
  4. OKR approval and focus confirmation (5 min)
Artefact: Completed teamoversikt.md: strategy, OKRs, learning goals, focus, non-focus, and team dependencies.
Checkpoint 1 — Kickoff

Confirm the problem is well enough defined to begin designing solutions. The trio owns the decision. Stakeholders are invited for input, not approval.

Cadence: per initiative · Duration: 60–90 min
Attendees: PM (presents), Designer, Engineering Lead, CPO (informed), CTO (if relevant)
  1. Goals and success criteria (15 min)
  2. Context and user insight: the struggle moment (20 min)
  3. Scope: what's in and explicitly what's out (15 min)
  4. Four risks identification and plan (15 min)
  5. Go / No-go decision (15 min)
Go criteria:
  • Problem based on fresh user insight, not assumptions or stale research
  • Struggle moment identified and shared. Everyone in the room understands the problem
  • Scope clear: in and out explicitly defined and agreed
  • Problem linked to a specific OKR
  • Four risks identified with a plan to address them in Design
  • All key participants aligned on prioritization
Checkpoint 2 — Solution Review

Review the proposed solution with key stakeholders and confirm alignment, before engineering starts development. Errors found here cost the least to fix.

Cadence: per initiative · Duration: 60–90 min
Attendees: PM (presents problem), Designer (presents UX), Engineering Lead (presents tech), CPO (informed), CTO (if relevant)
  1. Recap: the problem we're solving, and what changed since Kickoff (10 min)
  2. Solution presentation: PM, Designer, Engineering (25 min)
  3. User test results: who, what worked, what friction, what changed (15 min)
  4. Four risks status review (15 min)
  5. Go / No-go decision (15 min)
Go criteria:
  • Solution solves the problem defined in Kickoff, no unexplained scope drift
  • Minimum 3 users tested and confirmed understanding and intent to use
  • All four risks addressed, or residual risk explicitly accepted with a monitoring plan
  • PRD complete: problem, solution, scope, success criteria, non-goals
  • Engineering confirmed technical feasibility and agreed on appetite
  • All key participants aligned on the solution direction
Checkpoint 3 — Launch Readiness

Ensure everything is in place for a controlled launch: product, documentation, communication, and the entire organization. Not a technical check, but a coordination check.

Cadence: per launch, 1–2 weeks before · Duration: 45–60 min
Attendees: PM (leads), Engineering Lead, Designer, CS/Support Lead, CPO (informed), Marketing (if relevant)
  1. Product status: open bugs, monitoring, rollback plan (15 min)
  2. Documentation and training: CS/Support briefed? (10 min)
  3. Communication plan: who tells what to whom (10 min)
  4. GTM status: Sales, CS, Support, Marketing (10 min)
  5. Go / No-go decision with explicit rationale (10 min)

Go criteria:
  • All critical bugs resolved
  • Monitoring and alerting active
  • Rollback plan tested or verified
  • Documentation published or date set
  • CS and Support briefed and ready
  • Communication plan approved

No-go triggers:
  • Unresolved critical bug
  • CS/Support not briefed
  • Rollback plan missing
  • Critical legal or security check not completed
Post-launch

Impact Review

Evaluate the effect of what we launched, understand why results came out the way they did, and decide what to do next. Not a results presentation, but a learning session.

Cadence: 2–4 weeks post-launch · Duration: 60 min
Attendees: PM (leads), Designer, Engineering Lead, CPO (informed), CS Lead, Data/Analytics
  1. What were the success criteria? (5 min; prevents post-hoc rationalization)
  2. What happened? Metrics vs. goals (20 min)
  3. Why did it happen? Root cause analysis (20 min)
  4. What do we do next? Explicit decision (15 min)
Honest questions for impact review
1. Were we right? Not "are we satisfied" but "did our assumptions hold?"
2. What did we learn about users we didn't know? What changes in the OST?
3. What would we have done differently? In discovery, design, delivery?
4. What do we change in the next initiative? Process, not just product.

Deviating from goals is not failure; it's information. Failure is not learning from it.

Ongoing

1:1 & Coaching

Coaching is an operating rhythm: weekly 1:1s, monthly observation, quarterly reviews. See People for the full coaching model.

Weekly 1:1
Monthly observation
Quarterly review

Templates

Templates support the playbook. They're not the playbook itself. Use the sections that are relevant and skip the rest.

Set direction
Strategy
Strategy Kernel

Intentionally short (one page). Diagnosis of the critical challenge, guiding policy with explicit trade-offs, and 2–4 coherent actions. Input to OKR-setting.

Objective (phase 01)
Team Overview

OKRs derived from the strategic kernel, learning goals, focus, non-focus, and dependencies. Single source of truth for team direction.

Understand the problem
Discovery (ongoing)
User Interview Guide

Structured guide using JTBD method. Explore behavior, not preferences. Identify struggle moments. Use every week, not just at project start.

Discovery
Opportunity Canvas

Take an opportunity from the OST and scope it for action. Connects the user insight to a business case, importance–satisfaction analysis, and a clear recommendation on whether to pursue.

Discovery / Define
Opportunity Brief

Complete opportunity formulation: the checkpoint between discovery and PRD. Connects user evidence to business case with clear problem statement and success criteria.

Discovery / Design
Assumption Mapping

Map and prioritize critical assumptions by risk level and evidence. Design experiments to test the riskiest ones first. Input to Solution Review.

Make decisions
Kickoff
Pre-mortem

Identify risks the team knows about but hesitates to raise. Run in the last 20 minutes of kickoff. Surfaces blockers before they become surprises.

All checkpoints (1–3)
Checkpoint Decision

Document Go/No-go decisions with risk assessment and conditions. Used at all three checkpoints. Same template, different context. Archive with the initiative.

Define / Design
PRD

Product Requirements Document. Write to communicate, not to document. Specific enough for engineers to challenge. The checkpoint artifact for both Kickoff (problem) and Solution Review (solution).

Design & validate
Design
User Test Results

Document findings from user testing: patterns, quotes, and severity. Key input to Checkpoint 2: Solution Review. Keeps evidence structured and actionable.

Design / Deliver
Shaping Brief

Appetite, building blocks, and rabbit holes. Clarifies scope and trade-offs before build starts. Bridges the gap between validated solution and sprint planning.

Build & ship
Deliver
Sprint Plan

Plan and align the team around sprint goals, focus areas, and acceptance criteria. Keeps scope decisions visible. Updated each sprint.

Launch
Launch Brief

Pre-launch coordination document. Aligns Sales, CS, Support, and Marketing. GTM readiness checklist. The key artifact for Launch Readiness checkpoint.

Learn
Post-launch
Impact Report

Metrics vs. goals, root cause analysis, and learning notes. Closes the loop on every initiative. Run 2–4 weeks after launch in Impact Review.

People

You build products through people

Process without coaching is compliance, not product development. The model is half the job. Judgment is the other half, and judgment is developed through coaching.

What coaching is · Four judgment areas · Coaching rhythm · Maturity model · Anti-patterns

What coaching is — and isn't

A PM who follows every gate, template, and ritual perfectly but never involves users, never scopes for MVP, is a project manager with a fancier title. The difference is judgment. And judgment isn't built by process alone.

From experience

I gave a vocal skeptic full ownership of building our operating model. He followed every process step perfectly, gates, templates, rituals, but when he stepped into a PM role temporarily, he never involved users, never scoped for MVP. The model is half the job. Coaching is the other half.

Coaching is:
  • Observation + feedback on concrete behavior
  • Questions that force reflection
  • Helping the PM see blind spots
  • Developing judgment over time

Coaching is not:
  • Status updates
  • Giving the answer
  • Approving work
  • Fixing this week's problem

Key principle: coaching is about making the PM better, not making the product better this week. The product gets better as a consequence.

Four judgment areas

You don't coach on process compliance. You coach on judgment in four areas.

Problem judgment

Can they separate symptoms from root causes? Do they prioritize based on evidence, not the loudest voice?

Weak sign: jumps to solutions before the problem is understood. "The customer said they want X" without asking why.

User judgment

Do they understand users deeper than users understand themselves? Jobs-to-be-done. Struggle moments. Context.

Weak sign: quotes what users said without interpreting what they meant. Takes feature requests literally.

Solution judgment

Do they evaluate against the four risks (value, usability, business, tech)? Scope for MVP? Kill their own bad ideas?

Weak sign: married to the first solution. No compare-and-contrast. Scope grows without anyone saying stop.

Communication judgment

Can they explain why, not just what? Adapt the message for the trio, stakeholders, leadership, customers?

Weak sign: PRD nobody understands. Stakeholders get surprised. Engineering doesn't know why they're building it.

From experience

A PM on my team was drowning. First enterprise customer, a flood of feature requests channeled through Customer Success, and pressure to deliver on everything. He was trying to prioritize between twenty "critical" items, struggling to decide on the right solution for any of them. In our 1:1, I didn't help him prioritize the list. I told him to visit the customer. Go sit with the actual users. Watch them work. He came back with several eureka moments: half the requests from CS didn't match what users actually struggled with, and the problems they did have pointed to a fundamentally different solution than what was on the backlog. That one visit reshaped both the priorities and the approach. No framework did that. Direct user contact did.

Coaching rhythm

Four cadences that keep people development alive.

Weekly 1:1
30 min

Not a status update. The PM owns the agenda. The product lead listens, asks questions, gives feedback.

  1. What's the most important decision you made this week, and what did you base it on?
  2. What are you most uncertain about right now?
  3. What did you learn from your last user contact?

Q1 reveals decision quality. Q2 reveals self-awareness. Q3 reveals whether the discovery habit is alive.

Weekly PM Chapter
45–60 min

All PMs together, across teams. One PM presents a real problem. The group challenges and discusses. No status updates, only craft discussions.

  • One PM presents a live challenge (10 min)
  • Group discusses and challenges (20 min)
  • Summary: what do we take forward? (5 min)
  • Rotating presenter each week
Observation
Monthly

Sit in on a user interview, usability test, or stakeholder presentation. Don't speak during the session. Give feedback afterward.

  • Does the PM ask open questions, or lead the witness?
  • Is the designer an active part of discovery?
  • Does the trio listen to the user, or confirm their own hypothesis?

Observation is the strongest coaching tool. A PM can talk about user insight in a 1:1 without actually doing good discovery; observation reveals reality.

Quarterly review
60 min

Structured review of the PM's development. Not a performance review, but a development conversation.

  1. What has the PM learned about their users this quarter? (15 min)
  2. Which decisions is the PM most/least satisfied with, and why? (15 min)
  3. What's the PM's key development area next quarter? (15 min)
  4. What does the PM need from the product lead to get there? (15 min)

From scaffolding to autonomy

Checkpoints are scaffolding for immature teams. Coaching is what makes teams eventually not need them.

Novice

Coach all four judgment areas. Observe frequently. Formal checkpoints with the product lead present.

Competent

Good problem understanding; needs help with prioritization and communication. Coach on solution and communication judgment. Checkpoints owned by the trio, product lead informed.

Autonomous

The trio makes good decisions independently. Challenge blind spots, don't coach. Sparring, not coaching. Informal alignment only. No formal checkpoints.

Graduation criteria: last 3 initiatives had good problem definition without correction, the trio identifies risks proactively, stakeholders trust the team's decisions, and the PM can articulate what she doesn't know as well as what she does.

From experience

The hardest coaching moment I've faced wasn't a performance problem. It was a pivot. The organization was two weeks from launching in one market when the decision came to abandon it entirely and pivot to a different domain. People had built relationships with customers. Many felt the original mission was more meaningful. Economic arguments alone, even correct ones, weren't enough. I had to address meaning: reframe the new domain as genuinely underserved, connect people's sense of purpose to the new direction, not just the new business case. One person chose to leave, and I respected that. The rest moved from resistance to engagement. The lesson: when change threatens people's sense of purpose, coaching that ignores the emotional layer and focuses only on logic will fail.

What works and what doesn't

Book clubs

Shared frames of reference, new perspectives. But limited transfer to daily work. Works as supplement, not foundation.

Conferences

Inspiration, network, exposure to practice outside your bubble. Expensive and variable value. Pick 1–2 per year with clear purpose.

Courses & certifications

Can fill specific knowledge gaps (data, technical, design). But rarely what develops product sense.

From experience

I tried book clubs, conferences, frameworks, and certifications. Some added perspective. None built product sense. What actually worked: weekly 1:1s where we discussed real decisions on real problems, and a PM chapter where one PM presented a live challenge and the group tore it apart constructively. I also learned where coaching fails: when someone follows every process step but never develops the judgment underneath it, involving users, scoping for MVP, questioning assumptions. That gap becomes visible when the coaching stops. Process compliance without judgment is the failure mode, and it taught me that coaching must be continuous and tied to real decisions, not just offered as optional support.

Anti-patterns

Common ways coaching goes wrong.

1:1 as status meeting

If you spend 1:1s asking "what are you working on this week?", you're wasting both people's time. That's what a project tool is for.

Coaching only on failure

If you only give feedback when something went wrong, the PM learns to avoid mistakes, not to make good decisions.

Observation without feedback

Sitting in on an interview without giving feedback afterward is wasted time for everyone.

Coaching as hidden gate

If the PM experiences 1:1 as a place to "sell" her decisions to you, you've created a hidden checkpoint, not a coaching relationship.

Scale

One team is a model. Ten teams is an organization.

The operating model describes how a single product trio works. A CPO runs an organization of teams. Everything changes: who attends checkpoints, how dependencies are managed, how planning works across boundaries.

Team topology · Dependencies · Multi-team planning

Team topology as strategic context

Team topology isn't an org chart exercise; it's a strategic decision. It belongs in Phase 00 alongside the strategic diagnosis.

Inputs to the topology decision:
  • Domain model / architecture
  • Customer journey
  • Strategic diagnosis

Outputs:
  • Each team's mandate and problem space
  • Stream-aligned teams as default; platform teams where justified

Reference: Team Topologies (Skelton & Pais): stream-aligned, platform, enabling, complicated-subsystem.

From experience

At a B2B SaaS company, I worked closely with the CTO and CXO to design team topology. We used architecture diagrams and the core domain model to define 6 value stream teams covering the full customer journey. Each team got a clear mandate and problem area. It took a lot of time, and it's genuinely hard. The real lesson: you won't get the topology right on paper. It's only when the model is in use that you see where the challenges are.

Dependencies are design feedback

Cross-team dependencies are the #1 scaling problem. They're not a coordination challenge; they're a signal that team boundaries might be wrong.

Minimize by design
  • Autonomous teams own their stack
  • Domain boundaries define team boundaries
  • If a dependency blocks you regularly, the boundary is wrong
When unavoidable
  • Explicit API contracts (interface, SLA, versioning)
  • Dependency visible before work starts
  • Escalation path defined upfront

Anti-pattern: If you need a "dependency manager" role, you have an organizational design problem, not a coordination problem.
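Where a dependency genuinely can't be designed away, "explicit" can mean one visible record per dependency, agreed before work starts. A sketch; every field and value is illustrative (it anticipates the catalog-pricing story below):

```python
from dataclasses import dataclass

@dataclass
class DependencyContract:
    consumer: str
    provider: str
    interface: str        # a versioned endpoint or event
    sla: str              # what the consumer can rely on
    needed_by: str        # surfaced at joint planning, not mid-sprint
    escalation_path: str  # agreed upfront, before anything blocks

pricing_needs_catalog = DependencyContract(
    consumer="Pricing team",
    provider="Catalog team",
    interface="GET /catalog/items/{id} (v2)",
    sla="p99 < 200 ms, 99.9% availability",
    needed_by="before Pricing's sprint 3",
    escalation_path="engineering leads, then CTO",
)
```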

From experience

We ended up with teams that were strongly dependent on each other. One team owned the core product catalog; another needed that data to calculate correct pricing. That coupling only became visible once the teams started working, not when we drew the topology on a whiteboard.

Cross-team planning

Planning across multiple teams. Not a roadmap exercise, but a strategic alignment exercise. The cadence depends on context: quarterly, every four months, or half-yearly.

1. Strategic update: teams set their own OKRs, always in context of a strategic update at company/product level.

2. Team planning: each team plans for themselves first. PM leads, full team involved.

3. Joint review: all teams review plans together. Tool: Teamoversikt template, covering strategy, OKRs, and explicit dependencies to and from other teams. Joint review surfaces dependencies early so they can be handled.

The CPO's role: drive the strategic context, facilitate the joint review, and ensure coherence, not approve plans.

From experience

Before we introduced joint reviews, each team planned in isolation. Dependencies surfaced mid-sprint as surprises: one team waiting on another's API, timelines colliding, nobody aware until it was too late. The fix was structural: each team still planned independently first, but then all teams reviewed plans together using a template that forced explicit dependency mapping: who needs what from whom, and by when. The first joint review was uncomfortable. Teams discovered collisions they'd been ignoring. But that discomfort was the point. It meant we were finding problems in planning, not in production.

How checkpoints work at scale

With many teams, the CPO can't attend every checkpoint. The answer depends on organization size.

Up to ~6 teams (CPO direct)

CPO attends key checkpoints, coaches PMs directly. Possible but demanding. Works when you have product ops to handle process and tooling.

Beyond 6 teams (product lead layer)

Product leads own checkpoints for their teams. CPO coaches product leads. Async briefing replaces attendance. CPO attends by exception.

The CPO's job shifts: from making product decisions to building the system that makes good product decisions.

A note on product ops: A product operations manager can handle process and tooling, freeing the CPO to focus on strategic context and coaching. This is a different split than adding a product lead layer: product ops runs the machinery, not the product work.

Anti-patterns at scale

Common ways scaling goes wrong.

CPO as super-PM

Attending every checkpoint, making every call, teams waiting for permission. You've become the bottleneck you were trying to remove.

Shared roadmap theater

Quarterly review where every team presents, nobody changes anything. Alignment without consequence is just a meeting.

Platform team without users

Building what they think is useful, no discovery with internal customers. A platform team should treat stream-aligned teams as their users.

Topology on paper only

Team boundaries that look clean in a diagram but don't match real domain coupling. The org chart says autonomous; the code says otherwise.

AI-Powered Tools

Three tools that operationalize the playbook

These tools apply the frameworks above to your real work. Powered by AI, grounded in the playbook's principles.

Assumption Mapper

Describe a product initiative and get your assumptions mapped across the four product risks, with criticality ratings and suggested experiments.

Discovery Debrief

Paste raw interview notes and get a structured synthesis: Jobs-to-be-Done, struggle moments, opportunity statements, and suggested placement in your Opportunity Solution Tree.

Product Coach

Ask anything about applying the playbook's 13 principles, six-phase process, four risks framework, and continuous discovery methodology to your real product challenges. Look for the ✦ Coach button.
