Thirteen convictions shaped by building products, leading teams, and learning, sometimes painfully, what actually works.
Not a universal framework. One product leader's operating philosophy. Adapt what's useful.
→ When the team doesn't agree on what's actually blocking growth, they can't make good prioritization decisions. Diagnosis is 80% of the work.
I led a complete market pivot. We diagnosed which market segment actually needed us and deliberately skipped the easiest growth path: scaling in a segment we'd already proven. The market window for larger customers set the pace, not our comfort level. The pivot succeeded in 11 months. It wouldn't have happened if we'd started with "grow 40%" instead of "why aren't we winning, and when does the window close?"
During a compressed delivery timeline, maintaining an explicit "what we won't do" list was the difference between shipping and spiraling. In another role, I chose one strategic customer segment over another, both valuable, both with strong business cases. "Both" wasn't realistic given our capacity. Saying that out loud, and choosing, unlocked the team.
→ Measure on outcomes, not output. Give the team the problem, not the solution.
The best product work I've seen happened when I gave the team a clear strategic diagnosis and an outcome to own, then stayed out of the solution. The worst happened when I gave vague direction and the team spent weeks guessing what leadership actually wanted. Strategic clarity isn't micromanagement; it's the precondition for autonomy.
→ Discovery is a trio discipline: PM, designer, and engineer. The designer often sees interaction patterns and hesitation that never get articulated in interviews.
Running seven design sprints on a single product area, speaking with 35 customers, I discovered something humbling: we weren't just wrong about specific answers; we were wrong about which questions mattered. Regular exposure doesn't just correct your answers. It corrects your questions.
→ Use the JTBD method. Never ask about hypothetical future behavior, only about past behavior.
We were about to retire a legacy information channel. The internal team was adamant: users depend on this. So we went to the locations. Observed actual behavior. Visited the target demographic directly. The physical artifacts sat unused in warehouses. Stated preferences and observed behavior told opposite stories. We retired the channel. Zero complaints.
At one company, customer success maintained a list of "blockers for adoption": what prevented customers from going all-in. Useful as a signal. Dangerous as a diagnosis. Our job in product was to look past the blocker and find the underlying problem. The list told us where to look. Never what to build.
We faced competing demands from two customer segments. One was loud and well-organized, but already reasonably well-served. The other was quieter but deeply underserved on capabilities that mattered to their daily work. We chose the underserved group. Zero churn on either side. We understood both segments well enough to know the deprioritized one could wait. The loudest customers aren't always the most underserved.
→ Most product teams can't answer a basic question: are we delivering more value than we cost? Seven highly paid people, no clear view on impact. If you can't connect your work to a customer outcome, you're guessing, and expensive guessing doesn't scale.
When we replaced a 30-year legacy system, the question was never "how many features have we shipped?" The question was: can this customer run their entire operation on our product, without keeping the old system as a parallel backup? That single criterion, operational viability, shaped every team's priorities for a year. Features that didn't move customers toward full adoption got deprioritized, regardless of how "ready" they were.
→ The internal roadmap is a learning tool. The external roadmap is a trust instrument. Build trust by committing to less and delivering it consistently, not by presenting an ambitious timeline you'll quietly revise later.
I once presented a roadmap with fuzzy dates to a key customer. The room went cold. They needed to plan their own implementation, train their staff, budget for the transition. "Directional" meant nothing to them; they heard "we don't know." That taught me: the product team's uncertainty is real, but it's yours to manage. The customer deserves a commitment you can keep, not a window into your internal planning process.
Earlier, I'd inherited a product where the backlog was a graveyard of sales promises: features committed to customers before engineering ever saw them. The roadmap had become a contract. The fix wasn't better estimation. It was changing who decides what gets built, and learning to say "this is what we're delivering this quarter" instead of either overpromising or hedging.
→ For every initiative added to the roadmap: what are we removing, and are we agreed this is more important?
At a 4-person startup running a platform across 5 Nordic markets, every customer had legitimate needs: different pricing models, different integration requirements. Positive ROI cases everywhere. But a team of four can't serve five markets equally. We had to choose which needs to build for now and which to defer, knowing that the format we used to gather needs (top-10 lists per customer) was itself creating false expectations. The real discipline wasn't saying no. It was redesigning how we collected input so that "no" didn't feel like betrayal.
I watched someone follow our PRD template meticulously: every section completed, every field filled. The document was flawless. The thinking was absent. That's when I understood: templates can become a substitute for the work they're supposed to capture.
OKRs are the same trap, and early in my career, I fell into it. Days in offsites debating objectives that became wishlists. Half abandoned by mid-quarter. The problem was never the OKR framework. It was setting objectives before doing the diagnosis. When you skip the strategic thinking and jump straight to "let's set an OKR for that," you're guessing with confidence. The teams that used OKR-writing as a way to surface weak strategic thinking got sharp goals in hours. The teams that used it as a planning ritual got polished documents and no direction.
But this doesn't mean strategy needs to be slow. I treat strategy like a product: ship an MVP, a one-pager on day one, then iterate. An 80% strategy you can test and strengthen beats a perfect document that arrives three months late. The writing surfaces where your thinking is weak. Stakeholders can react to something concrete instead of debating abstractions. Build, measure, learn, applied to strategy itself.
We pivoted into a new market and delivered a complete platform replacement in 11 months. The whole company understood why speed mattered. A new commercial leader made it brutally clear: time to market isn't a metric. It's survival. That understanding changed everything upstream. Problem definitions were written before solutions were discussed. Strategic context was shared company-wide. Every team knew exactly what they were solving and why it mattered now. Product idealism quietly stepped aside. The market window wouldn't wait. The product organization exists to deliver on business goals. That lesson changed how I prioritize permanently.
→ Test regularly: can team members explain why we prioritize what we do? Can they say no to a stakeholder request with a good reason? If not, the strategy work isn't done.
The clearest test I know: sit in on a stakeholder meeting where the team is asked to take on something off-strategy. If they can explain why it doesn't fit, without checking with you first, the strategic clarity is real. If they hesitate, defer, or say yes to avoid conflict, the strategy only lives in your head. I've seen both. The difference is never the team's capability. It's whether the leader did the work of making the strategy shared.
→ Never ask "did we fill out the template?" Ask "do we understand enough to make a good decision?" Process is scaffolding, not the goal.
I never insisted on perfect PRDs. Sometimes we didn't write one at all. The templates and processes I put in place were training wheels: scaffolding for PMs and teams who needed structure to develop their thinking, not a compliance checklist. The best teams outgrew the templates quickly. They still used the agreed formats, but loosely — one PM basically ran discovery documentation through Slack canvases, and it worked because the thinking was sharp. The teams that struggled weren't the ones who skipped the template. They were the ones who filled it out without thinking.
Understanding customer problems and testing potential solutions before committing to build. Runs parallel to Delivery, always. Answers: Are we building the right thing?
Building and delivering validated solutions with high quality and predictable pace. Answers: Are we building it right?
A customer need, problem, or desire that, if addressed, would move a meaningful outcome. Framed as a problem, not a solution. "Users struggle to understand their own spending" is an opportunity. "Add a spending dashboard" is a solution.
A measurable change in customer behavior or business results. Not a feature or deliverable. "30-day retention increases from 40% to 55%" is an outcome. "Launch onboarding wizard" is output.
A new understanding that changes what we believe we should build. Not a data point, but a conclusion. "Users churn" is not an insight. "Users who don't complete the core workflow in their first session have 4x higher churn" is an insight.
The specific, concrete episode where a customer experienced a problem with the current solution. The unit of insight in discovery. Named after a real person, in a real context, with real friction.
A structure linking outcome → opportunities → solutions → experiments. Keeps Discovery connected to strategy. Prevents insights from remaining loose anecdotes.
How much time the team is willing to spend on a problem, not how long it will take. Constrains scope rather than growing with it. Ryan Singer's concept from Shape Up.
| Avoid | Use instead |
|---|---|
| "MVP" | "Experiment" or "smallest bet that tests this assumption" |
| "Requirements" | "Opportunity" or "problem definition" |
| "We know our customers" | "We talked to N customers in the last 30 days" |
| "Best practice" | "What worked at [company] in [context]" |
| "Agile" | Name the specific practice: sprint, Shape Up, kanban |
Product work isn't a collection of independent decisions. Each step shapes what's possible in the next. This is the sequence I follow, and the one this playbook is built around.
What's actually blocking progress? Not goals, not aspirations. The specific obstacle. This diagnosis shapes every decision downstream.
Does this delight customers? Is it hard to copy? Does it improve the business? If it doesn't pass, it doesn't compete for attention, no matter how many people ask for it.
Talk to customers. Understand what they actually do, not what they say they want. Map opportunities to outcomes. The struggling moment is where insight lives.
Score what you've validated. Sequence in three horizons: committed, directional, exploratory. Protect the high-leverage work from the tyranny of what's urgent.
Write the problem statement first. If you can't articulate why this matters, you're not ready. Map assumptions. Test the riskiest one. Kill what doesn't survive.
Set how much time this is worth. Shape the scope to fit. If it doesn't ship in the budget, stop, don't extend. That's a signal, not a failure.
After launch: go back to step 1. Does the portfolio build durable advantage, or are we shipping things anyone can copy?
Four risks to manage and a strategy kernel to align on. Every product decision passes through this lens.
Four Product Risks · Strategy Kernel · Prioritization · Anti-patterns
All product decisions are risk management decisions. These four risks must be addressed early, not after something is built. Every checkpoint is fundamentally a check on the status of these four risks.
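To make that concrete, here is a minimal sketch of a checkpoint as a risk check. The four risk names follow the playbook; the status ladder and the go/no-go rule are illustrative assumptions, not a prescribed implementation.

```python
from enum import Enum

class Risk(Enum):
    VALUE = "will customers use it?"
    USABILITY = "can they figure out how?"
    BUSINESS = "does it work for our business?"
    TECH = "can we build it?"

class Status(Enum):
    ASSUMPTION = 0   # team belief only
    EVIDENCE = 1     # some signal: interviews, data, prototype test
    VALIDATED = 2    # tested with real users or real numbers

def checkpoint(risks: dict[Risk, Status]) -> bool:
    """Go/no-go: every checkpoint is a check on the four risks.
    No initiative moves forward while any risk is a pure assumption."""
    untested = [r.name for r, s in risks.items() if s is Status.ASSUMPTION]
    if untested:
        print("No-go. Still pure assumption:", ", ".join(untested))
        return False
    return True
```

The point isn't the code; it's that "go" is a claim about evidence on all four risks, not about completed features.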
Diagnose the critical challenge the company faces and formulate a coherent strategic direction, before teams set OKRs. OKRs flow FROM the diagnosis, not the other way around.
These feel like strategic work. They're not. I've fallen into most of these, the first one especially.
| Anti-pattern | What it looks like | Fix |
|---|---|---|
| Goals as strategy | "Grow 40% this year" with nothing about how, or what stands in the way | Diagnosis first: what's actually blocking growth? |
| Vision without strategy | Mission deck, values wall, no choices made | Skip the vision deck. Start with the diagnosis. |
| Procrastination as busyness | "I don't have time for strategy," calendar full of status meetings | Usually fear of formulating a bad strategy. Name it, and block 2 days per quarter. |
| ROI-only prioritization | "Is this worth doing?" and the answer is always yes | "Is this the best use of our time?" Opportunity cost frame. |
Prioritization isn't a scoring exercise; it's a series of filters. Each layer removes options that shouldn't compete for attention.
Does this opportunity align with the strategic diagnosis? If the guiding policy says "focus on segment X," a feature that only serves segment Y doesn't pass, regardless of how many customers ask for it.
If an opportunity doesn't fit any category, or only fits "internal hygiene," question whether it belongs in this cycle.
Before scoring solutions, assess the problem. Two dimensions matter:
How important is this job-to-be-done for users, and how well is it served today? High importance + low satisfaction = real opportunity. Low importance or high satisfaction = noise, regardless of volume.
Kano categories shift over time. Today's delighter is next year's must-have. Reassess each cycle.
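One common way to turn the importance–satisfaction screen into a number is Ulwick's opportunity score: importance plus the satisfaction gap. A minimal sketch, assuming 1–10 ratings from surveys or interviews; the formula is a widely used convention rather than something this playbook prescribes, and the jobs below are invented for illustration.

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Ulwick-style opportunity score on 1-10 scales:
    importance plus the unmet-need gap (never negative)."""
    return importance + max(importance - satisfaction, 0)

jobs = {
    "reconcile monthly invoices": (9, 3),  # important, badly served -> real opportunity
    "export data to spreadsheet": (4, 8),  # well served -> noise, whatever the volume
    "customize dashboard colors": (3, 2),  # underserved but unimportant -> still noise
}

for job, (imp, sat) in sorted(jobs.items(),
                              key=lambda kv: -opportunity_score(*kv[1])):
    print(f"{opportunity_score(imp, sat):4.1f}  {job}")
```

High importance plus low satisfaction tops the list mechanically; volume of requests never enters the calculation.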
Once you've identified real opportunities, score the proposed solutions. ICE is simple enough to be useful, transparent enough to be challenged.
How much will this move the key result? Grounded in evidence, not hope. Before you score, write one sentence: what specific difference does this make for a real user? "Operations staff complete the core workflow in 2 clicks instead of 7" is impact. "Impact: 8" is not. If you can't describe it in plain language, you don't understand it well enough to score it.
How strong is our evidence? Validated with users = high. Team assumption = low.
How much does it cost to build and ship? Include design, dev, QA, rollout.
ICE = (Impact × Confidence) / Effort. The score is a conversation starter, not a decision. If the top-scoring item doesn't feel right, interrogate the scores, don't ignore the feeling.
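As a sketch, the formula above in runnable form. Note this variant divides by effort rather than multiplying by ease; the scales and candidate solutions are illustrative, with the impact claim written in plain language first, as argued above.

```python
from dataclasses import dataclass

@dataclass
class Solution:
    name: str           # the plain-language impact claim, not a feature label
    impact: float       # 1-10, grounded in evidence
    confidence: float   # 1-10, tied to actual evidence, not consensus
    effort: float       # person-weeks incl. design, dev, QA, rollout

    @property
    def ice(self) -> float:
        return (self.impact * self.confidence) / self.effort

candidates = [
    Solution("ops staff finish core workflow in 2 clicks, not 7",
             impact=8, confidence=7, effort=6),
    Solution("onboarding wizard (team assumption, no user evidence)",
             impact=5, confidence=3, effort=4),
]

for s in sorted(candidates, key=lambda s: -s.ice):
    print(f"{s.ice:5.1f}  {s.name}")
```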
Prioritization is incomplete without saying what you choose not to do. Every planning cycle, write down 2–3 things the team will explicitly not pursue, even if they're important. This protects focus and gives the team permission to say no.
Don't score what you haven't validated. Track where each opportunity sits:
Scoring an "idea" with ICE produces fiction. Move it to "validated problem" first, then score.
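A sketch of that guard, assuming a simple three-stage ladder. The stage names echo the ones above; the rest is illustrative.

```python
from enum import IntEnum

class Stage(IntEnum):
    IDEA = 0                # someone suggested it
    VALIDATED_PROBLEM = 1   # evidence the problem is real and important
    TESTED_SOLUTION = 2     # prototype or experiment with real users

def ice_score(impact: float, confidence: float,
              effort: float, stage: Stage) -> float:
    """Refuse to produce fiction: ideas don't get scores."""
    if stage < Stage.VALIDATED_PROBLEM:
        raise ValueError("Unvalidated idea: validate the problem, then score.")
    return (impact * confidence) / effort
```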
Early on, we tried scoring everything in a spreadsheet (impact, effort, reach, confidence) for 80+ items. Nobody trusted the scores. They were gut feel dressed up as math. What worked: first filtering by strategic fit (does this serve the pivot?), then assessing the problem (is this important and underserved?), and only then scoring the remaining solutions. The filtering removed 60% of items before we ever scored anything. The conversations got better because we were comparing 8 opportunities, not 80.
80 items in a scoring spreadsheet where every item is 7/10. If the scores don't create clear separation, the framework isn't helping; it's hiding the hard conversation.
Prioritizing based on who complains most. The loudest customers aren't the most underserved; they're the most organized. Importance × satisfaction reveals what volume doesn't.
Impact 8, Confidence 7. Based on what? If the confidence score isn't tied to actual evidence (user interviews, data, experiments), ICE becomes a consensus exercise, not a prioritization tool.
A priority list without explicit non-goals is a wish list. If you can't name what you're choosing not to do, you haven't actually prioritized.
Discovery is a weekly habit, like exercise. Quarterly research sprints are too slow. By the time insights are synthesized, the team has already built something based on assumptions.
Triage · Dual-track · Discovery rhythm · Opportunity Solution Tree · Anti-patterns
Not all work needs discovery. The first question is: are the problem and the solution already known?
All three must be true:
Requirements
One or more of these are true:
Follow the six phases in the Toolkit with three checkpoints. No initiative moves forward without explicit go/no-go.
The goal isn't a rigid weekly schedule. It's that the trio always has fresh customer insight to work from. Not stale research. Not assumptions. Recent contact with real users.
At least one customer interaction per week is the ambition. The trio (PM, designer, engineer) shares the exposure. In B2B, access often runs through CS and Sales. What matters is that someone on the team talked to a real user recently.
10 minutes after every call. What surprised me? What quote do I want to remember? Was my hypothesis confirmed, challenged, or more complex than I thought? What changes in the opportunity tree? Insight decays fast. Capture it while it's sharp.
One key learning shared with the team each week: one paragraph, one insight. Update the OST. Note which assumptions were confirmed or challenged. This keeps discovery visible and accountable without ceremony.
The structure that keeps Discovery connected to strategy. Every customer insight lands here. Prevents findings from remaining loose anecdotes.
Each opportunity generates assumptions. The riskiest assumption gets tested first. The tree only works if it's visible to the trio and reflects what the team currently believes, not what they believed last quarter.
Always test 2+ solutions per opportunity. Testing one solution invites confirmation bias. Comparing two forces you to understand which works better and why.
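A minimal sketch of the tree as a data structure, with the two rules above (riskiest assumption first, two-plus solutions per opportunity) expressed as checks. The field names and the risk-minus-evidence heuristic are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Assumption:
    statement: str
    risk: int       # 1-10: how bad is it if we're wrong?
    evidence: int   # 1-10: how much do we already know?

@dataclass
class SolutionIdea:
    name: str
    assumptions: list[Assumption] = field(default_factory=list)

@dataclass
class Opportunity:
    unmet_need: str  # framed as a problem, never a solution
    solutions: list[SolutionIdea] = field(default_factory=list)

@dataclass
class OpportunitySolutionTree:
    outcome: str     # measurable behavior change at the root
    opportunities: list[Opportunity] = field(default_factory=list)

    def next_experiment(self, opp: Opportunity) -> Assumption:
        """Test the riskiest, least-evidenced assumption first."""
        pool = [a for s in opp.solutions for a in s.assumptions]
        if not pool:
            raise ValueError("no assumptions mapped yet")
        return max(pool, key=lambda a: a.risk - a.evidence)

    def lint(self) -> list[str]:
        """Flag opportunities that invite confirmation bias."""
        return [f"'{o.unmet_need}': needs 2+ solutions, has {len(o.solutions)}"
                for o in self.opportunities if len(o.solutions) < 2]
```

A tree like this only earns its keep if it's shared and current; the structure is trivial, the discipline isn't.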
The tree lives in the PM's personal notebook. The trio has never seen it. Discovery insights go in, nothing comes out. A filing cabinet, not a thinking tool.
These are the ways discovery goes wrong without anyone noticing. Each one feels like progress but produces no real insight. I've seen every one of these in practice, some more than once.
| Anti-pattern | What it looks like | Fix |
|---|---|---|
| Validation theater | "Run a quick study to validate," at the end, after the decision is made | Falsify, don't validate. Design research to prove yourself wrong. |
| Middle-range research | Interesting findings that don't change any decision | Go macro (strategic) or micro (usability). Kill the middle. |
| Opinion interviewing | "Would you use this?" "Do you like this feature?" | Only past behavior counts. "Walk me through what you did last time." |
| Shallow interviews | "Great, tell me another story," breadth without depth | Follow the thread. "What happened next?" "Why did you do it that way?" |
| Solution-as-opportunity | "Add a dashboard" appears as an opportunity in the OST | Reframe as unmet need: "Users can't see their progress." |
| Feature-request backlog | Customers said they want X, so it goes on the roadmap | Check: did anyone actually churn over this? Stated need ≠ real need. |
Process phases, operating rhythm, and templates. Use what's relevant, skip the rest.
6-phase process · Operating rhythm · Templates
Six phases from Objective to Launch. Strategy lives in Decisions. Route work via triage first; not everything needs all six phases.
Define clear, measurable ambitions for the team for the coming period, derived from the strategic kernel, so everyone knows what they're working toward and why.
Five rhythms that keep the model alive. Three are checkpoints. Nothing moves forward without an explicit decision. For people development, see People.
Formal checkpoints work best as training wheels, not guardrails. Mature teams do informal alignment naturally; less experienced teams use the structure as scaffolding until they don't need it. If you still need formal gates after a year, the problem isn't process. It's trust.
Set up the team for the coming period. Agree on OKRs. Clarify focus and non-focus.
Confirm the problem is well enough defined to begin designing solutions. The trio owns the decision. Stakeholders are invited for input, not approval.
Review the proposed solution with key stakeholders and confirm alignment, before engineering starts development. Errors found here cost the least to fix.
Ensure everything is in place for a controlled launch: product, documentation, communication, and the entire organization. Not a technical check, but a coordination check.
Evaluate the effect of what we launched, understand why results came out the way they did, and decide what to do next. Not a results presentation, but a learning session.
Deviating from goals is not failure; it's information. Failure is not learning from it.
Coaching is an operating rhythm: weekly 1:1s, monthly observation, quarterly reviews. See People for the full coaching model.
Templates support the playbook. They're not the playbook itself. Use the sections that are relevant and skip the rest.
Intentionally short (one page). Diagnosis of the critical challenge, guiding policy with explicit trade-offs, and 2–4 coherent actions. Input to OKR-setting.
OKRs derived from the strategic kernel, learning goals, focus, non-focus, and dependencies. Single source of truth for team direction.
Structured guide using JTBD method. Explore behavior, not preferences. Identify struggle moments. Use every week, not just at project start.
Take an opportunity from the OST and scope it for action. Connects the user insight to a business case, importance–satisfaction analysis, and a clear recommendation on whether to pursue.
Complete opportunity formulation: the checkpoint between discovery and PRD. Connects user evidence to business case with clear problem statement and success criteria.
Map and prioritize critical assumptions by risk level and evidence. Design experiments to test the riskiest ones first. Input to Solution Review.
Identify risks the team knows about but hesitates to raise. Run in the last 20 minutes of kickoff. Surfaces blockers before they become surprises.
Document Go/No-go decisions with risk assessment and conditions. Used at all three checkpoints. Same template, different context. Archive with the initiative.
Product Requirements Document. Write to communicate, not to document. Specific enough for engineers to challenge. The checkpoint artifact for both Kickoff (problem) and Solution Review (solution).
Document findings from user testing: patterns, quotes, and severity. Key input to Checkpoint 2: Solution Review. Keeps evidence structured and actionable.
Appetite, building blocks, and rabbit holes. Clarifies scope and trade-offs before build starts. Bridges the gap between validated solution and sprint planning.
Plan and align the team around sprint goals, focus areas, and acceptance criteria. Keeps scope decisions visible. Updated each sprint.
Pre-launch coordination document. Aligns Sales, CS, Support, and Marketing. GTM readiness checklist. The key artifact for Launch Readiness checkpoint.
Metrics vs. goals, root cause analysis, and learning notes. Closes the loop on every initiative. Run 2–4 weeks after launch in Impact Review.
Process without coaching is compliance, not product development. The model is half the job. Judgment is the other half, and judgment is developed through coaching.
What coaching is · Four judgment areas · Coaching rhythm · Maturity model · Anti-patterns
A PM who follows every gate, template, and ritual perfectly but never involves users, never scopes for MVP, is a project manager with a fancier title. The difference is judgment. And judgment isn't built by process alone.
I gave a vocal skeptic full ownership of building our operating model. He followed every process step perfectly (gates, templates, rituals), but when he stepped into a PM role temporarily, he never involved users and never scoped for MVP. The model is half the job. Coaching is the other half.
Key principle: coaching is about making the PM better, not making the product better this week. The product gets better as a consequence.
You don't coach on process compliance. You coach on judgment in four areas.
Can they separate symptoms from root causes? Do they prioritize based on evidence, not the loudest voice?
Weak sign: jumps to solutions before the problem is understood. "The customer said they want X" without asking why.
Do they understand users deeper than users understand themselves? Jobs-to-be-done. Struggle moments. Context.
Weak sign: quotes what users said without interpreting what they meant. Takes feature requests literally.
Do they evaluate against the four risks (value, usability, business, tech)? Scope for MVP? Kill their own bad ideas?
Weak sign: married to the first solution. No compare-and-contrast. Scope grows without anyone saying stop.
Can they explain why, not just what? Adapt the message for the trio, stakeholders, leadership, customers?
Weak sign: PRD nobody understands. Stakeholders get surprised. Engineering doesn't know why they're building it.
A PM on my team was drowning. First enterprise customer, a flood of feature requests channeled through Customer Success, and pressure to deliver on everything. He was trying to prioritize between twenty "critical" items, struggling to decide on the right solution for any of them. In our 1:1, I didn't help him prioritize the list. I told him to visit the customer. Go sit with the actual users. Watch them work. He came back with several eureka moments: half the requests from CS didn't match what users actually struggled with, and the problems they did have pointed to a fundamentally different solution than what was on the backlog. That one visit reshaped both the priorities and the approach. No framework did that. Direct user contact did.
Four cadences that keep people development alive.
Not a status update. The PM owns the agenda. The product lead listens, asks questions, gives feedback.
Q1 reveals decision quality. Q2 reveals self-awareness. Q3 reveals whether the discovery habit is alive.
All PMs together, across teams. One PM presents a real problem. The group challenges and discusses. No status updates, only craft discussions.
Sit in on a user interview, usability test, or stakeholder presentation. Don't speak during the session. Give feedback afterward.
Observation is the strongest coaching tool. A PM can talk about user insight in a 1:1 without actually doing good discovery; observation reveals reality.
Structured review of the PM's development. Not a performance review, but a development conversation.
Checkpoints are scaffolding for immature teams. Coaching is what makes teams eventually not need them.
Coach all four judgment areas. Observe frequently. Formal checkpoints with product lead present.
All four areas. Frequent observation.
Good problem understanding. Needs help with prioritization and communication.
Solution & communication judgment. Checkpoints owned by trio, product lead informed.
The trio makes good decisions independently. Challenge blind spots, don't coach.
Sparring, not coaching. Informal alignment only. No formal checkpoints.
Graduation criteria: last 3 initiatives had good problem definition without correction, the trio identifies risks proactively, stakeholders trust the team's decisions, and the PM can articulate what she doesn't know as well as what she does.
The hardest coaching moment I've faced wasn't a performance problem. It was a pivot. The organization was two weeks from launching in one market when the decision came to abandon it entirely and pivot to a different domain. People had built relationships with customers. Many felt the original mission was more meaningful. Economic arguments alone, even correct ones, weren't enough. I had to address meaning: reframe the new domain as genuinely underserved, connect people's sense of purpose to the new direction, not just the new business case. One person chose to leave, and I respected that. The rest moved from resistance to engagement. The lesson: when change threatens people's sense of purpose, coaching that ignores the emotional layer and focuses only on logic will fail.
Shared frames of reference, new perspectives. But limited transfer to daily work. Works as supplement, not foundation.
Inspiration, network, exposure to practice outside your bubble. Expensive and variable value. Pick 1–2 per year with clear purpose.
Can fill specific knowledge gaps (data, technical, design). But rarely what develops product sense.
I tried book clubs, conferences, frameworks, and certifications. Some added perspective. None built product sense. What actually worked: weekly 1:1s where we discussed real decisions on real problems, and a PM chapter where one PM presented a live challenge and the group tore it apart constructively. I also learned where coaching fails: when someone follows every process step but never develops product judgment (involving users, scoping for MVP, questioning assumptions). That gap becomes visible when the coaching stops. Process compliance without judgment is the failure mode, and it taught me that coaching must be continuous and tied to real decisions, not offered as optional support.
Common ways coaching goes wrong.
If you spend 1:1s asking "what are you working on this week?", you're wasting both people's time. That's what a project tool is for.
If you only give feedback when something went wrong, the PM learns to avoid mistakes, not to make good decisions.
Sitting in on an interview without giving feedback afterward is wasted time for everyone.
If the PM experiences 1:1 as a place to "sell" her decisions to you, you've created a hidden checkpoint, not a coaching relationship.
The operating model describes how a single product trio works. A CPO runs an organization of teams. Everything changes: who attends checkpoints, how dependencies are managed, how planning works across boundaries.
Team topology · Dependencies · Multi-team planning
Team topology isn't an org chart exercise; it's a strategic decision. It belongs in Phase 00 alongside the strategic diagnosis.
Reference: Team Topologies (Skelton & Pais): stream-aligned, platform, enabling, complicated-subsystem.
At a B2B SaaS company, I worked closely with the CTO and CXO to design team topology. We used architecture diagrams and the core domain model to define 6 value stream teams covering the full customer journey. Each team got a clear mandate and problem area. It took a lot of time, and it's genuinely hard. The real lesson: you won't get the topology right on paper. It's only when the model is in use that you see where the challenges are.
Cross-team dependencies are the #1 scaling problem. They're not a coordination challenge; they're a signal that team boundaries might be wrong.
Anti-pattern: If you need a "dependency manager" role, you have an organizational design problem, not a coordination problem.
We ended up with teams that were strongly dependent on each other. One team owned the core product catalog; another needed that data to calculate correct pricing. That coupling only became visible once the teams started working, not when we drew the topology on a whiteboard.
Planning across multiple teams. Not a roadmap exercise, but a strategic alignment exercise. The cadence depends on context: quarterly, every four months, or half-yearly.
Teams set their own OKRs, always in context of a strategic update at company/product level.
Each team plans for itself first. PM leads, full team involved.
All teams review plans together. Tool: the Team Overview template, covering strategy, OKRs, and explicit dependencies to and from other teams. Joint review surfaces dependencies early so they can be handled.
Drive the strategic context, facilitate the joint review, ensure coherence. Don't approve plans.
Before we introduced joint reviews, each team planned in isolation. Dependencies surfaced mid-sprint as surprises: one team waiting on another's API, timelines colliding, nobody aware until it was too late. The fix was structural: each team still planned independently first, but then all teams reviewed plans together using a template that forced explicit dependency mapping: who needs what from whom, and by when. The first joint review was uncomfortable. Teams discovered collisions they'd been ignoring. But that discomfort was the point. It meant we were finding problems in planning, not in production.
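A sketch of what that explicit dependency mapping can look like as data, so collisions surface mechanically in the joint review instead of mid-sprint. The team names and fields are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dependency:
    consumer: str     # who needs it
    provider: str     # who must deliver it
    deliverable: str  # what, concretely
    needed_by: str    # sprint or date

def unacknowledged(declared_needs: list[Dependency],
                   committed: set[tuple[str, str]]) -> list[Dependency]:
    """Needs no provider has committed to: raise these in the joint review."""
    return [d for d in declared_needs
            if (d.provider, d.deliverable) not in committed]

needs = [
    Dependency("Pricing", "Catalog", "product-data API v2", "Sprint 14"),
    Dependency("Onboarding", "Platform", "SSO rollout", "Sprint 15"),
]
committed = {("Catalog", "product-data API v2")}

for d in unacknowledged(needs, committed):
    print(f"COLLISION: {d.consumer} needs '{d.deliverable}' "
          f"from {d.provider} by {d.needed_by}, not in their plan.")
```

Recurring collisions in this map are also the signal from the dependency section above: the boundary, not the coordination, may be wrong.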
With many teams, the CPO can't attend every checkpoint. The answer depends on organization size.
CPO attends key checkpoints, coaches PMs directly. Possible but demanding. Works when you have product ops to handle process and tooling.
Product leads own checkpoints for their teams. CPO coaches product leads. Async briefing replaces attendance. CPO attends by exception.
The CPO's job shifts: from making product decisions to building the system that makes good product decisions.
A note on product ops: A product operations manager can handle process and tooling, freeing the CPO to focus on strategic context and coaching. This is a different split than adding a product lead layer: product ops runs the machinery, not the product work.
Common ways scaling goes wrong.
Attending every checkpoint, making every call, teams waiting for permission. You've become the bottleneck you were trying to remove.
Quarterly review where every team presents, nobody changes anything. Alignment without consequence is just a meeting.
Building what they think is useful, no discovery with internal customers. A platform team should treat stream-aligned teams as their users.
Team boundaries that look clean in a diagram but don't match real domain coupling. The org chart says autonomous; the code says otherwise.
These tools apply the frameworks above to your real work. Powered by AI, grounded in the playbook's principles.
Describe an initiative, get assumptions mapped across the four product risks.
Paste interview notes, get a structured synthesis with opportunity statements.
Ask anything about applying these frameworks to your challenges. Look for the ✦ Coach button.
Describe a product initiative and get your assumptions mapped across the four product risks, with criticality ratings and suggested experiments.
Paste raw interview notes and get a structured synthesis: Jobs-to-be-Done, struggle moments, opportunity statements, and suggested placement in your Opportunity Solution Tree.