AI as a Discovery Engine: Using Language Models to Accelerate Assumption Testing
Every team member now has access to a tool that can generate dozens of feature ideas in seconds. A product manager prompts ChatGPT with a product description and a user problem and receives twelve well-articulated feature concepts. A designer asks Claude to brainstorm interaction patterns and receives twenty annotated approaches.
The AI Idea Flood: How Agile Teams Stay Outcome-Focused When Everyone Has a Chatbot
Product discovery has always had a time problem. The research activities that produce the most reliable insights — user interviews, prototype testing, behavioral analysis — are time-consuming. A typical discovery sprint that includes recruiting, interviewing, synthesis, and decision-making takes two to three weeks from first question to actionable finding. In a two-week sprint cycle, discovery that takes three weeks is discovery that always arrives a sprint late.
The Infinite Machine Problem: When AI Can Ship Everything, How Do You Decide What's Worth Building?
For most of product development's history, the binding constraint was production. Building software was expensive, slow, and required specialized skill. The cost of production forced prioritization: you could not build everything, so you had to decide what was worth building. That constraint was uncomfortable, but it was also useful.
Synthetic Users: How to Run AI-Simulated Customer Interviews (and When Not To)
The promise of AI-simulated customer research is seductive: instead of spending two weeks recruiting, scheduling, and interviewing twelve users, ask an AI to respond as each of them would. Instant feedback. Unlimited iterations. Zero scheduling overhead.
Why Lean UX Is More Valuable in an AI World, Not Less
Every major technology transition produces a version of the same organizational mistake: companies invest heavily in the new capability and assume that the new capability will solve the problems that preceded it. The internet was going to make marketing so efficient that wasteful campaigns would self-select out.
When AI Writes the Code, Humans Must Still Define the Problem
The engineering profession is undergoing a transition that makes some engineers anxious and most leadership teams uncertain about what engineering looks like in two years. AI code generation tools — copilots, agent-based development systems, natural language-to-code pipelines — are changing the economics of implementation fast enough that the forty-year-old assumption of stable engineering work is no longer reliable.
AI-Powered Continuous Research: Synthesizing Customer Signals at Scale
Every product generates far more customer signal than any product team has historically been able to process. Support tickets describe specific failure modes. App store reviews articulate user expectations and disappointments. NPS survey comments explain why users are recommending or warning against the product.
The Personalization Trap: Why More AI Data Doesn't Automatically Produce Better Products
Personalization is one of the most seductive promises in AI-augmented product design. The pitch is compelling: instead of designing for an average user who does not actually exist, AI can adapt the product to each individual user's behavior, preferences, and context in real time.
Building the AI-Native Product Discipline: Discovery, Outcomes, and Iteration in the Age of Generative Tools
Every product team right now is having some version of the same conversation: how do we become AI-native? The conversation is often framed around tools — which AI assistants to adopt, which code generation systems to deploy, which research automation platforms to subscribe to.
'Fat Marker' Sketches: Why Low Fidelity Wins in Agile Teams
There is a paradox at the heart of professional design tools. The more polished and refined a design looks — the more it resembles finished software — the less useful it is for getting honest feedback and generating genuine collaboration. Polished mockups communicate finality.
Hiring for Outcome Mindset: What to Look for in Product Managers Who Think in Behaviors
The most consequential talent decision a CPO makes is not which VP of Product to hire. It is the accumulation of individual product manager hiring decisions that shapes the team's collective capability. A team of product managers who think in outputs — who measure success by features shipped and roadmap items completed — will produce an output-optimized product organization regardless of how clearly the CPO articulates an outcome-based strategy.
Facilitating the Lean UX Canvas: A Workshop Guide for Agile Coaches
The Lean UX Canvas, created by Jeff Gothelf, is a structured one-page framework for aligning product teams around the business problem, user outcomes, assumptions, and experiments that define an initiative before any design or development work begins.
The 'Feature Fake': Testing Demand Without Wasting Engineering Time
Building software is expensive. A single feature — from design through QA to deployment — can consume weeks of cross-functional team time. And yet, the most common failure mode in product development is not bad execution.
Fixing Broken Standups: How to Run a Daily Sync That Actually Surfaces Blockers
The daily standup is agile's most universally practiced ceremony and its most universally abused one. In its original form, it was a team-level coordination tool: a brief, standing meeting where developers shared what they worked on yesterday, what they plan to work on today, and — critically — what is blocking their progress.
Feature Flags as Learning Infrastructure: How Engineering Enables Lean Experimentation
Feature flags — also called feature toggles or feature switches — are a deployment pattern that separates code deployment from feature activation. Code that implements a new feature is deployed to production but kept inactive behind a flag; the flag is then turned on selectively for specific users, user segments, or percentage rollouts without any additional deployment.
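The pattern described above can be sketched in a few lines. This is a minimal illustration, not any particular flag service's API: the class, flag names, and user IDs are invented for the example. It shows the two activation modes the paragraph names — explicit user targeting and deterministic percentage rollouts — with deployed code that stays inactive until the flag allows it.

```python
import hashlib

class FeatureFlags:
    """Minimal in-memory flag store supporting user allowlists
    and deterministic percentage rollouts (illustrative only)."""

    def __init__(self):
        self._flags = {}  # flag name -> {"users": set, "rollout_pct": int}

    def define(self, name, users=None, rollout_pct=0):
        self._flags[name] = {"users": set(users or []),
                             "rollout_pct": rollout_pct}

    def is_enabled(self, name, user_id):
        flag = self._flags.get(name)
        if flag is None:
            return False  # unknown flags default to off
        if user_id in flag["users"]:
            return True   # explicitly targeted user
        # Hash the user into a stable 0-99 bucket so the same user
        # stays inside (or outside) the rollout across requests.
        bucket = int(hashlib.sha256(f"{name}:{user_id}".encode())
                     .hexdigest(), 16) % 100
        return bucket < flag["rollout_pct"]

flags = FeatureFlags()
flags.define("new_checkout", users=["beta_tester_7"], rollout_pct=10)

def checkout(user_id):
    # The new code path is deployed to production but runs
    # only for users the flag selects; no redeploy to expand it.
    if flags.is_enabled("new_checkout", user_id):
        return "new checkout flow"
    return "existing checkout flow"
```

Because activation is a data change rather than a deployment, widening the rollout from 10% to 50% — or turning the feature off after a bad signal — takes effect immediately, which is what makes flags useful as learning infrastructure rather than just a release mechanism.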
The Portfolio View: How CPOs Balance Explore vs. Exploit Across Product Lines
The explore-exploit tradeoff is one of the foundational challenges in any adaptive system, from individual product teams to entire product portfolios. Explore too heavily and you generate learning without building on it; exploit too heavily and you optimize an existing capability past the point of relevance while missing the next wave of value creation.
Proto-Personas: How to Create User Alignment in Under an Hour
Traditional user personas are valuable tools when they are done well: research-grounded representations of real user segments, built from interview data, behavioral analytics, and observational research, that help teams make design and product decisions from a shared user model. They are also expensive, time-consuming, and frequently wrong in ways that are not apparent until late in the product development cycle.
Writing Better User Stories: Why You Need 'Hypothesis Statements' Instead
The 'As a user, I want [feature], so that [benefit]' format has been the default template for user stories in agile teams for over two decades. It has real virtues: it keeps stories focused on user needs rather than technical implementation, it creates a common language across design and engineering, and it makes stories small enough to fit within a sprint.
Instrumentation as a Feature: Why Measurement Must Be Built, Not Bolted On
In most product organizations, instrumentation is an afterthought. A feature is specified, designed, built, and shipped — and then someone realizes that there is no way to know whether it is working. A tracking request is filed, an analytics implementation is added in a subsequent sprint, and by the time the measurement infrastructure is in place, the feature has been live for three weeks and the baseline data needed to assess its impact is gone.
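The alternative the paragraph argues for — instrumentation shipped as part of the feature itself — can be sketched as follows. Everything here is hypothetical (the event sink, the `export_report` feature, the event names); the point is only that tracking calls live inside the feature code, so baseline data exists from the feature's first day in production.

```python
import time

class EventLog:
    """Stand-in analytics sink; a real system would forward
    events to a pipeline or analytics vendor."""

    def __init__(self):
        self.events = []

    def track(self, name, **props):
        self.events.append({"event": name, "ts": time.time(), **props})

log = EventLog()

def export_report(user_id, fmt):
    # Instrumentation is part of the feature's definition of done,
    # not a follow-up ticket filed after launch.
    log.track("report_export_started", user_id=user_id, format=fmt)
    result = f"report.{fmt}"  # stand-in for the actual export work
    log.track("report_export_completed", user_id=user_id, format=fmt)
    return result
```

The design choice worth noting: paired started/completed events make completion rate and duration measurable without any retroactive tracking request, which is precisely the baseline that is lost when measurement is bolted on weeks later.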
From Story Points to Outcomes: Coaching Teams to Measure What Matters
If you have been coaching agile teams for more than a few years, you have had this conversation: a team proudly reports that their velocity has climbed from 32 to 58 story points per sprint. Everyone in the room nods approvingly. But when you ask which user behaviors changed as a result of what the team shipped in those high-velocity sprints, the room goes quiet.