I listened to Lenny’s podcast the other day. His guest was Maggie Crowley, VP of Product at TripAdvisor. One thing she said really stuck with me: don’t just trust your data. Talk to your users.
Lately, I’ve been feeling a bit drained. We spend hours every day talking to AI, yet somehow feel more and more reluctant to talk to real humans. Data can tell you what is happening, and AI can help you think faster, but neither can replace the energy, nuance, and grounding that comes from an actual conversation with another person.
Really enjoyed this piece. It captures a shift I have been feeling in my own work.
One thing I would add is that discovery with AI feels less about finding the right feature and more about understanding what you are asking the system to do on the user’s behalf. You are not just building a tool anymore. You are shaping decisions, interpretations, and follow-through.
That changes discovery in a few subtle ways:
You start focusing on decisions instead of flows. You spend more time exploring edge cases and “what could go wrong,” not just ideal outcomes. And you pay closer attention to trust. When do users feel helped, when do they feel confused, and when do they stop relying on the system altogether?
Teams that get this right are not skipping discovery. They are just doing a deeper, more responsibility-aware version of it.
Thanks for putting words to this shift.
This breakdown of how AI shifts the discovery equation is super sharp. The insight that desirability becomes the bottleneck when building gets commoditized really plays out in practice - I've seen teams ship fast prototypes that nobody wants because they skipped the human conversation part. The "Should this exist?" framing over "Can we build this?" is kinda the whole game shift right there.
“Continuous Discovery Habits” was required reading from my boss at my last gig -- this is a great bridge from that idea into a modern context, with useful tactics 🖤 nicely done