When any LLM can solve your coding problem in five seconds, interviewing for “the right answer” is interviewing for the past. What we need now are engineers who can create clarity, make trade-offs, and own outcomes—not just write code on command.
I learned this the hard way.
I’ve run teams where execution interviews produced fast coders who stalled the moment the spec was fuzzy.
Momentum looked high. Progress wasn’t.
The problem wasn’t talent. It was how we filtered for it. We were testing fluency, not judgment. And judgment is the differentiator in the AI era.
Interviewing devs has to shift
Execution interviews are designed for stable problems. But product work is not stable.
The dev’s real job is navigating ambiguity, balancing trade-offs, collaborating across disciplines, and staying anchored to user outcomes. You won’t see that in a timed coding puzzle. You will see it in how someone frames a messy problem and asks better questions.
As building gets cheaper and faster, the bottleneck shifts from “Can we code it?” to “What should we build and why?” Interviewing has to shift with that.
What to test instead
Knowing is cheap. Choosing is expensive.
It’s silly to quiz candidates on trivia they’d Google—or ask AI—to answer. What matters now is whether they can architect a path through uncertainty, identify risks, and define what success looks like for the user.
If you want product thinkers, stop testing for memorized solutions and start testing for how someone reasons when the answer isn’t obvious.
The Product-Thinking Interview Approach
At Full Scale, we use this step-by-step pattern in one 60–75 minute session.
You can do it without slides or a whiteboard trick. You just need a realistic, fuzzy scenario and a rubric that rewards signal over slickness (and you can steal ours).
0) Set the scene (2 min)
Give a short, imperfect brief:
A lot of first-time users bounce on step 2 of signup. We don’t know why. You’ve got one sprint to improve activation.
This mirrors reality: partial data, competing priorities, unclear constraints.
The point is to watch how they create clarity. Ambiguity compounds and kills momentum. Great candidates reduce it fast.
1) Questions before answers (8–10 min)
Invite the candidate to ask anything.
They get points for: user empathy, metrics curiosity, success definition, edge cases that matter.
They get dinged for: jumping to architecture, optimizing before understanding, asking for perfect specs.
2) Define success & constraints (5 min)
Ask:
What outcome would prove we made the right trade-offs?
Look for crisp, outcome-first language, not feature lists. Great candidates propose a measurable bet and call out what they won’t do to protect it.
3) Make it messier (10 min)
Introduce a constraint:
Design has limited capacity.
Or
Legal requires email verification.
See how they rebalance scope, sequence, and risk. Product thinkers narrate the trade-off and its user impact.
4) Propose a first slice (10 min)
Ask for a scrappy, testable plan:
What’s your smallest shippable slice to learn fast?
Look for simplification, not heroics. Bonus if they design an instrumentation plan to validate the bet.
5) Collaborators & communication (5 min)
Ask:
Who do you pull in and when?
You want cross-functional awareness and the ability to narrate intent so decisions propagate. Speed of vision beats speed of code.
6) Risks & safeguards (5 min)
Ask:
What could backfire for the user?
Strong answers surface user harm, adoption friction, and platform risks, then propose mitigations. Leaders make caution explicit.
7) Reflection (3 min)
Ask:
What did you learn as we talked?
You’re testing coachability and metacognition. Can they adjust their plan as reality shifts?
The scorecard
Rate 1–5 on each axis:
Curiosity & Questions: depth, sequencing, signal-seeking
User Orientation: talks about impact, not only implementation
Trade-off Clarity: names what’s in, what’s out, and why
Outcome Definition: proposes a measurable bet, not outputs
Systems Thinking: navigates constraints without thrash
Collaboration: knows who to involve and how decisions spread
Judgment Under Ambiguity: simplifies the path forward
Ownership Signals: says “I’d own X,” not “I’d wait for a ticket”
Anti-signals: speed-running a solution, arguing frameworks instead of outcomes, fear of saying “it depends,” or asking zero questions before proposing work.
Questions worth stealing
Use a few of these, then be quiet and listen.
“What would you change about a product you use every day—and why?” You’ll hear how they notice friction, frame the user, and reason about impact.
“What won’t you do in the first sprint, and why?” Forces visible trade-offs.
“If you were the PM, would we even build this?” Permission to think upstream.
“Where could this backfire for the user?” Makes caution explicit and reveals judgment.
“How will you know you’re wrong quickly?” Tests learning speed over certainty.
How to adapt your loop for AI
Your goal isn’t to “catch” tool use. It’s to reveal thinking.
Don’t penalize using Google or AI to look up facts. Penalize shallow reasoning. It’s silly to quiz someone on what they’d look up on the job anyway.
Reward transparency. If they use a tool, ask what prompts they’d try and how they’d validate the output. You’re testing what questions they ask, not rote recall.
Keep the scenario anchored to users and trade-offs. That’s where product thinking shows up.
Calibrate the bar with culture
Hiring for product thinking without building for it is a fast path to churn. If you only reward speed, you’ll interview for judgment and then smother it. Create an environment that teaches, supports, and requires thinking: narrate trade-offs, assign real ownership, and celebrate people who protect outcomes—not just output.
Start small:
Change one interview question this week. Swap trivia for judgment.
Give a stretch problem to an engineer and back them while they learn.
Say out loud what you’re not doing—and why. Make boundaries visible.
These tiny moves send a loud signal about what matters on your team. And people copy what gets rewarded.
The bottom line
The job has changed. Your interviews should, too.
Hire for how they think—especially when the answer isn’t obvious.
Then build an environment where that thinking survives.
If you want to go deeper on what this looks like in the age of AI, I unpack more in this week’s conversation on Product Driven: Why We Still Need Software Engineers in the Age of AI with Brian Jenney. Tune in for the full discussion on how to hire for judgment—and grow it once they’re on your team.