I very frequently get the question: ‘What’s going to change in the next 10 years?’ And that is a very interesting question; it’s a very common one. I almost never get the question: ‘What’s not going to change in the next 10 years?’ And I submit to you that that second question is actually the more important of the two — because you can build a business strategy around the things that are stable in time. — Jeff Bezos
There is important work happening inside most product teams right now.
Tracking every new AI model release, every capability jump, every benchmark. GPT-5 dropped. Gemini got faster. Claude got cheaper. Cursor replaced half the engineering rituals. Agents are being wired into workflows that did not exist six months ago. There is a Slack or Teams channel for it. Someone owns the spreadsheet. This work matters. Staying current with what AI can do is real competitive leverage, and ignoring it would be its own mistake.
But there is another set of questions that has not moved. Not in five years. Not in ten. Not since the first product team ever sat in a room trying to figure out what to build.
These are first principle questions. And first principles do not expire. They do not get disrupted by a new model release. They do not show up in a benchmark.
They just sit there, waiting for someone to take them seriously again.
For product people sitting inside AI-assisted (or even AI-native) companies, that second question lands with particular weight. Not: what can AI do this quarter? But: what will still be true about building products ten years from now, regardless of the tools?
The answer is not flattering. It is the work most teams quietly skip, and have been skipping long before the first model benchmark ever got shared in a Slack channel.
So here are the five things that have not changed. And will not for decades to come:
1. Diagnosing the real customer problem
AI can generate solutions faster than any team in history. That speed makes one thing more valuable, not less: correctly naming the problem before you build.
Feed the wrong problem in. You get a faster wrong answer. The diagnostic work, sitting with customers, watching where they hesitate, understanding what they are actually trying to do versus what they say they are trying to do, that cannot be outsourced. It is slow. It is qualitative. It is the whole job.
Most teams treat problem definition as the thing you do before the real work starts. It is the real work.
2. The gap between what customers say and what they do
This is a human behaviour truth that predates software and will outlast AI.
Ask users what they want. They will tell you more features, more control, more options. Build it. Watch them ignore every single addition and default back to the simplest path available. Not because they were lying. But because people are poor predictors of their own behaviour, especially under pressure or cognitive load.
You cannot survey your way out of this gap. You have to watch people actually use the thing. In real conditions, not demos. The observation is irreplaceable, and no model trained on stated preferences will close it for you.
3. Whether the solution actually works
Shipping has never been faster. Validation has not sped up at all.
A real human still has to try the thing, feel it, decide if it solved their problem. That loop (build, put it in front of a real person, watch what happens, adjust) cannot be compressed beyond the speed of a human experiencing something. You can automate the build. You cannot automate the learning.
Teams that confuse shipping velocity with validation velocity are not moving faster. They are just accumulating wrong answers quicker.
4. Pricing and value fit
At the moment of purchase or renewal, a customer is making a perception call. Does what I get feel worth what I pay? This is psychological and contextual. AI can model elasticity curves and run pricing scenarios. It cannot feel the hesitation at €29 a month, or understand why a feature someone uses daily still does not feel worth the line item on a statement.
Value communication, packaging, the way a price sits in someone’s mind. That is a human problem. It will remain one.
5. Product-market fit signal reading
Retention curves flattening. Word of mouth starting to pull without you pushing. Churn dropping below a threshold. The Sean Ellis question, "how would you feel if this product disappeared?", crossing the number where it means something.
These signals exist in data. But reading them honestly, deciding whether early traction is real or just novelty, knowing whether the cohort that loves you is a market or just a cluster, that is judgment. It requires context AI does not have and intuition that comes from being inside the problem for long enough to know the difference.
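The mechanical half of that reading is trivial, which is exactly the point: the score computes itself, the judgment does not. A minimal sketch of the Sean Ellis score as a ratio (the survey answers here are invented for illustration, and 40% is the commonly cited benchmark, not a law):

```python
# Hedged sketch: computing a Sean Ellis PMF score from survey responses.
# The answers below are made up; the 40% benchmark is the threshold
# commonly attributed to Sean Ellis's product-market fit test.

def sean_ellis_score(responses):
    """Share of respondents answering 'very disappointed' to the
    question: how would you feel if this product disappeared?"""
    if not responses:
        return 0.0
    very = sum(1 for r in responses if r == "very disappointed")
    return very / len(responses)

# Invented cohort of 100 survey answers
answers = (
    ["very disappointed"] * 45
    + ["somewhat disappointed"] * 35
    + ["not disappointed"] * 20
)

score = sean_ellis_score(answers)
print(f"{score:.0%}")  # prints "45%", above the common 40% benchmark
```

The arithmetic says 45% and the benchmark says that is product-market fit. Whether those 100 respondents are a market or just a cluster is the part no script answers.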
The cycle that never stops
Here is what connects all five. They are not milestones. They are a loop.
You diagnose the problem. You watch real behaviour, not stated preference. You validate whether the solution works. You check if the value exchange holds. You read whether the market is actually there.
Then you do it again. Because the customer has moved. The market has shifted. Something you built six months ago is no longer solving the right thing.
Which means the two decisions that sit at the centre of this loop do not change either. What to build next. And what to kill from what came before.
Prioritisation is a bet on the future made with imperfect information. The decision to stop, to cut the feature, end the initiative, admit the thing did not work, requires judgment, honesty, and skin in the game. Three things AI does not have.
The teams that win the next ten years are not the ones who tracked AI the closest.
They are the ones who stayed embarrassingly obsessed with the basics.
That is the whole game.
