When did we stop arguing about the car?
A case for AI realism
Nobody argues about whether cars are good or bad. You learn to drive. You check the mirrors. You stay aware of what's around you and you get on with it.
When it comes to AI, the argument never ends. On one side, you've got the evangelists. AI is going to cure cancer, end poverty, and make your Monday morning effortless. On the other, the catastrophists. It's going to take your job, rot your brain, and eventually decide humans are inefficient. Both camps are loud. Both are certain. And neither is particularly useful.
AI realism is something different. And I think it's the only position worth holding right now. I also know it's not the most popular one. I'm okay with that.
Realism isn't cynicism
Realism isn't "AI is overhyped" dressed up in nicer language. A good driver doesn't romanticise the engine or fear the road. They understand the vehicle, read the conditions, and make decent decisions in real time.
Because here's what I actually see, working in this space day to day: AI does some things remarkably well. It drafts, summarises, spots patterns, speeds up the boring middle of complex work. In some cases it is faster than us, more precise than us, and in certain domains it knows more than us. That's not a threat. That's just the reality of what we've built. It's genuinely useful, often startlingly so.
It also fails confidently. It flattens nuance. It reflects back what you already think if you're not careful. It inherits the biases baked into the patterns it learned from. These aren't reasons to throw it out. They're reasons to know what you're working with.
We've been here before
Hype has a pattern. AI has had its booms, but it has also had its winters. There have been three. Each one was preceded by enthusiasm that outran capability by years: enormous excitement, breathless headlines, money pouring in. Then reality hit, trust collapsed, and the field went quiet. We are currently at the highest point on that curve, and arguably at the highest point of technological capability we have ever reached. We are doing things today that we would never have imagined even a decade ago. That's not a reason to panic. It's a reason to stay clear-headed.
So what does AI realism actually look like?
Three questions.
What is this tool actually doing here? Not what the pitch deck says. Not what the LinkedIn post promised. What is it literally doing in this specific context?
What am I keeping for myself? Judgment. Accountability. The decisions that carry real weight. Those stay with you. Not because AI can't approximate them (it can, sometimes uncomfortably well), but because the responsibility has to live somewhere human.
Who benefits and who doesn't? Every technology creates winners and losers. The car gave freedom to some and took livelihoods from others. AI is no different. Realism means asking that question out loud and not being afraid of the answer. Because the answer, even for those who don't benefit today, doesn't have to be a dead end. It can be the starting point for something better.
There is a commercial reality worth naming here. AI consultancies and enablers, the organisations actually building things, are more likely to work with clients who believe AI can solve everything than with those who question everything. And honestly? That makes sense. Ambition is where progress starts. But that's where the realist earns their place. They direct that ambition into a business case. They understand the ROI of AI efforts before any money is spent, not to curb enthusiasm, but to understand feasibility and set honest expectations. Without that, ambition is just enthusiasm with a budget. It's part of the reason I chose to study AI for business, not to add to the hype, but to be more effective at directing it. To bridge the gap between what AI can do and what a business actually needs.
In conclusion
Here's the thing about cars. When they changed the world, we didn't argue forever. We built roads, wrote rules, and learned to drive. Nobody waited for consensus. They made a call and dealt with what came. Some days they got it wrong. They adapted.
That's all AI realism is asking of you. Not expertise. Not certainty. Just enough clarity to make a decent call and enough honesty to update when you're wrong. I know this isn't the take that goes viral. But I'd rather be useful than loud.
You don't have to love AI. You don't have to fear it. You just have to learn to drive.
Thought pondered by Sarah, exploring the intersection of AI, creativity, and human wellbeing