The Real Cost of Getting AI Wrong
I am not being alarmist. I am being precise.
The growing gap is not between people who use AI and people who don’t. That gap closes fast. The dangerous gap is between people who understand what they’re using and people who believe using it is the same as understanding it.
One group will build. The other will be managed by what was built.
Institutions that treat AI literacy as a software rollout will produce graduates who are efficient operators of tools they cannot interrogate, cannot challenge, and cannot improve. That is not a workforce. That is an upgraded assembly line.
We didn’t create this problem. We inherited it. Elon Musk said it plainly: “Everyone goes through from 5th grade to 6th grade to 7th grade like it’s an assembly line. But people are not objects on an assembly line.”
He’s right. The model was engineered for a factory economy. Standardised inputs. Predictable outputs. Grade the batch. Ship the batch. Repeat.
Now that economy is gone. The assembly line should have gone with it.
But here’s what nobody wants to say out loud: we didn’t dismantle the assembly line. We simply digitised it.
The Illusion of Progress
Schools everywhere are announcing that AI literacy is now mandatory for all students. Free AI tools for everyone. Headline-worthy. Board-meeting-ready. Well meant.
And almost entirely missing the point.
Procurement is not education. Distributing tools is not teaching people how to think. What most institutions rushing to ‘do AI’ have done is install new machinery on the same factory floor. The conveyor belt still runs. The students still stand in line. The only difference is the machine next to them is now thinking for them.
We’ve seen this story before. The web was supposed to democratise knowledge. Social media was supposed to give everyone a voice. Cloud software was supposed to level the playing field. Twenty-five years into the digital revolution, we are still watching the same people get left behind, just with faster internet.
The Students Are Already Ahead — And Deeply Confused
Here’s the uncomfortable truth that no curriculum committee wants to admit: students already use AI more fluently than their teachers. They’ve found the shortcuts. They’ve stress-tested the outputs. They know which prompts work.
What they’re experiencing now is a genuinely strange cognitive dissonance. Learn from the machine. Infinitely patient, always available, never judgmental. Then get graded by a human who is overworked, inconsistent, and operating on rubrics designed for a pre-AI world.
The grievance is real. The students aren’t being dramatic. They are navigating two fundamentally incompatible ways of learning simultaneously, and nobody in the institution has acknowledged the contradiction, let alone resolved it.
You cannot build a new model of learning on top of an old model of assessment.
The foundation will crack.
I’ve seen the crack. I’ve watched it widen in real time.
Those confused students grow up. They enter the workforce. Some of them show up at developer meetups, building with AI prompts, shipping prototypes at speed, calling it innovation. The energy is infectious. The confidence is real.
So is the gap.
Ask them about the decision behind a data structure. Blank. Ask them what happens when the logic breaks at scale. Uncertain. Ask them how they debug. The confident answer: “I haven’t read code in a long time.”
This is not a generation that was failed by laziness. They were failed by a system that gave them tools and called it education. They were taught to get answers. Nobody taught them to question the answer. Nobody taught them that the quality of the output is only as good as the thinking that preceded the prompt.
The vibe coders aren’t the problem. They are the consequence.
The consequence of outsourcing critical thinking before it was ever properly taught. The consequence of measuring students on outputs in a world where outputs are now infinite and free.
Fluent. Fast. And operating without a foundation.
That is what the assembly line produces when it gets a software update and mistakes it for an upgrade.
What Were We Trying to Build?
Teachers teach. Learners learn. That was the premise. AI was supposed to come in and make both better. Sharper teaching, deeper learning, better outcomes for everyone in the room.
What I’m watching instead is something quieter and more troubling. Teachers becoming administrators of tools they don’t understand. Students becoming operators of answers they didn’t earn. The relationship between the two, that fundamentally human transaction of knowledge passing from one mind to another, slowly hollowed out by the machinery we installed to improve it.
I don’t have a clean answer to this. I’m not sure anyone does yet.
What I know is this: we are at a moment where the question still matters. Where we can still ask whether we are using AI to enhance what it means to learn. Or using it to replace the discomfort that learning requires.
Because that discomfort, the struggle, the confusion, the moment before understanding arrives, is not a problem to be optimised away. It is the learning. It is where critical thinking is forged. It is what no tool, however powerful, can manufacture on your behalf.
We built tools to help people think better. Instead, we built a generation that stopped thinking altogether.


