Where AI Actually Lives in the Real World
I sat through two back-to-back sharing sessions at an AI developer meetup recently. Both speakers were building production systems. Both were credible. Both told essentially the same story.
The reality I walked away with: in Singapore, corporate developers are still coding deterministically. And the provocative truth is this: AI has yet to earn a central position in the production stack. That should make AI evangelists uncomfortable. It should also make everyone else pay closer attention.
The Room Got Quiet When Someone Said “Banking”
Two speakers. Two completely different use cases. One identical conclusion.
The first was building a credit assessment service for a bank. The second was building a customer service chatbot workflow. Different industries, different problems, different teams, and yet both made the same architectural decision independently: build it deterministically, and deploy AI only where it earns its place.
I asked the banking developer directly where the AI sat in his stack. He didn’t even hesitate. He had ripped out the vector database from his retrieval pipeline entirely.
Traditional database queries, he said, were faster and more accurate for his use case. No embeddings. No semantic search. Just structured queries doing what they’ve always done well. The LLM handled one specific component in the pipeline and nothing more.
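To make the contrast concrete, here is a minimal sketch of the kind of deterministic retrieval he described, using Python's built-in sqlite3. The table, columns, and thresholds are illustrative assumptions, not his actual schema.

```python
import sqlite3

# Toy applicant table -- illustrative schema, not a real bank's.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE applicants (
        id INTEGER PRIMARY KEY,
        name TEXT,
        annual_income INTEGER,
        existing_debt INTEGER
    )
""")
conn.executemany(
    "INSERT INTO applicants VALUES (?, ?, ?, ?)",
    [(1, "Tan Wei", 85000, 12000),
     (2, "Priya N", 60000, 45000),
     (3, "J. Lim", 120000, 5000)],
)

# Deterministic retrieval: the same query over the same data always
# returns the same rows -- exact, auditable, reproducible. There is no
# "semantically close" result, only rows that match the criteria.
rows = conn.execute(
    "SELECT id, name FROM applicants "
    "WHERE annual_income >= ? AND existing_debt < ? "
    "ORDER BY id",
    (70000, 20000),
).fetchall()
print(rows)  # [(1, 'Tan Wei'), (3, 'J. Lim')]
```

A vector index answering the same question would rank applicants by similarity to a query embedding; close misses would still surface. Here, applicant 2 simply does not match, and never will until the data changes.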
The financial sector cannot accept a probabilistic answer. A credit decision either meets the criteria or it doesn’t. A compliance flag is either triggered or it isn’t. There is no “I’m 73% confident this applicant qualifies.” That’s not a feature. That’s a liability.
So neither developer built an AI system. They built rule-based systems with AI embedded at precisely the right point.
Domain Expertise Is the Real Architecture Decision
The developer didn’t remove the vector search on technical grounds alone. He removed it because he understood the domain.
Banking credit assessment operates on a maker-checker model, a dual-control process where one party prepares the decision and another independently validates it. This is not just a compliance formality. It is the institutional logic of how financial risk is governed. Every data retrieval in that pipeline needs to be exact, auditable, and reproducible. Not approximate. Not semantically close. Exact.
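For readers unfamiliar with the control, here is a toy sketch of maker-checker. The class, field names, and statuses are illustrative assumptions; real implementations live in workflow engines, not twenty-line scripts. The one rule that matters is in the comment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CreditDecision:
    applicant_id: int
    approved: bool
    maker: str
    checker: Optional[str] = None
    status: str = "PENDING_REVIEW"

def check(decision: CreditDecision, checker: str) -> CreditDecision:
    # The dual-control rule: the checker must be a different person
    # from the maker. A decision is not effective until an independent
    # party has validated it.
    if checker == decision.maker:
        raise ValueError("checker must be independent of maker")
    decision.checker = checker
    decision.status = "EFFECTIVE"
    return decision

d = CreditDecision(applicant_id=42, approved=True, maker="alice")
check(d, checker="bob")
print(d.status)  # EFFECTIVE
```

Notice what this demands of the data layer beneath it: if maker and checker can retrieve different records for the same applicant, the control is meaningless. That is why "semantically close" retrieval is disqualifying here.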
Vector search is powerful precisely because it finds things that are similar. It also means the result is built for ambiguity. But a credit workflow doesn’t want ambiguity. It wants the correct record, pulled cleanly, every time. SQL delivers that. Vectors don’t.
The decision to revert to traditional database queries wasn’t a step backwards. It was the developer applying 20 years of financial services logic to a technology choice. That’s domain expertise in action. And no amount of AI enthusiasm overrides it.
This is the insight the hype cycle consistently buries: the intelligence in AI implementation doesn’t come from the model. It comes from the human who decides where the model goes.
Meanwhile, the Vibe Coders Were Having Fun
A few weeks ago I attended a different kind of meetup. Vibe coders who build with AI prompts, exploring what’s possible, shipping prototypes at speed, and never checking the source code. The energy was infectious. The curiosity looked genuine.
But here’s the hard truth: not one of those sessions would survive a production environment.
Vibe coding is exploratory by design. You’re asking “what can this do?” rather than “what should this do, given these constraints, this compliance requirement, this failure mode?” Those are fundamentally different questions. The first is a sandbox. The second is a system.
The banking developers at the AI meetup weren’t less creative. They were more responsible, operating under tight compliance requirements. They’d already asked the exploratory questions, hit the walls, and made the considered choices. The SQL decision wasn’t a lack of imagination. It was the product of experience, of having seen what breaks in production, what auditors ask for, and what a maker-checker process actually demands of a data layer.
Excitement is the starting point. Domain expertise is what’s required to finish the job.
So Is AI Probabilistic Thinking Dead in Modern Software?
This question deserves an honest answer.
I’ve built a data research system. By design, its outputs are probabilistic: predictive models, pattern discovery, ideation surfaces. The system is supposed to deal in likelihood, not certainty. That’s the entire point.
But here’s what makes it work: the data layer underneath it is fully deterministic. Data is empirical by nature. It either exists or it doesn’t. It either meets the quality threshold or it doesn’t. The intelligence of the system sits on top of a foundation that has no tolerance for ambiguity. You don’t build a probabilistic intelligence on a probabilistic foundation. That’s not a research tool. That’s a hallucination machine.
This is the distinction that matters, and it’s the one most people miss when they argue about whether AI belongs in production systems.
The question was never “probabilistic or deterministic.”
The question is: which layer are you talking about?
The data layer is deterministic. The logic layer is deterministic. The compliance layer is deterministic. The insight layer, where pattern recognition, language understanding, and analytical reasoning add genuine value, is where probabilistic models earn their place.
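That layering can be sketched in a few lines. Everything here is an illustrative assumption: the thresholds are invented, and the "insight layer" is stubbed with a trivial heuristic standing in for a model. The point is only where determinism ends and probability begins.

```python
def data_layer(record: dict) -> dict:
    # Deterministic: a record is either complete or it is rejected.
    required = {"applicant_id", "income", "debt", "notes"}
    missing = required - record.keys()
    if missing:
        raise ValueError(f"incomplete record: {sorted(missing)}")
    return record

def compliance_layer(record: dict) -> dict:
    # Deterministic: the flag is either triggered or it isn't.
    if record["debt"] > record["income"]:
        raise ValueError("debt exceeds income: hard stop")
    return record

def insight_layer(record: dict) -> float:
    # Probabilistic territory: in a real system this is where a model
    # might read the free-text notes. Stubbed here with a trivial
    # heuristic score in [0, 1].
    return min(1.0, record["income"] / (record["debt"] + record["income"]))

record = {"applicant_id": 7, "income": 90000, "debt": 30000,
          "notes": "stable employment, ten-year tenure"}
score = insight_layer(compliance_layer(data_layer(record)))
print(round(score, 2))  # 0.75
```

The model only ever sees records that survived the deterministic layers. Swap the order, and every downstream rule inherits the model's uncertainty.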
The banking developer knew this intuitively. His data retrieval had to be exact because the layer above it, that credit decision logic, had zero room for error. The vector search wasn’t wrong as a technology. It was wrong for that layer.
The Developer of the Future Isn’t Choosing Between Two Worlds
The future belongs to developers who are fluent in both modes, disciplined enough to build rigorous, auditable, production-grade foundations, and creative enough to identify exactly where AI analytical capability changes what’s possible. Not everywhere. Not nowhere. Precisely where.
That requires something no model can generate on their behalf: domain knowledge, hard-won experience, and the professional judgment to know the difference between a sandbox and a system.
The real question for organisations investing in AI right now isn’t “how much AI can we add?”
It’s “do our people have the judgment to know where it belongs?”
That’s a training problem before it’s a technology problem.


