Stop Calling Everything AI: A Rant From Someone Who Builds This Stuff

If I have to sit through one more product demo where someone calls a simple if-else statement "AI-powered," I might lose what's left of my sanity, which, granted, wasn't much to begin with after a decade of watching the software industry discover the same design patterns over and over again while pretending they've revolutionised computing, but this current trend of slapping "AI" on everything from autocomplete functions to load balancers has crossed the line from marketing hyperbole into outright fraud.

I build software for a living, and I've been working with actual machine learning systems since before everyone decided that ChatGPT was going to replace human thought, and let me tell you something: most of what gets sold as "AI-powered" these days is about as artificially intelligent as my morning coffee routine, which, while admittedly more sophisticated than the decision-making processes I see in many enterprise applications, does not constitute artificial intelligence by any reasonable definition.

If-Statements Are Not AI

Let's start with the most egregious offender: calling conditional logic "AI," which is like calling a hammer a "gravity-powered demolition system" or describing walking as "human-powered transportation optimisation," because apparently we've collectively decided that using precise terminology is less important than making our software sound like it was crafted by some benevolent robot overlord rather than a developer who copy-pasted it from Stack Overflow.

I've seen CRM systems advertise "AI-powered lead scoring" that literally just adds up point values based on predefined criteria, marketing automation platforms boast about "intelligent targeting" that amounts to basic segmentation rules, and deployment tools claim "AI-driven optimisation" that simply checks if CPU usage is above 80% before scaling up instances, which is the sort of logic we've been implementing with shell scripts since the Clinton administration.

Here's a simple test: if you can implement your "AI" system with a series of if-then-else statements, a lookup table, or a SQL query with some basic arithmetic, it's not artificial intelligence, it's just software doing what software has always done, which is following instructions written by humans who hopefully understand what they're trying to accomplish.
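To make the test concrete, here's roughly what one of those "AI-powered lead scoring" systems looks like under the hood. The rule names and point values are made up for illustration, but the shape is the point: a lookup table and addition.

```python
# A hypothetical "AI-powered" lead scorer: a lookup table plus addition.
# Every name and weight here was chosen by a human, not learned from data.
SCORE_RULES = {
    "opened_email": 10,
    "visited_pricing_page": 25,
    "requested_demo": 50,
}

def score_lead(events):
    """Sum predefined point values for each event -- no learning involved."""
    return sum(SCORE_RULES.get(event, 0) for event in events)

# Same input, same output, forever: the telltale sign it isn't AI.
print(score_lead(["opened_email", "requested_demo"]))  # 60
```

If your vendor's "AI" can be reimplemented in a dozen lines like these, you now know what you're actually paying for.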

Rule Engines: Still Not AI, No Matter How Complex

The next tier of AI washing involves taking traditional rule engines, adding a fancy user interface, and pretending that somehow makes them artificially intelligent, which is like putting a touchscreen on a calculator and calling it a quantum computer, because apparently the complexity of the rule set determines the intelligence level of the system, rather than, you know, the actual presence of machine learning algorithms that can adapt their behaviour based on data.

I've seen business process automation tools that let you create decision trees with hundreds of branches, fraud detection systems that apply dozens of hand-weighted risk factors to calculate scores, and recommendation engines that suggest products from hand-curated popularity lists, all of which get branded as "AI solutions" despite being deterministic systems that produce the same output for the same input every single time.
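A fraud "risk engine" of the kind described above can be sketched in a few lines. The rules and weights below are invented for illustration, but notice what's missing: nothing in this system ever changes unless a human edits the code.

```python
# A hypothetical fraud risk engine: complex-looking, useful, and entirely
# deterministic. Each rule is a predicate paired with a human-chosen weight.
RISK_RULES = [
    (lambda txn: txn["amount"] > 5000, 30),          # large transaction
    (lambda txn: txn["country"] != txn["card_country"], 25),  # country mismatch
    (lambda txn: txn["hour"] < 6, 10),               # odd hour of day
]

def risk_score(txn):
    """Apply every rule whose predicate matches and sum the weights."""
    return sum(points for rule, points in RISK_RULES if rule(txn))

txn = {"amount": 7200, "country": "BR", "card_country": "DE", "hour": 3}
print(risk_score(txn))  # 65
```

Add a drag-and-drop UI on top of that list and you have most of an enterprise "AI fraud platform" pitch deck.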

Don't get me wrong: complex rule engines can be incredibly useful and sophisticated pieces of software that solve real business problems, but calling them AI is like calling a recipe book a "culinary intelligence system" just because it contains detailed instructions for combining ingredients in specific ways to achieve desired outcomes.

What Actually Counts As AI (It's More Specific Than You Think)

Actual artificial intelligence involves systems that can learn, adapt, and make predictions or decisions based on patterns in data that weren't explicitly programmed by the developers, which means you need machine learning algorithms, training data, model parameters that get adjusted during the learning process, and some form of inference engine that can handle novel inputs it hasn't seen before.

This includes things like neural networks trained on large datasets to recognise images, natural language processing models that can understand and generate human text, recommendation systems that use collaborative filtering or matrix factorisation to find patterns in user behaviour, and reinforcement learning systems that can optimise their strategies through trial and error.

The key difference is adaptation: real AI systems change their behaviour based on experience, while rule engines and conditional logic systems do exactly what they're programmed to do, every time, forever, until someone manually updates the code, which is a fundamentally different paradigm that shouldn't be conflated just because both approaches can automate decision-making.
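To see what adaptation actually means, here is about the smallest honest example of machine learning there is: a perceptron that learns the logical AND function purely from labelled examples. Nobody writes the decision rule; the weights start at zero and adjust themselves in response to mistakes.

```python
# A minimal perceptron: behaviour is determined by data, not hand-written rules.
def train(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])  # weights start knowing nothing
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # Parameters adjust in response to errors -- this is the learning.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn the AND function purely from examples, with no AND logic anywhere.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # [0, 0, 0, 1]
```

Give it different training examples and the same code learns a different function; that, and not the size of your rule set, is the dividing line.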

Why This Matters (Beyond My Personal Sanity)

The reason I'm ranting about this isn't just because I'm a pedantic developer who gets annoyed by imprecise terminology, though I definitely am that, but because this widespread AI washing is actively harmful to the industry and to organisations trying to make informed decisions about which technologies to invest in and how to solve real problems with software.

It Makes Real AI Evaluation Impossible

When everything gets labelled as AI, it becomes impossible for non-technical decision makers to evaluate what's actually using machine learning and what's just traditional software with marketing spin, which leads to procurement processes where teams end up paying enterprise prices for glorified spreadsheet functions while missing out on tools that could genuinely benefit from artificial intelligence capabilities.

I've seen companies spend months evaluating "AI" chatbot platforms that turned out to be decision trees with natural language interfaces, invest in "machine learning" analytics tools that were just reporting dashboards with trend lines, and deploy "intelligent" monitoring systems that were essentially glorified alert rules with better visualisation.

It Sets Unrealistic Expectations

The other problem with AI washing is that it creates unrealistic expectations about what these systems can do, which leads to disappointment when the "intelligent" software behaves exactly as predictably as any other rule-based system, doesn't adapt to changing conditions without manual intervention, and fails to handle edge cases that weren't anticipated by the original developers.

Real AI systems have limitations, require training data, can make mistakes, and need ongoing maintenance and monitoring, while traditional software systems have different limitations, require clear specifications, make predictable errors, and need different kinds of maintenance, and conflating these two categories makes it harder to set appropriate expectations and plan for successful implementations.

What Actual AI Engineering Looks Like

Since I spend my days working with both traditional software systems and actual machine learning implementations, let me describe what real AI engineering involves, so you can recognise it when you see it and avoid being fooled by marketing departments who think adding the word "intelligent" to feature names counts as artificial intelligence.

Actual AI engineering involves data collection and preparation, which means gathering representative training datasets, cleaning and labelling data, handling missing values and outliers, and creating validation sets to test model performance, followed by model selection, hyperparameter tuning, and training processes that can take hours or days to complete.

Then comes the real work: testing model performance on unseen data, analysing failure cases and edge conditions, implementing monitoring systems to detect model drift and performance degradation, setting up retraining pipelines to handle changing data patterns, and building robust inference systems that can handle production loads while maintaining reasonable latency.
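Two of the steps above can be sketched concretely: holding out a validation split the model never trains on, and a crude drift check that flags when live inputs wander away from the training distribution. The function names and the three-sigma tolerance are my own illustrative choices, not a standard API.

```python
import random
import statistics

def train_validation_split(rows, val_fraction=0.2, seed=42):
    """Shuffle once with a fixed seed, then hold out a slice for validation."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - val_fraction))
    return rows[:cut], rows[cut:]

def drift_alert(training_values, live_values, tolerance=3.0):
    """Flag when the live mean drifts beyond N standard deviations of training."""
    mu = statistics.mean(training_values)
    sigma = statistics.stdev(training_values)
    return abs(statistics.mean(live_values) - mu) > tolerance * sigma

train_rows, val_rows = train_validation_split(list(range(100)))
print(len(train_rows), len(val_rows))  # 80 20

baseline = [9, 10, 11, 10, 10]
print(drift_alert(baseline, [50, 51]))  # True: inputs have shifted
print(drift_alert(baseline, [10, 11]))  # False: still in distribution
```

None of this plumbing exists in a rule engine, because a rule engine has no training distribution to drift away from.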

This is fundamentally different from writing conditional logic or configuring rule engines, requires different skills and tools, involves different trade-offs and failure modes, and produces systems with different operational characteristics and maintenance requirements.

A Modest Proposal: Truth in AI Advertising

Here's my modest proposal for bringing some sanity back to technology marketing: if your system doesn't learn from data and adapt its behaviour over time, you can't call it AI, just like you can't call a bicycle a motorcycle just because it has wheels and gets you from point A to point B.

Instead of "AI-powered," try "rule-based," "automated," "algorithmic," or "logic-driven," which are perfectly respectable terms that accurately describe what your software actually does without implying that you've somehow achieved artificial general intelligence in your customer relationship management system.

And if you actually are using machine learning, be specific about it: mention the algorithms you're using, describe the training data and evaluation metrics, explain how the system handles new scenarios, and provide information about model performance and limitations, because people evaluating AI solutions deserve to know what they're actually getting.

The software industry has enough genuine innovation happening without needing to rebrand basic programming concepts as artificial intelligence, and customers deserve accurate information about the tools they're considering, whether those tools use machine learning, rule engines, or just really well-organised if-statements.

Now if you'll excuse me, I need to go update my LinkedIn profile to describe my coffee-making skills as "AI-powered beverage optimisation engineering," because apparently that's the world we live in now, and if you can't beat the marketing departments, you might as well join them in their relentless quest to make everything sound more impressive than it actually is.

Ray Timmons

Head of Platform Development at Podsphere. Over a decade of experience building systems that actually work, when the stars align and the coffee is strong enough.