The secret of freedom lies in educating people, whereas the secret of tyranny is in keeping them ignorant. - Maximilien Robespierre.

Saturday, September 06, 2025

The Existential Threat Of AI Is Overhyped - Artificial Intelligence Is Incredibly Stupid

 

All the talk at the moment seems to be about Artificial Intelligence (AI). Will it destroy the global economy? Will it put billions of people out of work because it can do their jobs faster, more efficiently and less expensively? Will it enable total surveillance and micromanagement of all human activity, empowering generations of tyrants? Will it surpass our intelligence, turn on its creators and destroy us?

True, there are thousands of fun things you can do with AI. It can fake your voice accurately and animate a photo of you, then send video messages that appear to be from you to complete strangers and make you look an utter twat. It can compose pleasant but boring, soulless music and paint pretty but similarly soulless pictures. It can edit videos with nice results for TikTok. It can write an instant, technically perfect (but soulless) poem or lyric on any subject. It can instantly bang out an article on any topic. In every case, the results are delightful and very impressive.

And yet in every case, the results are obviously generated by a machine. Once you learn to recognize the telltale signs, it is unmistakable. And then the whole experience becomes boring and unimpressive.

People ask me whether I, as a writer, feel threatened by this machine learning and instant prose generation. The answer is a simple but emphatic NO! Good writing, well-constructed stories and articles, and good ideas come from a spark that only the human mind can generate. No matter how sophisticated AI gets, it can never reproduce this. In fact, I find it amusing how bad this software really is. In simple terms, the people who wrote the AI software have no real idea what intelligence is or how it works (nobody really does), and so can only work with the string-matching techniques programs have used since I loaded my first effort from punched paper tape into an English Electric LEO III mainframe in 1968.

The search engine I use, DuckDuckGo, has recently introduced an AI search assistant. Yesterday I wanted to buy a stand-mounted food mixer with dough hooks, and the search assistant helpfully displayed, in a text window, an accurate description of what a food mixer with dough hooks is and does. Very nice; it's a pity the thought never occurred to the AI engine that if I didn't know what a food mixer with dough hooks is and does, I would not have been able to ask that specific question.

The information the Search Assistant generated was in a way quite revealing, however: "A stand mixer is ideal for making dough as it provides consistent kneading with less effort. The Ankarsrum Original Stand Mixer is highly recommended for its unique design that effectively handles bread dough, while the KitchenAid models are also popular for their versatility and power." This told me that, far from assisting my search, the software is an ad server for ridiculously expensive equipment: great for a restaurant or professional kitchen, but totally inappropriate for my kneads (pun intended).

What the software had done in that answer mimics thought, but without the slightest spark of creativity, much less any hint of understanding the scope of the question. A sales clerk in a shop or a telesales person would have asked at once something like, "Do you need it for home use or a catering business?" In other words, AI is capable of astonishing feats of data collation and presentation but utterly incapable of actual intelligence. It's like a sophisticated parrot: it seems to speak English but in reality only mimics noises without any understanding of what they may mean. Very similar to a fondly remembered British TV stunt from the 1980s featuring a talking dog (YouTube video). The dog could only say 'sausages', but at least dogs can appreciate sausages, whereas an AI bot could not.

Two CEOs of prominent AI startups, Sam Altman of OpenAI and Anthropic's Dario Amodei, have invested a lot of money and effort in hyping a speculative and unproven hypothesis called scaling: the idea that training AI models on ever more data using ever faster hardware will lead to AGI (artificial general intelligence) or even *superintelligence* that surpasses humans.

LLMs (large language models, whatever that means), which power systems like ChatGPT, are in reality nothing more than pimped-up statistical regurgitation apparatus, so hallucinations, inaccuracies and fallacious reasoning will continue to bedevil them.

To be sure, adding more data to large language models, which are trained to produce text by learning from vast databases of human-created text, helps the models improve, but only to a degree. Even significantly scaled, the models still don't *catch on* to the concepts they are exposed to, or know anything about those concepts beyond whatever predictive algorithms can lay at their doorstep, which is why chatbots like ChatGPT, based on LLMs, botch responses. As I have written at least a hundred times, 'the great thing about computers is they can parse and filter millions of words of data in a few seconds; unfortunately, a computer will never have any idea what a single one of those words means.'
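To make the "statistical regurgitation" point concrete, here is a minimal sketch in Python of next-word prediction reduced to its crudest bigram form. The toy corpus and the `parrot` function are my own invented illustration, not anything from a real LLM: the program strings together words it has seen follow one another, with no notion whatsoever of what any word means.

```python
import random
from collections import defaultdict

# Toy corpus: a "vast database of human-created text" in miniature.
corpus = "the dog said sausages and the dog ate the sausages".split()

# Record which words have been seen following which -- this table is the
# model's entire "knowledge" of language.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def parrot(start, length=6, seed=0):
    """Generate text by repeatedly picking a word seen after the current one."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # nothing has ever followed this word in the corpus
        words.append(random.choice(candidates))
    return " ".join(words)

print(parrot("the"))
```

Every sentence it emits is locally plausible, because each adjacent word pair genuinely occurred in the training text; none of it is understood. A real LLM replaces this lookup table with billions of learned parameters over much longer contexts, but the principle is the same: predict the next token from the statistics of past text.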

In other words, AI and LLMs are not a stairway to superintelligence heaven.