Artificial Intelligence isn’t intelligent (and it won’t save us)

You cannot look anywhere over the last two years without seeing something about this or that AI. Many products are suddenly gaining AI functionality; often it is just a rebranding of an already existing feature. These days, it seems, AI means “the computer does something by itself”.

This, of course, is nothing new in the tech industry. Back in the 1990s there was a category called “telematics”. It was essentially the idea that you could automate repetitive processes. Why have a guy go and read a meter if a computer could read the sensor and let you know when something was wrong? Or maybe even open or close a valve to adjust flow rates automatically, based on a preset value?

A few years ago this was rebranded as “Industrial Internet of Things”. Remember the “Internet of Things”? A decade ago this was the next big thing as a continuation of “cloud computing”: suddenly all our devices could be accessible from anywhere. Whether you wanted them to be or not. Then crypto and blockchain took over the imagination of marketeers, grifters and tech bros. At present it is AI. It is supposed to make our lives easier. It is The Future™. Who doesn’t want a personal assistant that does their work for them?

Let me come clean: I do use what is now sold as AI. I have a ChatGPT account and have used CoPilot in my day job. These can be useful tools if you understand their limitations and do not expect them to do (all) of your work for you. 30 years ago, if I needed to know something I had to find a book that had the information. When I did System Administration I had the “SysAdmin handbook” on my desk as a quick reference guide. Google and other search engines eventually replaced these books and now AI is, in a way, the next version of it. It can be useful and help with common tasks, but much like Google could, and does, send you to shady websites with bad information, so these tools can produce false results. In AI parlance: They hallucinate.

One of the biggest selling features of AI is “summarization”: the capability of the service to quickly “read” a document (or documents) for you and then provide a succinct summary of the content. It is the killer feature Apple Intelligence is supposed to have in its current incarnation. When someone sends me multiple text messages, the preview now is, instead of the last message, a summary of what they are saying. Likewise, notifications about emails contain a summary of what the email said (or, if there are multiple, what they supposedly say in aggregate).

This works, mostly. But even Apple screws things up, as this story recently showed. Apple Intelligence screwed up so badly that the BBC felt the need to officially complain to Apple about it. There is a bit of irony here in that the screw-up was related to Luigi Mangione, the accused killer of the healthcare CEO. I am sure many a CEO would have loved for Mangione to have committed suicide.

As “funny” as this is, ChatGPT and CoPilot aren’t much better, and they are used daily in business. Over the last year I tested the summarization features of the different AIs. My approach was to feed my own documents into ChatGPT, CoPilot and Google’s NotebookLM, a research tool, asking each of them to summarize the documents for me.

They all made mistakes, sometimes getting a certain aspect wrong, other times completely misfiring and hallucinating things into the summary that weren’t in the original document. I fed them both fiction and non-fiction documents and they all made mistakes when they created their summaries. The biggest problem was that things were omitted from the summary. Why? No idea, but as the AI didn’t really know what was important to me, it was most likely guessing, based on its model, what would matter most.

To make it even worse, I asked for a summary of the same documents multiple times and never got the same answer. It seems the AI was just guessing, more or less at random, what was important and what was not. So each “roll of the dice” resulted in a different outcome. Considering the simplicity of the task, you would expect to get the same outcome every time based on the same input. Not with the current crop of AI, though.

In computer programming, one of the tests you perform is against a standard data set. When you make changes to the code, you run your tests against this data set, knowing what the answer is supposed to be. If your new code suddenly provides a different answer, then you know there is a bug in the code. But with AI? It seems unpredictability is a feature, not a bug.
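To make that contrast concrete, here is a minimal sketch. The functions are hypothetical stand-ins (nothing here calls any real product): a classic regression test asserts one known answer for a known input, while anything that samples its output, the way current LLMs do, can return a different result on every run.

```python
import random

def word_count(text: str) -> int:
    """Deterministic code: the same input gives the same output, every time."""
    return len(text.split())

def test_word_count() -> None:
    # Classic regression test against a known data set:
    # if this assertion ever fails after a code change, we introduced a bug.
    assert word_count("the quick brown fox") == 4

def toy_summarizer(text: str) -> str:
    """Stand-in for an LLM-style summarizer: it samples, so repeated calls
    with identical input can produce a different 'summary'."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    picked = random.sample(sentences, k=min(2, len(sentences)))
    return ". ".join(picked) + "."

if __name__ == "__main__":
    test_word_count()
    doc = ("Alice wrote the report. Bob reviewed it. "
           "The deadline is Friday. The budget was cut.")
    print(toy_summarizer(doc))  # run it twice and the output will likely differ
    print(toy_summarizer(doc))
```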

Likewise, asking it to create documents can lead to mixed results. Sometimes the output was good, other times it was for the birds. To be transparent, the tech companies themselves will tell you that your output vastly depends on your input, namely, what it is you want the AI to do. They have even created a job for this: Prompt Engineer, where you learn how to “get the most out of AI”.

But even if you give it the same prompt, you always get a slightly different output. Never the same; at times I found that it changed the entire document structure from one query to the next.

Colour me skeptical, but true intelligence shouldn’t require me to learn how to ask it questions. It should understand the concepts and, if in doubt, ask me follow-up questions when things aren’t clear.

The simple truth is that what we are being sold as AI is not intelligence. These “AI models” will not suddenly allow you to do things you can’t already do. You cannot tell ChatGPT to create a video game for you and expect to be presented with a full game, regardless of how verbose your prompt is. If you do ask it to create a video game, all it will generate for you are the steps required to make one. It will not actually write the code or create the art and gameplay for you.

Is this helpful? Maybe. Much like creating an outline when you start writing a story, knowing what the major tasks are to create a game can be useful. But if you don’t know how to write code, write a story / create a world, or create the digital assets you need to make the game a reality, you’re not really any closer to having made a game.

Likewise, for most knowledge-based jobs there are often so many specifics you need to know that the mostly generic answers AI provides are, at best, useful as a framework, but not much more. You will not suddenly become an expert in a field you do not already have knowledge in. If you do not know what the end result is supposed to look like, you will not have a lot of success asking the AI for help. Furthermore, AIs are trained on large amounts of data, and this creates a clear knowledge gap, especially around current events and recent changes. Depending on how old the data set it was trained on is, it may have missed important changes to whatever you are asking it about.

The reason for this is pretty simple: Large Language Models, which is what ChatGPT and CoPilot are, do not know anything. To vastly oversimplify things: they are large statistical models that guess at what the correct answer is. Or to quote from the Wikipedia link above:

The largest and most capable LLMs are generative pretrained transformers (GPTs). Modern models can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on.
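To put a little flesh on “statistical models that guess”, here is a toy illustration of the underlying mechanism, next-token sampling. The candidate words, scores and temperature value below are made up for illustration; real models work over tens of thousands of tokens, but the principle is the same: the output is drawn from a probability distribution, not computed as a single fixed answer.

```python
import math
import random

def softmax(logits: list[float], temperature: float = 1.0) -> list[float]:
    """Turn raw scores into probabilities; lower temperature sharpens them."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for the word following "The cat sat on the ..."
candidates = ["mat", "roof", "keyboard", "moon"]
logits = [3.0, 1.5, 1.0, -2.0]

probs = softmax(logits, temperature=0.8)
next_word = random.choices(candidates, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(candidates, probs)}, "->", next_word)
# Because the next word is sampled rather than looked up, running this again
# with the exact same input can give a different continuation, which is why
# identical prompts rarely yield identical documents.
```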

The more information you can feed it for the output, the better. When I go to Lucid.app and ask it to create a diagram for me, I will either get a generic one, based on a template, or, if I give it the individual steps, it will draw them for me. That’s great. It’s often faster to type (or dictate) something than to actually draw it in an app. But that is not really intelligence. It’s just a straightforward instruction set that requires no real intelligence or knowledge: “Do this, then do that, then that, etc.” Yet, it is sold as “AI”.

Many of the other AI solutions are similar. I use an app to record my emotional state; somewhere in the last few months it got an “AI assistant” that will allow you to “go deeper”. It is a modern version of Sound Blaster’s Dr. Sbaitso.

What Dr. Sbaitso and many of the more modern and capable chatbots do is mirroring, a well-understood psychological behaviour we humans engage in:

Mirroring is the behavior in which one person subconsciously imitates the gesture, speech pattern, or attitude of another. Mirroring often occurs in social situations, particularly in the company of close friends or family, often going unnoticed by both parties. The concept often affects other individuals' notions about the individual that is exhibiting mirroring behaviors, which can lead to the individual building rapport with others.

These bots play on our idea of how a human responds, and with the program reflecting ourselves back at us, we see intelligence (and humanity) where it is not.

Beyond text-based AIs like ChatGPT, there are now also versions that can create images or even videos and music.

They all have an uncanny-valley kind of look because, at the end of the day, these systems do not have a concept of the world. They are trained on images and try to mimic those styles based on the prompt you give the computer.

When shooting a movie, a lot of time is spent on how to light the scene, where to set up the camera, what focal length to use, where the focus is, what aperture is being used, and how the camera and the actors move. None of the AI programs can do any of that. They can try to copy certain looks, but whereas in the real world I can adjust the scene until it is right, all I can do with the AI is slightly alter my prompt and… then I get a completely new scene, often with some unexpected components.

The same problem that applies to the summarization feature mentioned above applies here too. There is no permanence; the AI has no concept of the world it exists in, and most of the time it does not even have any awareness of what it has created before.

This is very evident in one of the biggest promises of the last decade: self-driving cars. Which, according to Elon Musk, have been coming “this year” since at least 2016. They won’t. Because as any engineer can tell you, the first 70% of solving a problem is easy, and each additional percentage point afterwards gets progressively harder.

The reason why self-driving cars won’t be coming any time soon is all the edge cases. An example: you drive down the road and see a ball rolling into the street. You have a concept of the world. You know that balls don’t just roll around on their own. So there is a good chance that a kid will run after it in a moment. So you slow down or even stop.

What is a car to do? At best it recognizes an object rolling down the road, but as it has passed the lane, it can continue on. Then the kid runs into the path of the car and…

Okay, so now the engineers have a data point that says: “If you see something that looks like a ball rolling across your path, stop.” But what if it isn’t a ball, but instead a paper bag that looks like a ball? The default mode should be: fail safe. So stop the car. What if it was just a reflection that the camera or sensor mistook for a ball? It’s there for a second, but then not. Cancel the emergency brake and continue on? Or stop anyway? These are all decisions the model needs to make and apply.
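To make the point that every branch is an explicit decision someone has to write down, here is a grossly simplified, hypothetical sketch of such a fail-safe policy. The labels, thresholds and the choice to brake on a one-frame flicker are all invented for illustration; real systems fuse many sensors over time and are vastly more complex.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    label: str          # e.g. "ball", "paper_bag", "unknown"
    confidence: float   # 0.0 .. 1.0
    frames_seen: int    # consecutive frames the object appeared in

def should_brake(detection: Optional[Detection]) -> bool:
    """One possible 'fail safe' policy: when in doubt, stop the car."""
    if detection is None:
        return False  # nothing detected, carry on
    if detection.frames_seen < 2:
        # Could be a reflection or sensor noise, but failing safe means
        # braking anyway rather than gambling that it was nothing.
        return True
    if detection.confidence < 0.5:
        return True   # cannot tell what it is: fail safe and stop
    # A ball, a bag, or anything unidentified crossing the path: stop,
    # because something (or someone) may follow it.
    return detection.label in {"ball", "paper_bag", "unknown"}

print(should_brake(Detection("ball", 0.9, 5)))     # True: a kid may follow
print(should_brake(Detection("unknown", 0.2, 1)))  # True: fail safe on a flicker
print(should_brake(None))                          # False: clear road
```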

This is a very obvious example and I am sure that particular edge case has been 99% solved. But it is also an easy one to envision. Next time you drive at night, really pay attention to how the world looks and how many decisions you make to act on or discard the information you’re receiving / seeing. Most of these decisions are made because you know how the world works. The car does not.

To be clear, we humans suffer from problems too. We only really see a tiny part of what is in front of us, the rest is filled in by our brain. Did you see the bear?

This is not something the AI will struggle with; it “sees it all”. But seeing, and knowing what you’re seeing, are two different things.

And this is going to continue to be the problem. Sam Altman can claim that we will see AGI this year, but my prediction is that he will be no closer to it than Elon Musk is to “Full Self Driving”, his product naming notwithstanding.

Does this mean all these tools are useless? No. They can help you be more efficient / effective. They can also make us safer (e.g. pre-collision braking in modern cars), but that is a far cry from what Google, Microsoft or OpenAI are promising us. AI can replace many repetitive tasks in the “knowledge industry”, similar to how robots and automation replaced many factory jobs. Yes, that does include programmers. Unless you work on edge cases / new things, odds are pretty good that a lot of your job can be done by ChatGPT or similar tools. People like Mark Zuckerberg are already dreaming about what it will do to their share price.

On the flip side, AI will, can, and already does make certain parts of our lives worse. Companies are already delegating many decisions to algorithms; these will all be rebranded as AI, and because “the computer is smart” we are being told we will need to accept these choices, because computer says no.

At the same time, we now have Meta and Mark Zuckerberg, with Elon Musk and others surely not far behind, telling us that AI should be populating our social networks. The question is why? Aren’t these networks supposed to be about people connecting with each other? (This is a rhetorical question.)

I left social media behind, by and large, in the mid-2010s. I didn’t find a lot of value in it; if anything, I realized all it did was make me sad and / or angry. But reading recently about how the algorithms of TikTok, Twitter and Instagram have completely messed up things as simple as the timeline, and how specific narratives get pushed depending on the settings favoured by the owner of the network, makes it clear that we’re in for a rough time. We won’t be able to find each other because we will be drowned out in AI slop that only exists to allow the social networks to shove more ads into our faces.

So why add AI-generated content to the mix? Because “number must go up” is what is driving the tech industry, as others have noted as well. More about this shortly. They care about DAUs (Daily Active Users) and “engagement times” and other metrics. The valuation of many tech companies, especially social media, lies in how much time people spend on their service and, by extension, how many ads they can shove in front of your face, as this is the primary way most tech companies make money these days. They have all turned into ad delivery platforms, from Google to Facebook and Twitter. Websites are overloaded and plastered with ads, and often they also want you to subscribe on top. Surveillance Capitalism is big business.

In order to keep people coming back, you need more content. But humans are unreliable: they create content when they have something to share, and that’s no good for companies that require a constant stream of new content to keep people engaged, ideally emotionally invested, so they keep scrolling for that next dopamine hit while getting flooded with ads. Algorithms are already pushing a kind of normative experience on people, be it Spotify playlists / recommendations or TikTok’s fashion styles. Algorithms try to optimize engagement and keep you glued to your device in the most cost-effective way for the company that provides the service. They flatten the human experience by homogenizing our tastes.

Most people already struggle with understanding the world they are living in. This AI-generated slop will make things worse. That Nigerian Prince who needed your help to get the money out of the country? That was child’s play. We may not yet have killed the internet, but we are very quickly rushing towards it. Why? As Ed Zitron puts it: because the only thing that matters to Silicon Valley, and the markets in general, is that “number goes up”.

As he writes:

Our phones are beset with notifications trying to "growth-hack" us into doing things that companies want, our apps full of microtransactions, our websites slower and harder-to-use with endless demands of our emails and our phone numbers and the need to log back in because they couldn't possibly lose a dollar to somebody who dared to consume their content for free.

I can highly recommend his podcast (Better Offline) and his Substack. He rages way more eloquently at the current state of tech than I have the time to.

Before I close this out, Ed made another good point in one of his recent writings: the vast majority of us are being abused by the technology we are forced to use on a daily basis. I am lucky enough to be able to afford “decent shit”. I do not have to deal with a bottom-of-the-barrel laptop or phone. I know how computers, networks, and the internet as a whole work, and I am blocking a massive amount of crud on my network at the firewall. So my internet experience, in general, is much less abusive than the average person’s. But even I cannot escape this experience completely: recently my phone suddenly became slow to respond for several minutes. It was frustrating and irritating, as I was trying to find some information I needed. Mind you, this happens so rarely that it actually stands out to me. But we need to realize that there are millions of people out there whose entire technology experience, day in and day out, is like that. It makes me weep.

Ed recently did an experiment by buying the most popular entry-level laptop off of Amazon. Reading about his experience is… scary.

The picture I am trying to paint is one of terror and abuse. The average person’s experience of using a computer starts with aggressive interference delivered in a shoddy, sludge-like frame, and as the wider internet opens up to said user, already battered by a horrible user experience, they’re immediately thrown into heavily-algorithmic feeds each built to con them, feeding whatever holds their attention and chucking ads in as best they can.

I occasionally get a whiff of this when I use a computer provided by one of my clients while I am not on my own network, or when I see a friend or acquaintance’s computer and shudder.

To be fair, this is not new; 20 years ago people got their web browsers clogged up with all those plugin toolbars.

By all accounts the experience has only gotten worse, if different.

And this is before we get the onslaught of AI. The Dead Internet can’t be far off.

So no. AI isn’t intelligent; it is a giant con by the tech industry to push a marginally useful tool and abuse it for their own gain. That glorious future where the AI assistant is going to do all your work for you? It will never arrive. But what has already arrived is a shittier internet and computer experience for the sake of some numbers going up. At least Nvidia is making a nice profit.

The only people who will benefit from AI are the Mark Zuckerbergs and Elon Musks of the world. They will use the threat of AI taking your job to keep suppressing wages and to have people work longer hours to “keep up” with the AI.

Welcome to the Brave New World.