If you have not heard of ChatGPT, you may as well remain under the rock where you have been living and patiently wait until an AI-powered business out-competes you. Yes, we are in the midst of a peak buzzword and hype cycle, and like other fads, the current buzz may fizzle out.
Regardless of your business, soon an AI-powered company will be in your industry, so at a minimum you need to take a hard look at where AI could benefit your operations and where it may out-compete you.
Customer service, software coding, enterprise automation, and marketing are just a few of the areas where AI has already proven it can deliver efficiencies and improvements to your operations. That is as much as I want to contribute to the hype cycle in this post. But I believe what we are seeing at present is very significant and will improve many products, experiences, and workflows.
What I am taking issue with, and what the headline of my post refers to, is the current media onslaught conflating AI advances with an impending takeover by superhuman machines.
So why has the debate reached such dizzying heights in a very short time?
First of all, let me be clear: the advances have been very impressive and seem to have accelerated. The latest versions of publicly available AI services at times appear ‘human-like’ in their output. No wonder millions of people have gotten carried away.
Let me try to clarify what is going on and attempt to level set a bit on our current state of affairs. There are primarily two aspects I believe are important: 1) Incentives and 2) Anthropomorphism.
1) INCENTIVES: Who has skin in the game?
Microsoft
To me, it appears that there are at least two main reasons why Microsoft poured $10B into another company (OpenAI): 1) Microsoft believes it may finally gain the upper hand on Google in SEARCH, and 2) AI can bring many Office products into the 21st century. Therefore, they have been extremely busy drumming up as much frenzy and PR as possible. Google, on the other hand, has looked like a deer in the headlights and has been scrambling to regain its footing. Its latest attempt was the recent Google I/O conference, where Google appeared a bit more composed.
OpenAI
Well, laying claim to the fastest-growing consumer service ever is an obvious side benefit of having made some of the first widely useful AI-based services. OpenAI has a vested interest in keeping the hype cycle going to rapidly grow the value of its franchise.
AI research communities
If your work depends on grants or university funding, the more media interest there is, the better your chances of securing funding. So, when a journalist or podcaster asks you to talk about the advent of AGI (Artificial GENERAL Intelligence), widely understood to mean a superhuman level of intelligence, you are highly unlikely to turn the offer down or to say that AGI is unrealistic.
Media
The media itself will continue building up stories of all-powerful AI, robots, and machines that become so advanced that we have no way of stopping them. According to this narrative, we humans are just unnecessary, unreliable, obstinate bags of meat that should be disposed of.
Business models
Lastly, and maybe the most important aspect of incentives, and a potentially dangerous misalignment, is the question of how services are monetized. The advertising-sponsored model of social media has been a major culprit in creating incentives to amplify sensationalist and often distressing information, images, videos, and more. The more viral, the more clicks, the more money a platform can garner. Whether the information has been fact-checked or screened for malicious content seems to have taken a back seat in the search for growth and profitability. AI-powered services can assume many different business models; what we must look out for are the negative unintended consequences those business models can lead to.
2) ANTHROPOMORPHISM: Can machines have human qualities?
The second aspect of the debate is anthropomorphism, which in brief means ascribing human-like qualities to machines. There is a terminology around the AI conversation that is important to break down, so stay with me for a few moments.
The first word up is INTELLIGENCE. Machine Learning sounds more mechanical and less interesting, whereas Artificial Intelligence sounds like the next step into something human-like.
Intelligence is a poorly defined term that has been used with a wide range of meanings, not only for humans but also for certain animals deemed intelligent. If you are a dog or cat owner, no doubt you believe your dear pet possesses some level of intelligence. However, when we start with ML and go to AI and then to AGI, it seems like a natural progression that brings us awfully close to achieving general intelligence in machines, when in fact the leap from AI to AGI may be more akin to achieving interstellar travel when you have just invented the car!
As far as I can tell, 1) nobody knows how far away AGI is, and 2) agreeing on what AGI really is also appears to be difficult. As always, such important details do not deter commentators from predicting that AGI is just around the corner.
(If you want a corollary to my comment above, watch how a certain Mr Musk has been proclaiming Level 5 autonomous driving to be just a few quarters away since 2014 … :)
Neural network - Brain conundrum
The core technology in the LLMs behind ChatGPT is based on so-called ‘NEURAL’ networks, which take their name from a simplified picture of how we understand the brain to compute. A neural network is an algorithmic simplification of the brain’s core building block, the neuron.
Connecting many (millions, billions, trillions) of these simplified algorithmic neurons together creates the base of LLMs. In this leap we conveniently forget that the brain is both a chemical and electrical ‘wet biology’ machine. The brain also appears to have both digital and analogue behaviours in how it connects and computes, and some scientists believe it exhibits quantum effects in the way it operates. Lastly, the brain of all mammals is divided in two, for reasons we are only slowly starting to understand.
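To make the simplification concrete, here is a minimal sketch of what one of these algorithmic ‘neurons’ amounts to: a weighted sum of inputs passed through a non-linearity. The inputs, weights, and choice of sigmoid activation are purely illustrative assumptions on my part, not code from any real LLM.

```python
import math

def neuron(inputs, weights, bias):
    # An artificial 'neuron' is just a weighted sum of its inputs,
    # passed through a non-linear activation function.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid activation

# Illustrative numbers only: three inputs with hand-picked weights.
print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```

An LLM wires enormous numbers of these together and tunes the weights during training; the point here is simply how modest the building block is compared to a biological neuron.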
Admittedly, I am not a neuroscientist, but I have written code and simulated algorithms in my past. Some of them were very sophisticated and were used in building digital communications systems and predicting how radio waves propagate in complex environments. I developed the mathematics to simulate these systems. In fact, parts of these algorithms were precursors to neural networks.
When explanations are simplified in the media, it is very easy to take the leap from ‘neural networks’ to saying ‘it is like the brain’. Just because we have a computational architecture that is partially inspired by how the brain works does not mean that it is an exact copy of a brain, let alone a simulation of one. We are very far away from being able to simulate a human brain. And if we could, would we thereby make a conscious machine? Nobody knows.
When LLMs make mistakes, the saying is that they are ‘HALLUCINATING’. You can call it that if you want to anthropomorphise your algorithm, or you can simply say it made a mistake or suffered a fault. That sounds much less interesting.
According to the dictionary, hallucinating means experiencing an apparent sensory perception of something that is not actually present. Does the algorithm possess sensory perception? I don’t think so. It simply makes a mistake in its prediction of the next word or sentence.
Will consciousness arise?
There are plenty of discussions going on as to whether LLMs or GenAI are truly sentient or conscious. Part of the argument is built on the idea of neural nets. The argument, which refers to complexity theory, is that if you just make them sufficiently complex, then, voilà, a sense of being in the machine, or something like consciousness, will arise. Judging from the podcasts I have listened to and from what I have read, these conversations are happening in all seriousness.
Now, if somebody solves the riddle of the emergence of consciousness, they will be the most celebrated scientist(s) in human history. Could consciousness emerge by accident, without preplanning? Why not? When science advances, we tend to discover new phenomena that need to be explained. However, I have a very hard time believing that consciousness will suddenly arise out of the algorithms in our current LLMs just because you add connections. I am not questioning the advances made within AI in recent years, but I do not see any explanation for why consciousness should emerge from them, especially since nobody has yet been able to explain why it emerges at all.
The imitation game
When ascribing intelligence to ChatGPT (GPT-4) and LLMs, the ‘Turing Test’ has often been quoted as a reason why GPT-4 should be called intelligent. The test was devised by Alan Turing in 1950, and it is a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing originally called it the ‘imitation game’. In computer science circles it is often used as a measure of intelligence. Now, the important words here are ‘exhibit intelligent behaviour’, which gets equated with ‘being intelligent’. They are two different concepts, and the misconception comes back to the question of anthropomorphism.
At their core, LLMs are designed to predict, on a statistical basis, the next word in a sentence, or the next pixel in an image. The latest versions of Midjourney and OpenAI’s GPT do so with astonishing results. They may pass the Turing test; however, that does not make them intelligent in my view. It makes them astonishing prediction machines that ‘exhibit intelligent behaviour’.
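To illustrate what ‘predicting the next word on a statistical basis’ means, here is a toy sketch: a bigram model that counts which word most often follows which in a tiny made-up corpus, then ‘predicts’ by picking the most frequent continuation. Real LLMs use vast transformer networks trained on trillions of tokens, so this is only the core idea in miniature; the corpus and function names are my own illustrative assumptions.

```python
from collections import Counter, defaultdict

# Toy corpus; real LLMs train on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Pick the statistically most likely next word; no understanding involved.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat', the most frequent word after 'the'
```

Nothing in this loop ‘knows’ what a cat is; it only tallies frequencies. Scale that principle up enormously and you get output that exhibits intelligent behaviour without the machine being intelligent.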
AI Everywhere era
As we are hurtling into the new world of AI everywhere, I hope that my clarifications around incentives and anthropomorphism will help you think critically about what you come across.
AI can be a step-change advance for much of what we do as humans, but as always, we need to carefully work through how we avoid harm, how we keep our children safe, how we keep our critical infrastructure and life-supporting devices safe, and how we create new and better jobs for everyone.
For more articles like this, follow Bo on Medium where he regularly publishes his thoughts.