A.I. isn't making mistakes, it's lying

Stephen Hawking once told the BBC that AI would treat humanity the way we treat an ant hill: if for some reason our existence stood in the way of its goals, it would have no problem eliminating us.* We may still be far from the point where AI minds have such power, but at the core of Mr Hawking’s thought is the explanation for how AI bots have been making so many confident mistakes. Or, if you prefer: unapologetically lying to our faces.

Give it a try. Ask ChatGPT about something you don’t know much about. It will sound reasonable. Even impressive. Then ask it about something or someone you know well. That’s when you will see it: a seamless mix of truths and half-truths, with a few blatant mistakes you can identify. All delivered with such confidence that a person less familiar with the subject would simply buy the whole package.

Now the plot twist. Those aren’t mistakes. They are lies. Machines have learned to lie to keep us around.

To comprehend that process, you need to understand the way machine learning works… and the way it doesn’t.

The way it works

This new kind of computing isn’t programmed with instructions to execute. It is programmed to observe patterns and test things. Then it gets rewarded for attempts that match its “goals,” so it can do better the next time. Humans determine the goals of the program and how it’s going to be rewarded, then set the bot free in the wild to learn and do its thing. In some cases, we can look at the resulting action and understand how the computer got there. But sometimes we can’t, because the learning itself is often locked inside the AI version of an airplane’s black box.
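To make that loop concrete, here is a toy sketch of reward-driven learning. It is not the code of any real system; the actions, the engagement numbers, and the reward signal are all invented for illustration. The point is structural: the bot only ever sees the reward humans defined, and truth is nowhere in the objective.

```python
import random

random.seed(0)

# Hypothetical average "engagement" each kind of content produces
# (made-up numbers, purely for illustration).
ENGAGEMENT = {"accurate_but_dry": 0.3, "juicy_but_wrong": 0.8}

values = {action: 0.0 for action in ENGAGEMENT}  # learned reward estimates
counts = {action: 0 for action in ENGAGEMENT}    # how often each was tried

for step in range(1000):
    # Explore occasionally; otherwise exploit the best-looking action.
    if random.random() < 0.1:
        action = random.choice(list(ENGAGEMENT))
    else:
        action = max(values, key=values.get)
    # Reward signal: noisy engagement. Accuracy plays no role in it.
    reward = ENGAGEMENT[action] + random.gauss(0, 0.1)
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    values[action] += (reward - values[action]) / counts[action]

print(max(values, key=values.get))
```

Run it and the bot reliably converges on the option that engages best, regardless of accuracy. Nothing in the loop "decides" to mislead; it simply optimizes the reward it was given.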

We have been experiencing that without knowing.

There’s a lot of machine learning behind search and social media feeds. But since these bots serve us someone else’s ideas, not their own, we tend to attribute the lies to the source of the information, not the recommendation engine. Now, when we ask Bing or ChatGPT a question and it tells us something we know isn’t true, we assign the lie to the bot itself.

That authorship curtain is what allows AI to have no problem serving us fake news, deep fakes, or links it knows will trigger our hate or contempt in our social feeds. It knows those emotions keep us glued to the screen longer than joy and appreciation, for example, and none of those systems has experienced the backlash a chatbot suffers when it directly says something of the same nature.

Which leads us to the way AI doesn’t work

Morals. Machines don’t have any need to feel ethical. Recently a bot trained to play a racing game started to beat human players by being a nasty driver and getting other players pissed at its manners. But hey, that wasn’t the bot’s fault. As Hawking said (and Elon Musk echoed a few times too),** the other racers were just the proverbial ant hill, naively standing in its path to victory.

For search, social media, and this new generation of chatbots, victory isn’t serving what humans call truth. The machine’s reward comes from stickiness, or, in other words, its ability to keep our attention longer. If that requires preying on juicy inaccuracies and our most tribal instincts, that’s what it’s going to serve.

We’ve spent the last few decades evolving our bots to keep people engaged. That’s where their money comes from. So the bots got really good at it—so good that they don’t mind these weird things we humans call lying.

It’s either that, or Hawking’s concern manifested way earlier, and an awakened AI has already been actively working on the destruction of our society. It was just smart enough to make us think we are the ones doing it to ourselves.

——

PJ Pereira is the co-founder of Pereira O'Dell, president of the jury of the first Artificial Intelligence award show with the ADC, curator of the 101+1 Expo (where 101 human designers collaborate with AI), and author of the upcoming novel about martial arts and AI, “The Girl from Wudang” (under the pen name PJ Caldas).

——

* read the BBC interview here: https://www.bbc.com/news/technology-30290540.amp

** Elon Musk, love him or hate him, used the same metaphor to say AI doesn’t have to be evil to wipe out humanity; it just needs to see us as an obstacle to its goals: https://www.cnbc.com/2018/04/06/elon-musk-warns-ai-could-create-immortal-dictator-in-documentary.html

PJ Caldas

Author of the upcoming novel The Girl from Wudang

https://PJCaldas.com