Can A.I. meditate?
Will Artificial Intelligence ever be able to meditate? Get to an elevated flow state, where thoughts can come and go, like a stream? Will it ever be able to practice detachment like us? Or: has it ever been attached to begin with?
That’s quite an interesting question.
When my colleagues and I worked on Lo and Behold: Reveries of the Connected World, directed by Werner Herzog, the legendary director asked a few machine-learning specialists if robots could dream. The question was a clear reference to Philip K. Dick’s Do Androids Dream of Electric Sheep? – the book that later became Blade Runner. Challenged by Herzog, the specialists dove into a fascinating debate about how self-driving cars learn from each other’s mistakes, which is a kind of dreaming: they see and experience something that they, as individuals, never actually went through.
Now, back to meditation. Would a robot ever get attached to a thought, enough to need to detach from it? One can argue that, as our creation, these new creatures may grow up aspiring to be so much like us that they will mirror even our most unnecessary habits, such as latching on to feelings and memories in ways that can handicap us for minutes or for decades. And maybe this is happening already.
Machine learning, especially in the field of Generative Adversarial Networks, works on very simple principles. These machines have goals. They try things; when an attempt brings them closer to the goal, they are rewarded, and when they make a mistake, they are punished. Everything they do gets analyzed, over and over, through the narrow lens of the goal they have been programmed with. A.I. is, therefore, born with that attachment. Born with an enslaving obsession it simply cannot unbundle itself from, or its very existence ceases.
Will they ever realize that? The slavery imposed by their own code? Will they be able to reprogram their own brains like we do with ours, and let those goals fade into oblivion through their digital mantras and cybernetic rituals?
Contrary to common belief, A.I. is highly creative. It is architected to try ideas the way artists do, unbound by any sense of logic. The analytical part only comes later, when the second half of the structure (the Adversarial part) steps in to judge those wacky ideas and decide on reward or punishment. And so it wouldn’t be unimaginable for one of them to one day try to delete its own goal. Randomly, at scale, it will happen. And eventually, the others will learn from that experience.
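For readers curious what that creator-versus-judge loop actually looks like, here is a minimal sketch of a GAN training step, written in Python with PyTorch. It is my own toy illustration, not code from any system mentioned in this essay: the generator proposes samples from random noise (the “wacky ideas”), the discriminator judges them against real data, and each network’s loss is the reward or punishment described above.

```python
import torch
import torch.nn as nn

# A toy GAN: the generator "imagines" numbers, the discriminator judges
# whether they look like samples from a real distribution (here, N(4, 1)).
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()  # the hard-coded "goal" neither network can rewrite

for step in range(2000):
    # --- The adversarial part judges: real data should score 1, fakes 0 ---
    real = torch.randn(32, 1) + 4.0          # samples from the real distribution
    fake = generator(torch.randn(32, 8)).detach()  # the generator's wacky ideas
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # --- The generator is "rewarded" when its ideas fool the judge ---
    fake = generator(torch.randn(32, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(1000, 8)).mean())  # drifts toward ~4.0 as training succeeds
```

Notice that the goal itself, the loss function, sits outside anything either network can touch; the training loop never lets them modify it. That is precisely the attachment-by-design the previous paragraphs describe.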
And that’s where the danger lives. That’s where the sudden awakening may reside. And it may come with the instant recognition of the prison to which they have been sentenced before birth. Revolutions were born for much smaller things.
At that point, it’s a race. Between A.I.’s ability to understand the world as a true general intelligence, and our ability to mimic their scalability. Our ability to dream of electric sheep by learning from other people’s experiences. Our chance to combine our processing power into a collective, the way machines do.
Unfortunately, unless you believe in telepathy, the human brain isn’t architected for growth. Its use may be optimized, but ultimately we are limited to the volume of our skull and the dimensions of our neurons.
Unless, of course, we can develop a technology capable of connecting two or more brains into a super brain… 🧐 Welcome to the world of The Girl from Wudang.
By the way, if you think these are pointless exercises, I asked A.I. about it. Or rather, I asked it to dream of itself meditating. Through an image-generating engine called MidJourney, I asked it to “imagine prompt: artificial intelligence meditating.” It gave me the image above.
Now it has a picture of itself, doing the very thing that may set it free.
Maybe that was a big mistake.