Reading John Brockman’s 2015 anthology, What to Think About Machines That Think, I was struck by this: The people who think computers will never get beyond human intelligence are almost never involved in AI research or engineering.

For example, here is Rolf Dobelli, author, entrepreneur, and founder of the World Minds Foundation, who reassures us that independent super-intelligent machines are not even on the horizon – that we won’t see them in the next thousand years. He starts with this:

Conceptually, autonomous or artificial intelligence systems can develop in two ways: either as an extension of human thinking or as radical new thinking. Call the first “Humanoid Thinking,” or Humanoid AI, and the second “Alien Thinking,” or Alien AI.

He points out that almost all AI today is ‘humanoid’, just an accessory to our own thinking – phones, computers, etc. He does accept that these might become very intelligent – “AI agents might serve as virtual insurance sellers, doctors, psychotherapists,” but:

…such AI agents will be our slaves, with no self-concept of their own. They’ll happily perform the functions we set them up to do. If screw-ups happen, they’ll be our screw-ups… Yes, Humanoid AIs might surprise us once in a while with novel solutions… But in most cases novel solutions are the last thing we want from AI… Humanoid AI solutions will always fit a narrow domain…

Now, there is a lot here to dispute. To start with, many AI researchers get their backs up at talk of robot/AI ‘slaves’. Their goal is super-intelligence that can get outside the box of human thought. They do want creative autonomous thinking.

Another problem is that Dobelli is mixing up the AI we have today with that of the future. He doesn’t realize that something we haven’t yet seen is just over the horizon. He thinks big change is too far away to worry about. But he does admit that he thinks about it:

Alien Thinking is radically different. Alien Thinking could conceivably become a danger to Humanoid Thinking: it could take over the planet, outsmart us, outrun us, enslave us… What sort of thinking will Alien Thinking be? By definition, we can’t tell. It will encompass functionality we cannot remotely understand… All we can say is that humans cannot construct truly Alien thinking. Whatever we create will reflect our goals and values, so it won’t stray far from human thinking.

This idea that robots and AI will either be our slaves, or we will be their slaves – that this is all that’s coming in the future, is a remarkably narrow view.

Why are we so put off by the thought of machines that are smarter than us? For thousands of years people worshipped gods who were smarter than us. Some still do. We used to want something superior to us, that could correct us. I’m reminded of French philosopher Simone Weil, who, in the midst of World War II when she herself would soon die of starvation, said “If there is no God, there is no hope.”

I look forward to superintelligent machines because they offer hope.

What Dobelli also doesn’t seem to understand is that the thinking of computers is already profoundly different from ours. In the second game of the famous chess match between Garry Kasparov and the computer Deep Blue, the computer made a move that made the audience gasp because it looked like the mistake of a simpleton. In his book Deep Thinking, Kasparov tells how he was fascinated by that move; he studied it and realized that it was the result of the machine seeing far, far ahead, beyond what any human could do. It was a good move, he said: had he responded in the predictable way, it could have won the game. Precisely because of its purely linear thinking, which is routinely seen as a fault of computers, Kasparov says, Deep Blue’s move should be seen as creative, since no human could have conceived it.

Meanwhile, Dobelli seems to think that any thinking that gets too far outside the usual human box has no place in human affairs. I suspect that he would disapprove of writing like this post, created by someone on the spectrum. I suspect that the thinking of autistic people feels alien to him.

He seems to overlook that there have always been people who think very differently. Newton and Einstein, for example, are both often named among historical figures who may have fit the profile of Asperger syndrome. Yet both of them would surely have resented the idea that their radical thinking was ‘alien’.

So I don’t question that we must be wary of AI. But given how poorly we’re currently doing at solving the problems we ourselves have created – climate change, pollution, overpopulation, species extinction, chronic war, with thousands of nuclear warheads waiting in the wings – I think we need something more intelligent than us to save us from ourselves.

Calling these superintelligent beings who are on their way to join us ‘alien’ before we’ve even met them doesn’t seem like a good idea to me.