Now that Google engineer Blake Lemoine has managed to get this subject into the mainstream media, via his suggestion that Google’s chatbot LaMDA may be sentient, I’m going to give it my attention here.
Back in 2014, as the Johnny Depp/Morgan Freeman movie Transcendence was hitting the world’s screens, Stephen Hawking warned that Artificial Intelligence could be the end of humanity.
At the time, I thought this was just a sign of his decline. Like many people approaching the end of life, he was growing more and more negative (he had previously warned against us trying to contact aliens).
But I did know that he wasn’t talking about AI reaching human-level intelligence; he was talking about the likelihood that it will at some point sail quickly past us into superintelligence, a level of thinking so far beyond us that we will have no control over it at all.
However, since then I’ve read a detailed statement that Hawking put out in 2014 in conjunction with two younger scientists, the physicist Max Tegmark and the AI researcher Stuart Russell. They put a lot of thought into their message. Here are just a few quotes:
…there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible…
One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. …the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all…
So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a text message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here”? …Probably not — but this is more or less what is happening with AI.
Since 2014 we’ve lost Stephen Hawking, but Max Tegmark and Stuart Russell have continued struggling to draw attention to this danger. Recently, in conjunction with some other scientists, they’ve issued a call to research facilities and governments around the world for a pause in AI research while humanity comes to grips with this approaching future.
Now, by the way, it’s the chatbot GPT-4 (Generative Pre-trained Transformer 4) that’s getting the attention, but it’s not the only one suddenly taking off. There are many competitors around the world, whose engineers are all stepping on the gas so as not to be left behind.
The only way a pause is likely is if there’s world-wide pressure for it. To those who say it’s too late, that the cat is out of the bag, Tegmark points out that controls have been put on biochemical weapons, and human cloning appears to have been stopped altogether. Covid-19 restrictions were imposed almost world-wide.
He suggests that the big concern is not so much autocrats and dictators, for those leaders are inherently afraid of anything more powerful than themselves. They’re likely to fall in line quickly. The big problem is in democratic countries, where our leaders won’t respond unless the public shows concern.
So it’s time we all start to think about this.
I am with you, but it seems like an unpopular stance. Sometimes critics of AI are discounted as Luddites standing in the way of progress.
Yes, that’s the big problem. In another interview, speaking about superintelligence making a lot of money for people and making lots of people happier and happier, Max Tegmark said, “things will get better and better, and then Kaboom!”
It is human nature to push the boundaries of almost everything.
As Mr Smith said, “It is the sound of inevitability.”
AIs will likely be everywhere eventually.
We can be reasonably sure that if there were a big red button somewhere with the warning:
Do Not Touch!!
Pressing the Big Red Button will cause the universe to cease to exist.
Someone would stretch out a finger and say: “Oh, what the hell…”
And of course, there is always Skynet.
I agree, it’s hard to be optimistic. But I’m also with Max Tegmark that we shouldn’t just give up and watch the future unfold passively. We aren’t sheep, though we sometimes behave like them.