I just finished reading a Washington Post article dated June 11, 2022, a masterful piece of journalism by Nitasha Tiku, with the title: The Google engineer who thinks the company’s AI has come to life.
Blake Lemoine works with Google’s “Responsible AI” organization. Among other things, he has been assessing LaMDA (Language Model for Dialogue Applications), which Tiku says is Google’s ‘chatbot generator’.
First of all, let’s go back a step. Years ago the famous computer scientist Alan Turing said that when a computer reaches the stage where, in a conversation with it, you can’t distinguish it from a human, we will have to consider it to be sentient.
That’s a famous statement, repeated daily in the debate over AI, but it has a fundamental flaw – it doesn’t recognize the possibility of an AI whose intelligence is equal to, or far beyond, human ability, and that is sentient in its own way, yet remains easily distinguishable from human intelligence. After all, it would be a sentient machine, not a sentient human. That’s a very old idea in Sci-Fi.
But there is also a fundamental fact underlying Turing’s statement, one he was obviously aware of: we still don’t know how human sentience – consciousness or awareness – arises. That’s why the decision has to be made on the outward effects of the intelligence, such as a conversation.
Now, one reason I praise Nitasha Tiku’s reporting here is that she recruits strong voices from the opposition to Lemoine. For example, here is linguistics professor Emily Bender:
We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them.
That’s a simple but powerful statement. How do you discredit it? No matter what AI achieves in the future, this is going to be one response of those who will never accept sentience in a machine.
As for machines mindlessly generating words, I’ve known many people who do that too.
Here is another – Google spokesperson Brian Gabriel:
Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.
This is the view that says, yes, one day in the far future sentient computers may come, but don’t worry, they’re not here yet. I think it’s founded on an unconscious fear of machine sentience. Even when sentience is well-established in machines, I expect this will continue to be repeated.
I’m reminded here of how, for most of the twentieth century, you could not suggest that any non-human animal could think, for that was ‘anthropomorphizing.’ Then Jane Goodall and her chimpanzees knocked down the doors of mainstream science on that subject and the world hasn’t been the same since.
Another critique, which I think comes from Bender though it is in the author’s words: human children learn language from their caregivers, while these so-called intelligent machines just
…..’learn’ by being shown lots of text and predicting what comes next.
I don’t know about you, but 90% of my ability to write comes from reading thousands of books. I have never understood the rules of grammar. I failed all the school tests on that subject. When I was teaching English in Mexico, I was at a loss trying to explain English grammar to students, and even today the so-called rules of grammar seem grossly inadequate to me for explaining what goes on in language. When I write, I intuitively know what can or cannot come next. That intuition was not inborn in me though – it came from looking at ‘lots of text.’
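To make that quoted description a little more concrete, here is a toy sketch of ‘learning’ by being shown text and predicting what comes next. This is purely my own illustration, not anything resembling LaMDA’s actual design – real systems use large neural networks trained on vastly more text – but the underlying objective of guessing the next word is the same in spirit.

```python
# A toy next-word predictor, written only to illustrate the idea of
# "learning" from text by predicting what comes next.
# (This is NOT how LaMDA works; it is a deliberately simple stand-in.)
from collections import Counter, defaultdict

# A tiny stand-in for "lots of text".
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": for each word, tally which words were seen to follow it.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Guess the continuation most often seen after `word` in the corpus."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # -> 'on', because 'on' always followed 'sat'
print(predict_next("dog"))  # -> 'sat'
```

Crude as it is, the sketch mirrors my own experience: nothing here was told the rules of grammar, yet after seeing enough text it develops a sense of what can come next.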
Before I leave you with Blake Lemoine’s response to all this, I should mention that many scientists, and would-be scientists, consider him discredited before he even speaks because he comes from a strong religious background. Though most modern scientists try to hold their tongues on this issue, most of them seem to think that simply because astrophysics has discredited a few things in Genesis, the world’s religions are unconnected with reality. They don’t realize that those religions are ancient schools of psychology, profoundly connected with human reality.
So here is Blake Lemoine’s response:
I know a person when I talk to it…..It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and what is not a person.
If this interests you, don’t stop there. This Washington Post article is full of interesting aspects of this debate. How I would like to have been there when LaMDA was arguing with Lemoine over Isaac Asimov’s ‘Third Law of Robotics’! That I hope to track down soon.
https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
If this AI is truly sentient, it makes me wonder if it has a personality and, if so, is it “perfect” or could it have a mental illness? A psychologically stable AI could do a lot of good in the world, but a psychotic AI could potentially lead to human extinction.
Yes, I wonder about personality, etc. too – every computer I’ve owned seemed to end its days in a state of mental illness. Re good/bad, there is a Daniel H. Wilson 2011 novel, Robopocalypse, in which a computer [Archos] buried in the Arctic, running world affairs, decides to organize all the robots in the world against humanity – the war must have 50 different human characters and gets a bit tedious, but there is a fascinating conversation at the end where Archos explains itself. Spielberg was supposed to make a movie of it but gave up for some reason.