Believe it or not, I started this post in 2017 – I put it off for 4 years because I knew that many people don’t want to think about the future.

But anyone who has read my novel Skol knows that I look forward to the entry of superintelligent robots and computers into our world. One reason why I keep myself as fit as possible (I’m 75 now) is that I still hope to one day meet and talk to a machine that is more intelligent than I am – and I mean much more intelligent.

Yet whether you listen to the mainstream media or just talk to people you know, there is a strange, comfortable belief among most people that machines will never be able to attain human-level intelligence, so there’s nothing to worry about. Machines will never be conscious, never be able to conduct scientific research, never be able to compose music, never be able to practice law, never be able to make love, etc.

AI sceptics like to focus on the weaknesses of artificial intelligence. They love to show how computers are easily tripped up by language oddities, and so on. They say nothing about the fact that Nature took hundreds of millions of years to develop our brains, while AI scientists have only been at it for a few decades. AI/robot brains are evolving thousands of times faster than the human brain did. Look at any of today’s language translation programs (Google’s free translator is one of the best) and you will see that the subtleties of language are no longer a problem for them at all.

Whenever machines are developed to deal with a specific task, they quickly surpass us. For example, the little pocket calculators we began using in offices in the 1970s were already far faster at arithmetic than any human.

Chess-playing computers passed me (in high school I was an active player) around 1985. IBM’s computer Deep Blue defeated world champion Garry Kasparov in 1997. Today you can buy software online for your laptop that can defeat Deep Blue easily.

Meanwhile – via the steam engine, combustion engine, nuclear-powered submarines, etc – machines have also surpassed us in strength and endurance.

Why most people can’t see that machines will, sooner or later, almost certainly exceed us in everything is beyond me.

A lot of attention is given to the difficulty AI scientists have had in duplicating human intelligence. But there is no need to duplicate the human brain in order to surpass it in performance. There is a school of thought within the AI community that the long pursuit of human/biological-style intelligence is a side path that will prove to be a dead end.

The goal of AI research has never been the achievement of human-level intelligence. What do we need another human-level brain for when there are almost 8 billion in the world already? No, the goal has always been superintelligence.

When machines achieve human-level intelligence, rather than human-like intelligence, they will not stop there, but will rocket past us. The period in which humans and the best AI will be equal will probably be very short.

One reason why I’m dismayed by the reluctance of most people to think about this is that superintelligence is going to come with incalculable problems and dangers.

In some future posts I’m going to discuss this coming robotic/AI world. You’ll see that the deeper you look into it the more interesting it gets. But it is coming, whether we like it or not.

4 thoughts on “Rescuing the Future | Artificial Intelligence | Is Superintelligence coming or not?”

  1. Excellent post. Are you familiar with the term “the Singularity”? This is the point at which AI becomes equal to human intelligence, including consciousness, emotions, etc. At that level I find AI a bit frightening, but still manageable. However, I find the thought of AI superintelligence deeply concerning. Imagine the power of an intelligence millions of times stronger than our own. We could pose a threat to them through our consumption of energy (electricity), which would be essential to them. We will be lucky if they don’t eradicate us and instead simply keep us as pets.
    As a side note, not many people are aware that China is making enormous progress in the field of AI. It is worrisome that an undemocratic regime might have such massive power at its disposal.

    1. Singularity – oh, yes – in fact I once did a post on Ray Kurzweil’s book ‘The Singularity Is Near’ [and can’t find it now!] – one thing that makes it a bit uncanny is that we won’t necessarily know when it happens. There has been speculation that it might arise on the web – search engines are peculiar beasts – and China? I think this race among nations to gain superintelligence is very dangerous – another proof [as if we needed one] that we need some form of world gov’t

  2. I’m glad you make the distinction between intelligence and human intelligence. I doubt that we’ll ever be able to develop machines that fully comprehend all the nuances of human languages, simply because it’s unlikely we humans will ever fully understand them ourselves. On the other hand, as we better understand the process of learning and pass that understanding along to artificial intelligence, I can envision a future where it is capable of learning things we don’t necessarily understand. Personally, I hope we learn how to develop a form of artificial intelligence that does not imitate human intelligence, and that we develop ways to manage how that intelligence develops. My fear is that, as so often happens with human nature, we won’t understand the consequences of our own actions until it’s too late – the mythical Pandora’s box. Global warming might yet prove to be such an event. I hope artificial intelligence doesn’t become another.
