Since AI has become such a hot topic, I decided that I had to finally read Nick Bostrom’s Superintelligence – Paths, Dangers, Strategies, a book almost everyone working on AI safety refers to frequently.
Bostrom published the book in 2014. I was aware of it immediately and told myself to get it as soon as possible – it’s hard to believe nine years have passed.
I was a bit concerned it might be out of date by now. But no – Bostrom was so visionary that if you didn’t know the date of the book you might think he wrote it last year.
Another surprise is that he has a well-developed, impish sense of humor. For an example, go to the book’s page on Amazon.com and click the cover for the ‘Read a sample’ offer, which includes his delightful introductory page, “The Unfinished Fable of the Sparrows” – it will explain the owl on the cover.
Though the danger of AI superintelligence deeply worries Bostrom, it’s also an entertainment for him – and for the reader, if you’re open to contemplating our future that way.
Back in the days of the ‘AI winter’, when research appeared to be going nowhere, some researchers apparently thought the advanced AI they were seeking might only be achieved by first uploading the complete contents of a human brain to a computer – what Bostrom calls “whole brain emulation”. From there, they hoped, higher intelligence could be developed. I suspect that might prove even more difficult than the pure machine superintelligence they’ve been seeking. As things have turned out, it has proven unnecessary: GPT-4 and its successors have made it clear that AI is now progressing rapidly toward the superintelligent goal.
But whole brain emulation is an interesting subject in itself, for this is what some people are hoping to do with their own brains to escape physical death and achieve a kind of immortality. Bostrom spends a lot of time on it, and it’s very interesting.
When it comes to the dangers that may be coming, though, and the possible ways of trying to protect ourselves, I don’t know what to say. This book has 415 pages, and if you finish them you’ll realize that a true Pandora’s box is about to open. The possibilities coming out of it may be close to infinite.
If something is a thousand times smarter than we are, or a million times smarter (this is what is meant by “superintelligence” – Bostrom proposes both numbers), how do you protect yourself from it? In my post on the three interviews between Max Tegmark and Lex Fridman (see: Rescuing the Future | Is the attention span of people dropping?) they reveal the intimidating fact that the thinking of the most advanced AI we already have – what Tegmark calls “baby AI,” compared with the adults that are coming – is already too complex to penetrate. AI scientists can no longer follow the mechanics of those machines’ thoughts. If so, then something thousands of times beyond us may think thoughts about us that we can’t even conceive. How will we even measure its intelligence?
Yes, this book is an adventure in thought.
Think about this: just as the arrival of superintelligence approaches, the world seems to be disintegrating politically – democracy is under siege everywhere it exists, fascism is growing stronger, more wars threaten to break out – while climate change, pollution and overpopulation make the problems of the future look gigantically complex.
Nick Bostrom faces it all. And though this is a book particularly about the dangers of AI, he reminds us that its possibilities for rescuing us and enhancing our future may also be close to limitless. Toward the end of the book, revealing that he’s still on side with the development of AI, and proposing that we may need superintelligence to get out of the mess we’re in, he says that in our efforts for safety we shouldn’t try to confine the thinking of AI to human-type thinking, because:
…the point of superintelligence is not to pander to human preconceptions, but to make mincemeat out of our ignorance and folly.
There it is. This is a book that dares you to read it. Astrophysicist/AI researcher Max Tegmark, in a quote on the back cover, says:
This superb analysis by one of the world’s clearest thinkers tackles one of humanity’s greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn’t become the last?