Superintelligence
Paths, Dangers, Strategies
by Nick Bostrom
Published by Oxford University Press www.oup.com

“I for one welcome our new computer overlords” joked Ken Jennings, runner-up to IBM’s Jeopardy-winning supercomputer ‘Watson’. Stephen Hawking, Bill Gates, Elon Musk and other influential folk aren’t laughing. Artificial Intelligence research may, they warn, land us with genuine computer overlords sometime in the next few decades as superintelligent machines soar way past human capabilities in planning, organisation, persuasion and research. Superintelligent machines that may be none too happy toiling for their unreliable monkey progenitors. A hysterical Chicken Little fear generated by binge-watching Terminator movies, surely? We know so little about how our minds work; how can we hope to make an artificial one? On one hand, the current best AI seems the equivalent of a dancing puppy: the trick being not how well it dances but that it dances at all. On the other, we already have computers making original discoveries in mathematics and biology, Siri and Cortana on our phones, and Google and Facebook (as well as the ubiquitous military) pouring silly money into AI research. If we aren’t paying attention, Bostrom warns, that dancing puppy could transform into the wolf at our door. Our only advantage is that we get to make the first move, so it had better be a good one.

Bostrom outlines several different paths to superintelligence – group minds, uploaded human minds and from-scratch AI (commenting that we are already a superintelligent group mind by the standards of our Pleistocene ancestors). Mostly eschewing currently unguessable technological details for broad outlines, he presents a selection of estimates for when, how and under what circumstances we might find ourselves facing intelligent machines with IQs in the thousands or millions. Things may go very badly for the monkeys – everything from being accidentally exterminated as a byproduct of maximised paperclip production to dystopian tyrannies so remorselessly grim as to make 1984 look like the Teletubbies.

Even ‘friendly’ AI could prove unintentionally deadly. Bostrom quotes Bertrand Russell’s “everything is vague to a degree you don’t realise until you try to make it precise” to illustrate a problem oddly familiar from myths of trickster genies and Mephistophelean deals (SMBC cartoon: Devil: “Whenever you need something, just reach into this bag and the money will be there.” Man: “Great, I’ll start with new shoes. Hey! There’s no money in here.” Devil: “But do you really NEED new shoes?”). Humans are, after all, “foolish, ignorant and narrow-minded”, says Bostrom, so how can we be sure even the most benign of requests won’t turn around and bite us in the bum?

‘Superintelligence’ is the sort of book that people who aren’t in Mensa think Mensa members read (em… correctly). It’s written in what probably passes for a ‘populist’ style among Oxford philosophers with a background in neuroscience and mathematical logic. The book is thorough, well-argued and exhaustive, but it flirts heavily with academic dryness in places and you won’t find a lot of laughs here. There’s one computer-generated joke (stand-up is safe for now) and there were several words I had to look up (bradytelic, propaedeutic, doxastic, irenic, akratic – I’m thinking of having that one put on a T-shirt). Occasionally Bostrom overdoes the baroque linguistics, though – there’s no excuse for the double negative of “not-disallowed”.

Illuminating as Bostrom is on the varied methodologies and myriad risks, I was left with a niggling doubt. A lot of time is spent on the ‘control problem’ – how to shackle the vastly smarter and more capable AI Genie and get it to do right by us – but little time on how we can do right by AI. Talk of ‘sandboxing’ and ‘reward systems’ smacks somewhat of plantation owners discussing the best way to manage a shipment of slaves. And if we set out to make slaves, what can we expect but revolt?