Human Compatible

Human Compatible
AI and the Problem of Control
by Stuart Russell
Published by Allen Lane (www.greenpenguin.co.uk)

Let’s be clear: Stuart Russell, computer science professor at UC Berkeley and Oxford, doesn’t think we have to worry about malevolent robot overlords, a sentient Internet or Terminators any time soon. Why waste time worrying about the destination, he asks, when the journey is so full of peril?

We’re already experiencing some of those perils in the unintended consequences of filter algorithms, which have contributed to the growth of political polarisation and tribalism. Next up will probably be how our self-driving cars make their decisions. Or maybe it will be the rules governing the behaviour of autonomous military robots. Or possibly corporate planning algorithms or medical research or, or, or…

These sorts of partially intelligent AI systems will infiltrate more and more of our daily lives in the coming years and decades simply because of their huge potential (an extra $674 trillion per year, roughly a tenfold increase in global GDP, Russell suggests). AI systems that may not be conscious but are motivated and goal-orientated, with a far greater grasp of situational variables than any human and a vastly greater ability to model, and act on, those variables into the future.

Some have suggested putting the AI in a box, or just turning off the power if we start having problems. Russell isn’t overly optimistic here, pointing out that a truly intelligent machine will be able to find workarounds for anything our monkey brains can come up with. After all, we can’t build a firewall that keeps other humans out, let alone an AI. He’s not even on board with the solution of merging with AI by directly connecting our brains to their silicon. “If humans need brain surgery merely to survive the threat posed by our own technology, perhaps we’ve made a mistake somewhere,” he suggests.

Russell’s solution is to make sure the AI’s goals and ours are aligned before flipping the on switch. You don’t want your self-driving car mowing down pedestrians to get to your destination, nor your smart kitchen eyeing up the caloric content of the cat (the shop is 20 minutes away; the cat is right there). However, as Russell’s version of AI becomes more powerful, it starts to sound like a really dumb genie: huge potential benefits along with lots of opportunities to completely fork things up. Ask an AI to make you happy and you may end up on a permanent heroin drip. How can an AI know what we really (really, really) want when we hardly ever do? What we want at 15 is rarely what we want at 50, and often what we want may not be what we want to want, or indeed, how we want it.

All, however, is not lost. Russell has followed Asimov’s lead by coming up with three, well, not laws as such. They’re principles, suggestions, directions of thought that AI researchers should explore when building AI. They mostly centre on the AI being altruistic, humble and learning about human wants and needs from observation (a toy sketch of what that humility buys appears below). If that last one is raising some red flags for you, you’re not alone, and Russell explores a bunch of the caveats and provisos implied. “We will need to add ideas from psychology, economics, political theory, and moral philosophy”, he says. Honestly, it seems like a lot to squeeze into an AI.

The big problem though, and one Russell only lightly touches on (perhaps because it’s more political than technological), is making sure everybody sings from the same hymnbook. Asimov’s three laws of robotics were universal because in those stories there was only one company making robots. It’s unlikely that there will be only one company or country making superintelligent AI, so ensuring a universal set of ideals is installed in each from the get-go will be a challenge.
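To make the humility point concrete, here’s a minimal sketch in the spirit of the “off-switch” argument Russell discusses: a machine that is genuinely uncertain about what we want has a built-in reason to defer to us rather than act unilaterally. Everything specific below (the Gaussian prior, the payoff numbers, the Monte Carlo setup) is my own toy assumption for illustration, not Russell’s model:

```python
import random

random.seed(0)

# The robot proposes an action whose true payoff u to the human is
# unknown to the robot. All it has is a prior over u; a standard
# Gaussian here, chosen purely for illustration.
prior = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def mean(xs):
    return sum(xs) / len(xs)

# Option 1: act unilaterally. The robot collects u, whatever it is.
act = mean(prior)

# Option 2: switch itself off. Payoff 0 by definition.
switch_off = 0.0

# Option 3: defer. Propose the action and let the human veto it.
# A human who knows u only permits the action when u > 0, so the
# robot's expected payoff is E[max(u, 0)].
defer = mean([max(u, 0.0) for u in prior])

print(f"act unilaterally: {act:+.3f}")         # ~ +0.000
print(f"switch off:       {switch_off:+.3f}")  #   +0.000
print(f"defer to human:   {defer:+.3f}")       # ~ +0.399
```

Deferring wins whenever the prior puts weight on both signs of u, since E[max(u, 0)] ≥ max(E[u], 0), with equality only when the robot is already certain. Which is exactly why the uncertainty has to stay in: a machine that thinks it already knows what we want has no incentive to let us reach for the off switch.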