by Max Tegmark
Published by Allen Lane www.penguinrandomhouse.co.uk
‘Life 3.0’ by physicist Max Tegmark is an intriguing little book nestled snugly in the embrace of a rather longer and somewhat less interesting work. A simplistic story about a group of brave computer scientists who invent AI and save the world (any similarity to Tegmark’s own ‘Future of Life Institute’ with its commendable ‘save the world’ goals being purely coincidental) is followed by a couple of retread chapters on the history of computers and some facile speculation on the nature of intelligence. Tegmark then conflates computation with cognition and Evel Knievels across this Grand Canyon-sized issue without a backward glance.
Fortunately ‘Life 3.0’ kicks into gear with chapter 3 and five brief quotes showing Go champion Lee Sedol plummeting from calm confidence to bewildered defeat as he’s soundly trounced by the AlphaGo AI. From here Tegmark moves on to how deep learning systems work and the benefits we can see from correct application of such systems. There are also the – sometimes terminal – drawbacks of poorly implemented systems in finance, medicine and, of course, war. ‘Humans in the loop’ have, time and again, saved the world from nuclear Armageddon. Can we, asks Tegmark, rely on our AI systems to do the same no matter how many billions we spend on them?
On a less existential note we get some job-seeking advice for the AI age – what sort of areas to train for, what fields to avoid. Eventually though, warns Tegmark, we will need a new economic paradigm. We are now automating the work of our minds the way we previously automated the work of our muscles, and we’ll need to be careful to avoid suffering the same fate as horses after the invention of automobiles. ‘Job optimists’ argue that it won’t be the glue factory for the lot of us, since new technology always brings new jobs. Tegmark has a nice pie chart of relative US employment numbers showing where those new jobs are. You have to go down 21 places to find them, though; everything above that is jobs our grandparents would have done, many of them now threatened by automation.
Tegmark spends quite a lot of the middle part of the book exploring several outcomes of a possible AI ‘intelligence explosion’ (expected anywhere from 30 years to never) – from extinction to utopia and everything in between. We need, he says, to find some way to instil our values, if we can even figure out what they should be, into the nascent AI express as it thunders past the sleepy provincial station of human intelligence (far away, far away, RIGHT HERE, far away). Tegmark repeatedly stresses the importance of making informed choices now, or the future we get – the next ten, hundred, million, billion years – is unlikely to be the future we want. Get it wrong and we’ll be condemning not just ourselves but the whole of eternity to a lifeless, zombie existence. No pressure then.
Eventually we wash up, as most AI talk does, on the shores of the ‘hard problem’ (or, as Tegmark subdivides it, the ‘really hard problem’) of consciousness. Physical scientists seem content to cover in a chapter what neuroscientists feel uncomfortable condensing to a thousand pages and Tegmark is no exception. “Consciousness is the way information feels when processed in complex ways,” he concludes. Hmmm…
I have some reservations about ‘Life 3.0’, but fortunately each chapter comes with bullet-point summaries. You can usefully skim the bits about computer history, AI conferences and the founding of FLI that somewhat awkwardly sandwich the meat of the book without missing too much of importance. That filling is mostly original, perceptive and challenging ideas. It paints a picture of a species that’s unwittingly stumbled into a Red Queen’s race. And we’re standing still.