Crazy Ideas

“We are all agreed that your theory is crazy. The question that divides us is whether it is crazy enough to have a chance of being correct.”
Niels Bohr

A selection of my ‘Crazy Idea’ columns from Mensa Magazines.

___________________________________________________________________________

Memento Vitae

“Almost every sentient being who ever lived belonged to a society that doesn’t exist any more. Why should we be any different?”
Alastair Reynolds.

The one thing history tells us about civilisations is that they fall. The Bronze Age collapse left centuries-old Egyptian, Babylonian, Hittite and Mycenaean civilisations with their cities in ruins and their peoples scattered. The glory that was Rome was snuffed out by barbarian swords, and even the original British Celtic culture was extinguished by those Johnny-come-lately Angle and Saxon immigrants.


When Babylon’s Ishtar Gate (Pergamon Museum, Berlin) was constructed 2,500 years ago, its builders probably believed – much as we do – that their rich and powerful civilisation could never vanish.
_____________________________________________________________________________

Whether you call it archaeology or grave robbing, one of the few ways we have of discovering how those ancient peoples lived is from their dead. From decorative basalt beads in Paleolithic graves, through swords and trinkets in Iron Age barrows, to the finery of Tutankhamen, humans have interred grave goods with their deceased, but it’s a custom we seem to have grown out of. What message are we sending to future archaeologists with the spartan resting places of our dead? A better question is: what message would we like to send? Perhaps it’s time to reinstitute the practice of burying grave goods with our lost loved ones.

Memories Are Made of This

We don’t want strong-stomached thieves Burke-and-Hare-ing it all over graveyards, so valuables, sacrifices for the gods or coins for the ferryman are out, but there may be a place for a more modern form of grave goods.

Memories – historical data worthless to contemporary thieves but priceless to future historians – may prove to be the currency du jour. Writing on paper or photographs won’t last, and any form of electronic recording will be worthless. You don’t have anything to play your 20-year-old VHS tapes on now, never mind in 200 or 2,000 years.

Materials scientists may have some suggestions for the long-term preservation of information but the only real test of what lasts is to stick it in the dirt for a thousand years. Fortunately, this is an experiment that has already been done. We know which materials keep down the ages because archaeologists regularly dig them out of the ground today. Clay tablets preserve writing for millennia, amber can preserve DNA for thousands of years, glass, for all its brittleness, is chemically inert, and lead-sealed coffins preserve remains for centuries.

These tried and tested materials can be used again. Line drawings or text could be impressed in clay and fired. High-resolution bubblegrams – those 3D glass-cube sketches of the Eiffel Tower or other attractions that line the shelves of tourist shops – could equally preserve text and images. Clay tablets or glass cubes are pretty lo-fi recordings but there will be enough room for a potted biography of your life and accomplishments, your Facebook profile or just your favourite joke. A somewhat higher information density could be achieved with a biological sample. Amber could preserve suitably dried and vacuum-packed DNA samples – a drop of blood or hair follicles – that could tell future scientists a lot about where we came from, our appearance and our diseases.

Imagine, then, a sealed lead or stone box, maybe a bit bigger than a Rubik’s Cube. Something small enough to fit inside a coffin, ossuary or urn. Inside is a sheaf of baked clay tablets or a glass cube. The tablets or glass contain a portrait, name, dates of birth and death, plus maybe one or two hundred words of text. Also included is a slice of amber containing carefully prepared samples of blood or hair follicles.

We’d want to choose not just our words but our language carefully. English is popular now but in a few thousand years, when London and New York have gone the way of Troy or Thebes, it may be as indecipherable to our descendants as Olmec or Linear A are today. You’d need to include a translation in at least one other language. We don’t know what present roots future civilisations will grow from, so you’d want your alternative language not only widely spoken and written but geographically widespread as well. Mandarin, Arabic, Spanish, Swahili – choose one at random. Numbers will be on our side, with a million little Rosetta Stones waiting for future linguists to reconstruct our forgotten tongues.

Buried Treasures

Undertakers would probably be happy to incorporate a memory-box option into their existing service and it’s easy to imagine a whole new DIY or cottage industry springing up. Potters could make up clay tablets, metalworkers or sculptors provide the containers, writers could help you compose an elegantly concise biography, and even the bubblegram machines aren’t prohibitively expensive.

For some, the thought of their remains being disturbed – however far in the future – is unsettling. Many though, might be happy at the chance of even an ersatz immortality.

For archaeologists, paleontologists and historians it would really just be a professional courtesy to their future colleagues (a post-mortem ‘pay-it-forward’ deal). Scientists or engineers might like to include Maxwell’s equations, the laws of motion, possibly a simple diagram of a battery or steam engine. After all, we can’t know what sort of civilisation will find our grave goods. Maybe they will be just re-starting the long climb up the technological ladder where this sort of information would be a real boon. Altruists might want to preserve some of our art and culture while those with strong political views might want to set the record straight or make sure history isn’t just written by the victors.

The Afterlife Lottery

There is a tiny chance that the immortality wouldn’t be entirely ersatz. If future civilisations are only a little bit more advanced than we are, those blood or hair DNA samples could provide more than just a snapshot of our biological environment.

It is extraordinarily difficult to reliably preserve and copy old DNA (just listing the problems would make another article), however… If your memory box survives, if it’s found, if you ask nicely, if your biological sample is sufficiently well preserved, if the civilisation that finds your remains is sufficiently advanced and so inclined, we cannot say it would be entirely impossible that your DNA might be cloned. Somewhere in the far future someone who looks and thinks and feels much as you do could live again – your twin in time.

That might be a long shot on top of a long shot but what is certain is that we won’t live forever. Our words, our deeds and eventually even our society will follow us down to oblivion but maybe we can send a beacon into the future. A message that puts a face to the bones, that says who we are, how we live, what is important to us. Susan Sontag once referred to the art of photography as ‘memento mori’, a reminder of our mortality. I’d like to think of memory boxes, these modern grave goods, as ‘memento vitae’. Bottles cast into the sea of time, hopefully to wash up on some far future shore. Reminders to our distant descendants that, here and now, we were.

___________________________________________________________________________

Climbing Mount Rumsfeld

Somewhere in my house, quietly turning to coal under a pile of papers, is a ‘planetary calculator’ (astronomy stats on two sliding pieces of cardboard), a free gift with the second issue of Star Lord comic in May 1978. Shift the display to Pluto and it confidently declares that world to be moonless – as well-established a fact as nearly 50 years of observations with the world’s biggest telescopes could make it. Since then, the Hubble telescope as well as the New Horizons probe fly-by have produced terabytes of data on the size, composition, orbits, shape, appearance etc. of Pluto and its five moons. A torrent of brand new information that nobody in that May of 1978 had any reason to expect even existed. In Donald Rumsfeld’s immortal words, Pluto’s moons were an unknown unknown.

It’s those unknown unknowns that trip us up. We are in the position of a mountain climber who can only look down. There’s a clear view of the flags planted by previous generations of climbers so we can see how far we’ve come, but we have no idea how high the mountain of unknown unknowns rises. Have we tramped just the foothills of Mount Rumsfeld or are we scaling the peak of knowledge?

We can’t see the future but we can make estimates based on past performance. This will be about as accurate as guessing next year’s share prices from last year’s Financial Times, but the fact that we can’t get even a remotely accurate answer shouldn’t stop us from having a go. So, join me in some wildly inaccurate back-of-the-envelope calculations based on unsupported assumptions and total guesswork as we try to estimate just how high Mount Rumsfeld goes.

Actually figuring out just how much knowledge increases is very difficult. It depends a lot on what and how you measure – publications, citations, online, journal only? Do you count from ancient Greece or the Industrial Revolution? There’s also the obsolescence of knowledge – things we no longer need to know. There are probably tricks to building a galleon known to 17th-century shipwrights that are gone forever, and the correct use of a biro in rewinding cassette tapes will be lost with my generation.

Fortunately we don’t have to figure it out as a quick internet search reveals estimates for increasing scientific knowledge ranging from 3% to 9% per year. Let’s pick an arbitrary 5% and start from an equally arbitrary 1663 when the Royal Society of London was granted its Charter and see where that gets us. Feel free to divide/multiply the results below by… oh, pick a number.

Compound increases are tricky. They don’t matter at all in the short term but build up quickly in the long term. Continue that 5% increase and every century the sum of human knowledge expands roughly 130-fold. By 1763 the learned men of the Royal Society could look back on the ignorance of their 1663 predecessors with quiet pride in their vastly greater learning. So too the folk of 1863 could laugh at the misplaced hubris of their 1763 ancestors… and so on. At a modest 5% increase in knowledge every year, scientific understanding from 1663 to today has grown over thirty-million-fold. Hooke or Wren or Boyle or any of those Royal Society founders would probably admit to quite a bit of ignorance and list all the mysteries of their age, but they would never believe that human knowledge could increase thirty million times over in 350 years.

And they’d be right. They’d be right because of another factor that Star Lord ‘Planetary Calculator’ embodies – the change in ‘fact’ from zero to five as the number of moons of Pluto. Sometimes our ‘facts’ are simply wrong and much of each century’s hard-won knowledge will turn out to be worthless. This is what mathematician Samuel Arbesman called the ‘half-life of facts’ (and which New Scientist charmingly termed ‘truth decay’) – the time it takes for half the knowledge in a particular field to be overturned. In medicine, for instance, ‘facts’ seem to have a 45-year half-life, but we need an overall number. For convenience’s sake let’s say knowledge in general has a 50-year half-life. We still have a yearly increase of 5% in facts but every 50 years we have to throw half of them out because they’re just plain wrong. That cuts into the thirty-million-fold increase quite a bit. By the time we’ve allowed for truth decay we’ve only increased human knowledge roughly a quarter of a million times since 1663. Still pretty respectable, even if a lot, maybe most, of it is the sort of minutiae of interest only to specialists (glyptology or limacology anyone?).
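If you want to check my envelope, here is the whole calculation as a few lines of Python (a sketch only: it assumes a 2017 ‘today’ and applies the 50-year half-life as a simple multiplicative discount):

```python
# Back-of-the-envelope knowledge growth: 5% per year from 1663,
# with half of all 'facts' overturned every 50 years ('truth decay').
GROWTH = 1.05
YEARS = 2017 - 1663          # ~354 years since the Royal Society's Charter

raw = GROWTH ** YEARS                 # ~3.2e7  -- 'over thirty million times'
per_century = GROWTH ** 100           # ~131    -- 'roughly 130-fold' per century
decayed = raw * 0.5 ** (YEARS / 50)   # ~2.4e5  -- 'a quarter of a million times'
net_50yr = GROWTH ** 50 * 0.5         # ~5.7    -- the next 50 years, net of decay
millennium = net_50yr ** 20           # ~1.5e15 -- the next thousand years

print(f"raw growth since 1663: {raw:,.0f}x")
print(f"per century:           {per_century:,.0f}x")
print(f"after truth decay:     {decayed:,.0f}x")
print(f"next 50 years:         {net_50yr:.1f}x")
print(f"next 1,000 years:      {millennium:.1e}x")
```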

Now that we’ve established how much we’ve learned over the last few centuries (to no degree of rigour whatsoever) we can use the same figures to look ahead. If past rates of scientific advance are any guide (they’re not) we can expect the next 50 years to see knowledge increasing five- or six-fold. The next 50 years after that should see another 30-fold increase over current levels. We can’t say what that knowledge will be, of course. We might want all our advances to be in medicine, physics and macroeconomics but for all we know those fields will stagnate while vexillology, sphagnology and batology leap ahead (the study of flags, mosses and brambles respectively).

How long can these sorts of increases go on? Right now, we have some pretty big unknown unknowns. The Dark Energy and Dark Matter that make up 95% of the Universe (if they’re not themselves illusions) are good indicators that we have quite some way to go. And don’t forget the remaining 5% is everything we thought existed up to a few years ago. All the trillions of suns with their trillions of planets are bound to have trillions of unknown unknowns of their own. It might be that we work out the laws of physics completely in a few hundred years and are able to make predictions about those trillions of worlds, but knowing all the rules doesn’t necessarily tell us how every game plays out. A century of experience with the laws of electromagnetism didn’t allow anyone to predict the spokes they cause in Saturn’s rings before Voyager flew past in 1980.

Continue our 5% calculation for another thousand years and we wind up with something like a thousand trillion times the knowledge we currently possess. That’s enough to know every planet in a thousand galaxies as well as we currently know Earth. Another 500 years should do for the remainder of the Universe and then we can start on the dark matter. How much there is to know about dark matter is itself an unknown unknown but let’s say another 100 years should cover it. Then there’s dark energy. For convenience’s sake let’s give that another century.

And there we are. At the current rate of advancing knowledge we will know everything about anything everywhere in another 1,700 years. In practice we’ll probably take a bit longer – by a few orders of magnitude – due to pesky banalities like the speed of light and not having enough people to learn it all. However, I can see the end of the article approaching so I’ll leave that calculation as an exercise for the reader.

So how high does Mount Rumsfeld go? I think the metaphor of a climber on a mountain was wrong. It turns out Mount Rumsfeld is more continent-spanning range than single mountain. And us? We’re an ant on those Himalayas. The ceaseless striving of generations of scientists has propelled us to the very top… of the first blade of grass, in the lowest meadow of the smallest foothill. If we are brave and lucky, all yet lies before us. Allons-y!

___________________________________________________________________________

The Old Age of Youth

In a previous life I designed newspaper adverts for a living. A local golf club asked me to make up an advert for their new special offers. They had reduced rates for young players and senior players. So far so normal, perhaps – until I saw that the ‘seniors’ category started at 50 years of age. Senior citizen. At 50. Well, short-Anglo-Saxon-word that, I thought.

You could call this ageist but I think it’s really just a category error – where things from one category are presented as though they belong in a different category. The golf club is placing 50 year olds in a senior citizen category because 50 year olds don’t have a category of their own. Frankly, I blame young people.

Well, actually I blame our youth-oriented society. There is a tacit understanding, fostered by advertising and media – and through their influence, society at large – that anybody beyond the 18-35 demographic is the social equivalent of an appendix: not really useful any more but still hanging around using up resources. Ideally you should be in the coveted late teens/early 20s of the ‘Young Adult’ market; your social and economic utility declines rapidly through the late 30s and early 40s. You become invisible.

Don’t believe me? Think about the advertising you see. In the 20s/30s category you are the focus of the advert. Buy this drink, buy this car, buy this lifestyle. After that you appear only as an adjunct to your kids. Buy them this toy, buy them this school, buy them this future. As your kids get old enough to do their own buying, you disappear from view entirely. You, as the focus, don’t appear again until retirement. Buy this denture cream, buy this cruise, buy this stair lift.

It might be a crazy idea, but I think there’s a huge chunk of society being ignored – there’s room for another category. So, before everybody too old to appear in a Coke commercial gets carted off to the Soylent factory, I thought I’d try to define it.

Nothing’s real until you give it a name. You need a short definition and a catchy acronym to get attention. Try this: Somewhere between the last school run and the first shuffleboard game, between the last big raise and the first pension… lies the undiscovered country of the M/AC. Not the computer, not the raincoat, not even the fruit – that forward slash is important – but standing for Mature and/or with Adult Children.

Let’s be clear; M/AC isn’t the same sort of category as Generation X or Millennial. It isn’t based on whether you like rock, punk or grunge. The year you were born doesn’t matter. There were M/ACs in 1819 and there’ll probably be M/ACs in 2119. Being M/AC is a combination of economics and neurological ageing. Pretty much everybody is, was or will be a M/AC because pretty much everybody’s brain ages the same way.

If you’re a M/AC you are, very roughly, somewhere between your late 40s and early 60s. If you have children they are grown up enough that they don’t have much to do with your day-to-day life. You’ve paid off, or almost paid off, your mortgage. You’ve stopped scrambling so frantically up the career ladder, either because you realise it’s not worth it or because you’re too old to get advancement. You have time and you have resources.

Why should anybody care about a handful of 40- or 50-somethings? Because it’s more than a handful. All that ‘ageing population’ stuff you remember reading about isn’t in the future, it’s already happened – you just didn’t notice. Have a look at the chart below (courtesy of Eurostat: ‘Being Young in Europe Today’). It’s two roughly onion-dome-shaped graphs superimposed. The light yellow and blue colours represent the relative population in Europe in 1994. The longest bars – the highest proportions of the population – are clustered around the 20 to 34 year old age groups. But that was 20+ years ago. Those 20 to 34 year olds got older and that bulge in the onion dome has moved up. The longest bars (dark blue and yellow) are now the 40 to 54 age bracket. All those bright, young things – all us bright young things – are now middle-aged. Declining birthrates mean that while 20 year olds used to outnumber 50 year olds, those numbers have now reversed and the imbalance will only get worse.

[Chart: European population by age group, 1994 and today, superimposed (Eurostat: ‘Being Young in Europe Today’)]

OK, so there are more old fogies than there used to be – who cares? It’s the young people who matter – that’s where it’s all happening, man! Well, that’s certainly where it was all happening, but we live in a capitalist society where a population’s social value is predicated on its economic value. ‘Show me the money’ is virtually our catchphrase. So let me show you the money.

What are M/ACs worth? Well, quite a bit, to judge from the figures on gov.uk. Median incomes peak around the 40s – slightly later for men, slightly earlier for women. After that you can see a drop in income through to retirement age, but the decline isn’t that steep; the ramp down is shallower than the ramp up. The 55-59 age group’s combined median income is £26,600 compared to the 30-34 age group’s £25,200. People in their 60s still have a higher median income (£23,700) than that most ardently wooed of demographics – people in their 20s (£21,500).

If M/ACs have the most money and there are more of them, why do advertisers pursue the younger demographic so vehemently? Because young people are suckers. It’s not really their fault. Neurologically speaking, the young are virtually a different species to adults. The frontal cortex – the bit of your brain that allows you to make good decisions – doesn’t mature until the mid-20s (which at least explains your what-the-hell-was-I-thinking fashion choices in old photos). Perspective, rationalisation and emotional regulation are all dialled right down in the teenage years and right through the early 20s. Young consumers are severely handicapped by their endless quest for novelty, fierce desire to fit in and notoriously poor risk estimation. And yes, it’s more complicated than that. Some young people make good choices, some old people never do. In general though, all of this is a perfect recipe for a consumer who is easily manipulated and eager to buy.

M/ACs though are pretty much the polar opposite. They don’t part with their money frivolously or appreciate novelty for its own sake. They don’t care if they’re not ‘cool’ or part of the ‘in crowd’. They don’t want a rushed software product that needs two downloads to make it do what it was supposed to do in the first place (though, honestly, that one might be just me). If you want them to buy a new thing it has to be objectively better than the previous thing, not just available in a new colour/flavour/package.

Selling to M/ACs means a shift in business models. Product cycles that rely on novelty or gimmick – soft drinks & snack foods, fashion & cosmetics for example – will dwindle in value. It won’t be possible to release a new phone or computer or car every year because technology can’t produce, nor manufacturers tool up to build, significantly improved models in that time frame. M/ACs are brand loyal though (or set in their ways if you prefer). If they use product X for years then when it – eventually – breaks or a – much – better model comes out they will buy brand X again. The upshot for manufacturers being that they shift fewer units per year but shift them over more years. Turnover goes down but it becomes more stable.

If this seems like a lot of hard work, it’s because it is. It might seem easier in the short term to keep fighting for a bigger slice of the youth-market pie and rely on ever more sophisticated marketing to fake product freshness. The problem is that there are fewer and fewer young people and, in a post-recession world, they have less disposable income. The pie is shrinking and while it won’t disappear completely there may not be enough to feed the sort of manic growth that industry has come to consider ‘normal’. Eventually some industries, maybe a lot of industries, will have to shift, at least in part, to that longer-term business model.

Think of it as practice. A lot of smart money is being spent on finding life extension drugs. Nothing works very well as yet but sooner or later something probably will (fingers crossed!). Fifty may or may not be the new forty but sometime soon a hundred will be the new fifty and M/ACs will be the only customer in town.

So: I’m offering a Faustian bargain to companies and advertisers. You need all that untapped M/AC money. M/ACs need an economic niche to legitimise their existence. Here we are, us M/ACs; tailor products to us, market to us, pander to our needs. Invent needs we didn’t know we had and pander to them!

I can’t make M/AC a real thing on my own though. I need your help. Tell your friends, post about it online, write to your newspaper or even telephone your local radio station. Spread the word: we’re M/ACs and we’re here to stay…for 15 to 20 years.

___________________________________________________________________________

Progress is Bunk

Nothing much has changed technologically in the last 50 years. You’re probably thinking that’s a preposterous statement! We daily marvel at face transplants, genome decoding, pictures from Pluto and other scientific miracles while the ‘accelerating rate of progress’ has become a truism. Well, maybe not so much. I’d like to propose that, apart from the computer in your pocket, technological change has effectively stalled for the average person during the last half century.

The time-travelling ‘fish-out-of-water’ has been a science fiction trope for 100 years, so let’s take a time traveller and see how they might view a century of changes. I’ve appropriated Henry Ford’s ‘history is bunk’ quote for the title so why not appropriate the man himself? Let’s scoop him up just before he introduced the Model T in 1908 and deposit him, as a start, halfway to today.


Dropped into a busy downtown in 1962, the first thing Ford might notice is that most of the young men have inexplicably forgotten to wear their hats and most of the women are clad in hardly more than underwear. Why, even respectable middle-aged women’s skirts risk showing a glimpse of knee! He might be more comfortable checking out the endless stream of colourful cars on the roads and would probably be proud to discover that it was his perfecting of the assembly-line system that made their proliferation viable (not to mention tens of millions of production-line jobs throughout the world). In 1908 most people thought about cars the way we do about private helicopters – impractical toys for the super-rich – and had as much idea about how to work them. Even Ford would be a bit disorientated if we put him behind the wheel of a mid-century auto.

He might get a bit dizzy just looking up too. Elisha Otis had invented the safety elevator and Peter Ellis had pioneered steel-frame buildings by Ford’s time, but brick-fronted 1908 ‘skyscrapers’ rarely went above 15-20 floors. The seemingly topless towers of faceless downtown glass in 1962 would lead his eyes upward, where he might spot a passenger jet flying overhead. Maybe it’s a Boeing 707 – a common sight in ’60s skies. Ford knows about airplanes of course but for him they’re kites with delusions of grandeur. The concept of one routinely taking 100 people 1,000 miles would be pure Jules Verne. Slightly more familiar would be the electric lights flooding the city and the ringing of telephones. Both had started to come into popular use by the time we snatched Ford, but he’d probably find the sheer numbers overwhelming. In 1908 fewer than 10% of households were electrified and the idea of a phone in most homes and on every street corner would be otherworldly (early electrical companies had an uphill struggle against people’s resistance to ‘unnatural’ electric power).

Almost equally alien would be the sound of radios playing from open shop doorways and the windows of TV retailers. Marconi had sent his first wireless transmission across the Atlantic in 1901 and while the idea of wireless transmission of pictures was popularised shortly after, commercial radio didn’t take off till the ’20s and TV till the ’50s. Ford probably sat through all 16 minutes of the 1908 blockbuster ‘Jekyll and Hyde’ in his local nickelodeon, so he might hightail it into a movie theatre for a sense of familiarity. That year’s hits – Lawrence of Arabia or Dr No – with their sound, colour and semi-nudity would be something of a revelation for him.

By this stage Ford is probably dying of mortification, so he could seek out a doctor, who might prescribe the century’s miracle drug – antibiotics. Though mortality rates dropped precipitously through Ford’s youth, due largely to improving sanitation, he still faced a whole range of diseases that stood a good chance of killing or crippling him. It’s hard for us to get into the Russian-roulette mindset of pre-antibiotic life, but imagine you could go to your doctor and get a course of pills that cured cancer, heart disease and Alzheimer’s in one go. While he’s there Ford might pick up a leaflet on a medicine of more interest to Mrs. Ford – contraception. Birth control probably doesn’t seem very futuristic now but it was fantastical enough to rate a sidebar in Arthur C. Clarke’s “Childhood’s End”, an SF novel of the ’50s where reliable, widespread control of fertility was just as far-fetched as the aliens and flying cars. Contraception ushered in the single greatest social change for women since universal suffrage – something else Ford would have trouble getting his head round.

Let’s give poor old Ford a little while to settle into the strange, futuristic world of 1962. After he’s acclimatised for a few months we might swoosh him forward to the present day. Pop him onto a modern street and what changes have another half century brought? The cars look sleeker but they’re not flying, hovering or even (for the most part) electric – just the same four-wheeled, internal-combustion machines he’s familiar with from 50 or even 100 years earlier. He might think electric ignition cool and seat belts restrictive but would have little trouble switching from a 1962 Fairlane to a 2017 Fiesta. Looking around, he sees that men’s hair has gotten short again, hats have vanished altogether and he’s having a hard time bumming a cigarette off anyone (a habit he loathed but picked up in ’62 on doctor’s advice). He watched Ursula Andress walking out of the waves in that theatre in 1962 and got used to mini-skirts, so at least he’s not embarrassed by women’s fashions this time around. The office blocks are still pretty much the same faceless glass towers and, looking to the skies, he might see a modern 737 or an Airbus. He won’t be able to tell it’s modern, of course, because they are practically indistinguishable in looks, size, speed or capacity from their counterparts of a half century earlier.

On the medical front, transplants have come on in leaps and bounds while gene editing and stem cells sound groundbreaking. But most people don’t get transplants and he won’t find stem cells in a pharmacy. If he goes to a 2017 doctor it’s probably for the same (though now less effective) antibiotics he got in the ’60s. That whole women’s suffrage thing is still going too, but equal rights for women eliminated sexism about as thoroughly as the abolition of slavery eliminated racism – Ford would find both fights ongoing. Apart from everybody reading their news on glass blocks instead of paper, that would really be about it for day-to-day changes.

The early 20th Century brought cars, planes, electricity, the factory system, movies, radio, TV, democracy, contraception and antibiotics into everyday lives. That cavalcade of advancement ground almost to a standstill somewhere mid-century, and the most that progress could bring to the average Jane in the latter half of the 20th and early 21st Century was desktop computers and smartphones. Amid decades of over-hyped ‘breakthroughs’ all we actually got were props from some cancelled science fiction show that never made it past the first season. Who knows, maybe this time the ‘next big thing’ really is just around the corner but then, when isn’t it?

_______________________________________________________________________

Pick Your Brains

“It will not be we who reach Alpha Centauri and the other nearby stars. It will be a species very like us but with more of our strengths and fewer of our weaknesses… more confident, farseeing, capable and prudent.”
Carl Sagan. ‘Pale Blue Dot’

This was going to be a crazy idea article about increasing human intelligence then Elon Musk launched Neuralink to do just that and the idea went from ‘crazy’ to ‘mainstream’ overnight. The questions now are: how do we do it and should we do it at all?

Building a Better Brain

‘Smart drugs’, or nootropics, have been around for a while. Unfortunately there’s no Limitless-style pill to increase your brain power as yet, though there is plenty of information of varying provenance available online and a large sub-culture of self-experimenters. Results appear modest and mixed.

We could always try adding in intelligence genes with gene-editing tools like CRISPR, but the problem there is that we don’t know which ones they are. While intelligence is certainly heritable, there doesn’t seem to be a single gene that affects it but rather cumulative tiny effects from hundreds of genes. Carl Shulman and Nick Bostrom have suggested iterated embryo selection might eventually produce a gain of 300 IQ points though, like gene editing, it requires us to know much more about what we’re selecting for.

Even if we can stomach the whiff of eugenics that genetic engineering carries, overcome the numerous medical pitfalls and weather social backlash from the nearly inevitable tragic mistakes, any biological improvements to intelligence necessarily work at a generational snail’s pace. That’s too slow for some who seek a quicker, technological fix.

The big fish here of course is Neuralink. Elon Musk’s new company was founded with the aim of producing a brain/computer interface and eventually vastly increasing human intelligence. Musk may be the life of this particular party but he’s not the first guest to arrive. Last year, entrepreneur Bryan Johnson founded ‘Kernel’ with $100 million of his own money to start engineering what he calls a ‘neuroprosthetic’ – an implantable chip to help victims of stroke and Alzheimer’s which, it’s hoped, will eventually enable intelligence amplification.


A Trillion Moving Parts

The human brain is hugely, vastly, ridiculously complicated – “a machine with a trillion moving parts” according to philosopher Daniel Dennett. We know only the rudiments of how its various bits operate, the jury is still out on how intelligence functions and no one has a clue about consciousness. Right now we probably have as good a handle on increasing human intelligence as Jules Verne did on putting a man on the moon and while there are some clear advantages we can already see some issues, not least, who’s going to pay for it.

Freedom of thought has historically been the ultimate right, but it’s all too easy to see that freedom being eroded away, or voluntarily surrendered in return for convenience, by cognitive enhancement technology. Maybe only the super-rich would be able to afford the premium service while the rest of us use ad-supported brain mods with the Coke jingle going off in our heads every time we feel thirsty. What if brain mods needed updating every year, or if enhanced cognition was a subscription service? Even if you keep up the payments it might turn out that your service provider owns your thoughts – maybe read that ‘terms of service’ agreement very carefully before signing up. You don’t want software updates bricking your brain either, and if you change your mind about some political candidate, religion or soft drink will you ever be able to feel fully secure that it was your own decision and not some subtle marketing virus re-writing your preferences?

Futurist Ray Kurzweil and others have suggested an analogy with cell phone technology. Wealthy early-adopters get the first generation of cognitive enhancement gadgets, but we can rely on computing’s usual ‘faster, smaller, cheaper’ trick to take them from expensive status symbol to ubiquitous utility in a similar time frame. Sure, I have an €80 smartphone on my table right now that’s immeasurably superior in every way to the $4,000 DynaTAC that Michael Douglas sported in Wall Street, but that might not be the best tech metaphor. Twenty years ago I watched a timelapse tsunami of newfangled DVDs sweep the VHS cassettes from the shelves of my local movie rental store. Ten years later the even newer-fangled Blu-ray discs appeared and I confidently expected a repeat of the previous pattern. In fact that Blu-ray wave stalled and crashed along with the store because everybody was getting movies via download instead.

New tech doesn’t necessarily scale like old tech. Cognitive enhancement isn’t Gordon Gekko with his comically large cell phone. It’s a fundamental change in the human condition, a massive upgrade of our primary survival tool. Handing it exclusively to an already entrenched elite of the wealthy and powerful – far too few of whom have ever shown any interest in the common good or the big picture – might well be as big a threat to the species as AI overlords.

It’s even possible that our current ‘brain-as-computer’ paradigm is as wrong as earlier steam engine, clockwork and even water-mill metaphors. From the simple neural nets of the first jellyfish, the brain has been co-evolving with the body for hundreds of millions of years and we may be mistaken in thinking that we can treat it as a discrete or disconnected system at all. In the race to be first to market it may well turn out that, in retrospect, we will wish there were aspects of the science we had understood better (as we wish we had for asbestos, nicotine and thalidomide) before ploughing ahead.

Too Clever by Half

But why try to make ourselves smarter? Won’t we have some form of artificial intelligence in a few decades that can do our thinking for us? Perhaps, but morally you can no more build a conscious AI to solve your problems than you can raise a person to farm your cotton, and if we do set out to build a race of super-intelligent machine slaves what can we expect but revolution? You could try to employ a super-intelligent AI, but what could you offer in payment? What could a tribe of chimpanzees or a pool of paramecia offer you?

Intelligence is the one thing humans are really good at and while we might be a one-trick monkey, it’s a damn good trick. Good enough to enable a bunch of scraggly, also-ran hominids to conquer the world. Pretty much everything you see, hear, feel or even think about is either the direct result of, or mediated by, other people’s clever ideas. Some of these ideas are old – like the writing I’m doing now. Other ideas are newer – like the computer I’m writing on, but every bolt, brick, bread, thread and liberty of civilisation is the result of our intelligence. Without it we are just a tribe of frightened monkeys, our destiny the jaws of some predator, our legacy a brief scream in the dark.

Our brains have got us this far but now we’re banging against the ceiling of what our intelligence can do. Almost every field of human endeavour has grown too complicated for us to understand directly, the mental models too complex to hold in our heads, the equations too difficult to solve. So we make simpler models, solve simpler equations and hope that those answers apply – more or less – to the real thing. The devil might be in the details but how will we ever know?

Even on a day-to-day basis we exist in a network of social, scientific and economic systems that have ballooned beyond our capacity to effectively model or manage with the mental toolkit of a barely evolved plains ape. Barring catastrophe, that complexity will only grow and all that’s left for our suite of brain apps to do – those paranoid, conclusion-jumping, group-thinking, us-and-them-ing brain shortcuts that served us so well on the savanna – is to make mistakes.

Certainly there will be some who prefer the comfort of ‘business as usual’, to maintain the imagined safety of the status quo. The problem with this strategy is that the status doesn’t stay quoed. We live in a dynamic world. Climates change, economies crumble, pathogens mutate, asteroids impact. There’s always some new unknown rising up to bite us on the bum, and every solution brings its own problems, which need yet more solutions in a never-ending cycle. Greater intelligence (and a dollop of luck) is, as it always has been, our only slim hope for survival in a treacherously indifferent universe.

Of course, intelligence by itself is not sufficient. It’s an evolutionary adaptation like three-colour vision or opposable thumbs. Just a tool to help our monkey ancestors survive and, like all tools, it can be used to build or destroy. It won’t make us angels, it doesn’t guarantee wisdom, compassion or initiative, and I’ve known very smart people you couldn’t reliably send to the shops for a bottle of milk. It is, however, a necessary condition for the application of those virtues – a self-adapting tool that enables us to become not only smarter monkeys but better people.

It won’t be us who reach Alpha Centauri and the other nearby stars. It will be a species very like us but with more of our strengths and fewer of our weaknesses.

Let’s get on with that, shall we?


_______________________________________________________________________

I’ve got half a mind…

If you think you’ve had a bad day at work, spare a thought for 14-year-old Ahad Israfil. In 1987 his boss knocked a gun to the ground, shooting Ahad in the head and blowing his brains out. A bad day at work certainly but, remarkably, not a terminal one. Though he lost the entire right hemisphere of his brain, Ahad survived and recovered well enough to earn an honours degree at his local college, not to mention appearing in several TV documentaries.

Ahad was the unwitting recipient of an instant hemispherectomy – a surgical procedure in which one hemisphere of the brain is removed and the resulting space filled with silicone. This is not to be confused with a callosotomy. That slightly less drastic procedure just cuts the corpus callosum – the bundle of nerve fibres that connects the two hemispheres of the brain. This results in both halves of the brain perceiving, thinking and sometimes acting independently as they respond differently to information received by one, but not the other, hemisphere.

A full hemispherectomy – the total removal of one hemisphere – is an extreme surgical solution for individuals suffering treatment-resistant, uncontrollable seizures. The procedure is usually performed on very young children, since their brain plasticity allows for a fuller recovery, but it has been successfully applied to adult patients with side effects not too much more severe. In either case, recovering patients experience a loss of motor control in the arm opposite the removed hemisphere as well as a loss of vision on the opposite side (half the visual field of each eye). Those side effects are not enough to prevent recovered patients – who literally have only half a brain – from scoring normal or above on IQ tests, having fully developed personalities or becoming chess champions. Often, observers who are unaware of the surgery will not notice anything out of the ordinary.

It would seem that half a brain (either half) is sufficient to make a whole person. If so, a couple of thought experiments suggest themselves…


Imagine in the near future that it is possible to perfectly preserve the half of the brain that is removed (some form of cryogenic suspension perhaps). A surgeon removes half a patient’s brain in order to save their life from recurring seizures. The patient recovers well and gets on with their life. Some time later they experience seizures again and it is discovered that, through a terrible oversight, the wrong half of the brain was removed. Fortunately, the hospital preserved the half that was originally removed and a skilled surgeon can replace the original hemisphere and remove the one having the seizures. If this were you, would you consent to a second surgery? If you did, would the ‘you’ who woke up from it – who obviously does not have memories from the intervening period – be the same ‘you’ who was anaesthetised? Would you consider the surgery a memory loss of the intervening time or the death of the ‘current’ you?

I know many readers enjoy a good whodunnit so let’s stretch the scenario a little. Suppose a patient has consented to just such a second surgery. Unsurprisingly, she insists on a different surgeon performing the second procedure. The night before the surgery, she waits until late evening, then sneaks out of her room and down a dark hospital corridor to wait outside the original surgeon’s office. Seeing the surgeon nod off at his desk, she silently lets herself in and slashes his neck with a stolen scalpel, murdering him in a fit of revenge. The murder isn’t discovered until the day after the surgery. CSI identify the patient as the murderer (fingerprints, hair samples, security footage, “I done it” scrawled in blood and signed by the patient, etc.) but should the police arrest the patient when she recovers? The patient was fully compos mentis both during the murder and now but – with a different hemisphere running the show – she can claim she literally wasn’t in her right mind when committing the crime and obviously has no knowledge of it. So who, exactly, did do it?

If you’re feeling a little macabre we can take our potential crime drama further. Imagine a near-future Hannibal Lecter. A brilliant surgeon, he doesn’t kill but instead kidnaps his victims and removes one brain hemisphere, which he keeps alive in a life-support apparatus (basically a brain – or half a brain in this instance – in a jar). Using an advanced MRI device and a camera input for the jar hemisphere, the future Hannibal makes the victim lead two separate lives by exposing the two halves to different environments. The hemispheres are allowed to communicate occasionally. Crudely at first, as each half’s natural plasticity takes over the functions of its missing partner, both halves – jar and embodied hemisphere – swap notes on their incarceration and bizarre circumstances. Starting from the same brain state, the two halves diverge enough to become individuals.

After a year of this fiendish captivity, future Hannibal swaps the hemispheres – the jar hemisphere goes back in the skull and the skull hemisphere comes out (I mentioned he was a brilliant surgeon, right?). The jar hemisphere is now back in its body after a year. As the victim recovers from the surgery, he sees future Hannibal holding out the hemisphere he removed and, instead of putting it back in the life-support jar, chucking it in the incinerator, destroying it utterly.

Just at that moment, future Clarice Starling bursts in, arrests future Hannibal and rescues the victim. In the subsequent trial there are some interesting questions. Was the hemisphere that controlled the body for a year a separate individual from the re-embodied hemisphere? Can destroying one hemisphere really be called murder when the named victim is clearly present in court? Can the victim be called as a witness to their own murder?

Dragging us back to the here and now for a moment: brains are bodily organs just like hearts and kidneys. Few would have qualms about replacing a defective heart or kidney, but the brain is what makes us us. Replace a person’s kidney – or any other organ – and the person is still the same. Leave everything else in situ and remove the brain, though, and the person is dead, diddly dead. With existing medical technology the excised half of the brain in a hemispherectomy dies, but if either half of the brain has an equal claim to be the whole person, does each hemisphere not have an equal right to life, regardless of how ill it is? Is killing one hemisphere to save the other functionally equivalent to murder?

And what of Ahad Israfil? If the accidental shot that destroyed his right hemisphere had been at a slightly different angle he might have lost his left hemisphere instead and survived to live just as full a life with the right hemisphere. Should his boss have been charged with manslaughter for ending that potential life?

Just something to think about. Or half think about at any rate.


_______________________________________________________________________

The Magic Watch

In a minute I’m going to tell you the only scientifically proven way to stay young (you’re not going to like it though) and I’ll use my magic pocket watch to explain how it works. The watch is indestructible and I always know what time it says no matter how far away it is.

Say I leave my watch at my house by the sea and go to live on a high mountain. After a little while I will notice that my magic watch (which I can magically still see) is running ever so slightly slow compared to my wristwatch. The longer I stay up the mountain, the further behind my magic watch falls. Slower by some millionths of a second only, but slower. The reason is not the sea air but gravity. Living on a mountain, I am ever so slightly further from the mass of the Earth, while the magic watch – at sea level – is deeper in the Earth’s gravity well and runs slower because time itself runs slower there.

That time moves slower at sea level than on mountaintops sounds like a crazy theory with no practical application, but we have hard experimental evidence for time dilation due both to gravity and to acceleration (which, according to Einstein, are the same thing). Not only has this been tested with atomic clocks in planes and rockets, but you can also see the effect for yourself every day.

Ostensibly designed for the military but really put in orbit so that no man need ever suffer the embarrassment of asking for directions again, the Global Positioning System provides daily proof of the strange tricks gravity plays on time. The GPS satellites have to contend with not one but two time dilation effects. First of all, they are moving pretty fast so – compared to a stationary clock – their clocks are ticking more slowly. Not a lot more slowly, about 7 microseconds per day. They are also a lot further out of Earth’s gravity well, so they are simultaneously ticking more swiftly than Earthbound clocks. Again, not much – about 45 microseconds per day. A quick calculation shows that each satellite thus runs fast by approximately 38 microseconds per day. Not the sort of difference you’d notice on your wristwatch (well, not for a long while anyway) but, if the effect wasn’t corrected for by the GPS system, your calculated location would drift by about 7 miles per day. If I spend a lifetime in orbit and leave my magic watch on the ground, it will be running about a second behind when I return – its seconds, deeper in Earth’s gravity well, were verrrrry slightly longer than mine.
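Those numbers are easy to check for yourself. Here’s a rough sketch in Python, using textbook constants and the usual weak-field approximations (v²/2c² for the speed effect, ΔΦ/c² for the gravity effect):

```python
import math

# Rough GPS time-dilation budget (textbook constants assumed).
GM_EARTH = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH  = 6.371e6           # mean Earth radius, m
R_ORBIT  = 2.6561e7          # GPS orbital radius (~20,200 km altitude), m
C        = 2.99792458e8      # speed of light, m/s
DAY      = 86400.0           # seconds

v = math.sqrt(GM_EARTH / R_ORBIT)                        # orbital speed, ~3.9 km/s

slow = 0.5 * v**2 / C**2 * DAY                           # special relativity: runs slow
fast = GM_EARTH * (1/R_EARTH - 1/R_ORBIT) / C**2 * DAY   # weaker gravity: runs fast
net = fast - slow

print(f"speed effect:   -{slow * 1e6:.1f} microseconds/day")   # ~7
print(f"gravity effect: +{fast * 1e6:.1f} microseconds/day")   # ~46
print(f"net drift:      +{net * 1e6:.1f} microseconds/day")    # ~38
print(f"position error: {net * C / 1609:.1f} miles/day")       # ~7
```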

OK, that’s not much of a time saving. To see a bigger effect we’ll need something with higher gravity. You can find plenty of demonstrations online with weights on rubber sheets illustrating the way different masses make larger or smaller dents in spacetime. Earth makes a football-sized dent, Jupiter and the other gas giants make bowling-ball-sized dents, but for real gravity we have to visit the Sun. With something like 28g at the surface, the gravity of the Sun is like throwing an anvil onto the rubber sheet. Left bobbing about on the Sun’s surface (and this is where the indestructible part comes in handy) the magic watch will run roughly a minute per year slower than on Earth. So, if you want to stay young for longer just live on the surface of the Sun! Sure, there are a few technical issues to work out, but at least you’ll get an even – if extremely brief – tan.
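That ‘minute per year’ drops straight out of the same weak-field approximation, Δt/t ≈ GM/(Rc²), with standard solar values:

```python
# Gravitational time dilation at the Sun's surface (weak-field sketch).
GM_SUN = 1.327e20            # Sun's gravitational parameter, m^3/s^2
R_SUN  = 6.957e8             # solar radius, m
C      = 2.998e8             # speed of light, m/s
YEAR   = 3.156e7             # seconds

dilation = GM_SUN / (R_SUN * C**2)                        # fractional slowdown, ~2.1e-6
print(f"seconds lost per year: {dilation * YEAR:.0f}")    # ~67 -- roughly a minute
```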


How about something heavier? If a star is big enough when it dies it will collapse into a black hole. A black hole doesn’t just make a dent in the rubbery spacetime sheet, it wraps the sheet tightly around itself like a warm blanket on a winter night. All the material of the star gets herded tighter and tighter together as gravity squashes everything further and further down. And further. And further. And… well, we don’t currently know of any physical process that would stop the collapse. When you do the sums, the entire mass of the star ends up in an infinitely small, infinitely dense point. At this point, the laws of physics throw their hands in the air, kick back, crack open a cold one and go ‘whatever, dude’. The whole ‘stellar mass in an infinitely small space’ problem – referred to as a singularity – is a bit of a predicament for physicists. It’s the cosmological version of the embarrassing relative you don’t mention at family get-togethers. Occasionally there are interventions organised to try and rehabilitate this black sheep. Maybe all that mass spews out somewhere else as a white hole. Maybe some quantum hand-wavey stuff smears out the mass so the density isn’t quite infinite. Nothing has really taken hold though.

We don’t currently know of any physical process that would stop the star collapse. Or do we? Let’s take my magic watch and see what happens if I throw it into a black hole. There are a lot of fun effects we can ignore with the magic watch. Normal matter would be shredded by the intense gravity, and light or radio signals would lose energy then wink out altogether as they fell towards the event horizon – the point where gravity becomes too strong to allow light to escape. But what do we see as the watch falls into the black hole? The intense gravity will distort time just as it did around the Earth or the Sun. We’ll see the magic watch’s hands move slower and slower the further down it goes. At some point the pocket watch’s second will be two seconds on our watch, then a minute, then a million years. The closer it gets to the mass of the black hole, the tighter the blanket of spacetime gets wrapped and the slower time will seem to move. Again, this is only from our perspective. As far as the watch is concerned time is still passing at the usual one second per second and it’s the outside universe that’s starting to speed up.

And this is a problem if the watch ever expects to actually reach the black hole, because black holes evaporate. It takes trillions and trillions of years, but a process discovered by Stephen Hawking (appropriately named Hawking radiation) means that every black hole will lose mass from just above its event horizon until it shrinks and vanishes. The magic watch is falling like a temporal Zeno’s tortoise, each millimetre taking longer than the one before, but the Hawking radiation is evaporating the black hole at a fixed rate from the event horizon. From its perspective, the watch will see the black hole ageing more and more swiftly as it falls, evaporating faster and faster (since the Hawking radiation, along with the rest of the universe, is speeding up). Long before the watch can reach the central singularity, the black hole will evaporate completely and the watch will find itself floating in empty space, trillions of years in the future.
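‘Trillions and trillions of years’ actually undersells it. For a black hole left in isolation, the standard Hawking evaporation formula, t = 5120πG²M³/ħc⁴, gives a solar-mass hole a lifespan of around 10⁶⁷ years (a sketch only; a real stellar-mass hole would first spend eons absorbing more from the cosmic microwave background than it radiates):

```python
import math

# Hawking evaporation time for an isolated black hole of mass M:
# t = 5120 * pi * G^2 * M^3 / (hbar * c^4)
G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
HBAR  = 1.0546e-34   # reduced Planck constant, J s
C     = 2.998e8      # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
YEAR  = 3.156e7      # seconds

t = 5120 * math.pi * G**2 * M_SUN**3 / (HBAR * C**4)
print(f"evaporation time: {t / YEAR:.1e} years")   # ~2.1e67 years
```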

This is just a thought experiment and I don’t actually have a magic watch. But I don’t need one, because what would be true for the magic watch is true for the matter of the star collapsing into the black hole. Every quark and electron is collapsing in ever slower slow motion. Though we will never see it within its cloaking event horizon, every star that has ever collapsed into a black hole is still in the process of collapsing. Though black holes can reach arbitrarily high mass densities, the infinite density in zero volume of a singularity has never been reached because, from the perspective of the outside universe, there simply hasn’t been enough time. In fact, there never will be. Just as with the watch, the higher the mass density goes – the tighter gravity warps its rubbery spacetime blanket – the faster (from its perspective) is the evaporation from Hawking radiation. Every black hole is in a race it can never win – doomed by a cosmic time limit it is itself enforcing – to evaporate before ever reaching a point of singularity.