Max Tegmark – Life 3.0 (2017) Review

The decades after WW2, the 1950s and 1960s, were a time when many people found themselves rather stunned by the leaps of technological achievement and the rapid changes in society that had taken place. First came radio, television, automobiles and telephones; then came the atom bomb and putting a person on the Moon. This was the time when the modern genre of science fiction really took off. Back then, many speculated that by the year 2000, we would surely be living on the Moon or Mars.

Looking back on that now, we may feel that technological progress hasn’t delivered. However, we are currently living through another period of great change, and it has to do with information technology. Unsurprisingly, science fiction is enjoying a revival alongside these developments. Looking at mainstream culture, we see that science fiction is everywhere (cinemas, Netflix series, and many novels), and popular writing and journalism on science is hot. It’s just that the focus is no longer on rockets but on information technology and robotics (and in the near future will be on nanotechnology, genetics and quantum computing).

All these cultural artifacts – movies, novels – are mostly “laypeople” speculating about the future. Some of their products are intriguing (like the TV series Westworld), others are quite mediocre (like the movies Lucy and Transcendence). Max Tegmark, by contrast, is a researcher at MIT, and in Life 3.0 he shows us what scientists are thinking about when they daydream about the future. His book is specifically about the rise of artificial intelligence and how seriously we should take it (hint: very seriously, he says).

For Tegmark, this is an extremely relevant question. Not because he thinks that robots will rise up tomorrow to conquer us, but because the challenges that actual, competent AI poses to human civilization are so immense that they might take decades to properly answer. But what is competent AI? Ever since reading Daniel Dennett’s From Bacteria to Bach and Back, I realize how difficult it is to define intelligence. What is intelligence? Does it involve awareness? Self-awareness? What is that? Can computers have it? Even if we don’t have answers to these questions, AI that can “do a lot” will come, and it will have an immense impact on our society. Think about algorithms predicting everything about you. Think about loss of jobs, and the undermining of democracy and the economic system. Those kinds of problems.

A major point that Tegmark makes is that we can create programs that solve problems without our truly knowing how they solve them. For instance, researchers can build computer memory as an imitation of the neural networks in our brains, and that imitation neural network can learn if you feed it information, even if we don’t precisely know how. Such networks now exist in self-driving cars and photo recognition software. Another example is evolutionary algorithms. In imitation of natural selection, we can give a computer an end state to achieve and then let it iterate on its attempts, each time changing them to work towards solutions that work better, until it can compute a solution to the problem. And we don’t have to – or can’t – understand how it does it. Evolution and biology are smarter than we are, but we can let them work for us to design things we only partly understand. A favorite example of this is the AI AlphaGo beating the world champion Lee Sedol at the game of Go. It learned strategies that humans did not understand.
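To make the natural-selection analogy a bit more concrete, here is a minimal sketch of the idea in Python. It is not code from the book: the target phrase, the fitness measure and the mutate helper are all invented for illustration. The program is only told how close a candidate is to the desired end state, yet by repeatedly mutating its current attempt and keeping whichever version works better, it eventually reaches the goal without anyone spelling out the path.

```python
# Toy evolutionary algorithm: evolve a random string until it matches a target phrase.
# All names here (TARGET, fitness, mutate) are illustrative, not from Tegmark's book.
import random
import string

TARGET = "life 3.0"                      # the "end state" we ask the computer to reach
ALPHABET = string.ascii_lowercase + " .0123456789"

def fitness(candidate: str) -> int:
    """Count how many characters already match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.1) -> str:
    """Randomly change some characters, imitating mutation."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )

# Start from a random guess and keep the better of parent vs. mutated child.
current = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while current != TARGET:
    child = mutate(current)
    if fitness(child) >= fitness(current):
        current = child                  # keep whichever attempt works better
    generation += 1

print(f"Reached '{current}' after {generation} generations")
```

The same logic scales up: replace the string with a circuit layout or a set of network weights and the fitness count with a performance score, and you get a designer whose solutions work even though nobody wrote down how they work.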

Robots

Tegmark raises many interesting questions: if AI judges have less bias than human judges, would we still accept AI judgements? If a lifelike simulation can be rendered of you committing a crime, would you accept less privacy to get a strong alibi? Would you let an AI pay humans for work that it wants done? He doesn’t have the answers, but still holds out hope for some utopian jobless society where people have universal income but still find meaning and happiness in social communities. It sounds awfully simplistic and I’m not buying it (yet).

I’m having trouble following Tegmark when he talks about AI reaching “human intelligence”, whatever that means. At that point his narrative becomes frustratingly vague and unspecific. So much of it rests on the assumption that AI can gain some sort of general human intelligence and that it will start formulating its own (sub)goals, but we rush over this so fast that it still sounds like a fantasy. Tegmark also seems fond of the singularity idea of Ray Kurzweil and Vernor Vinge and sees it as an inevitability that at some point a computer will start redesigning itself to become smarter, setting off a runaway intelligence explosion. It feels like muddy philosophical ground on which to base a whole non-fiction book.

I also think that Tegmark is just not a very good writer. He sounds like he is reading from PowerPoint slides, and his arguments feel forced or incomplete. He slaps conclusions on topics that he barely touched and then wanders into side-stories that are only halfway relevant. Many chapters start with sentences like “We just explored the issue of […]” although we barely did, or “Let’s now explore […]”, without any logical progression. These problems also have to do with the way the book is structured. Some essential chapters, about consciousness and goals, sit all the way at the back, but could have allayed many of my annoyances.

Many visions for the near future are brought up, but they all feel like half-baked, inconsistent attempts at science fiction daydreaming, with many ideas that don’t strike me as realistic at all. They remind me of futures of the caliber we see in the Hunger Games series or Ready Player One, which are closer to metaphors than to serious attempts to predict the future. It all sounds either childish or divorced from actual human behavior. This “romance of the future” clearly needs some sociology.

The best chapter by far, however, is the one titled “Our Cosmic Endowment: The Next Billion Years and Beyond”. Here we explore the absolute limits of life in time and space. Although I don’t think it is a necessary or even logical chapter to put into this book, it has many mind-blowing ideas. If you like the SF authors Stephen Baxter or Olaf Stapledon, you’ll like this one. Tegmark’s forte as a physicist and his passion for big ideas shine through, and it doesn’t have the tortured logic of other chapters.

So, I am disappointed. I expected a lot more about “being human in the age of AI”. What I got instead was an overview of “intelligence” too simplistic to give us a good idea of how a computer might “think”, another repeat of the Singularity idea, and a lot of far-future fantasy. For all the gaps in between, for an answer to what it means to be human in the age of AI, I’m afraid we need to return to the hundreds of films, TV series and books that I dismissively called laypeople’s artifacts. Only there is the human condition truly explored in scary futures.


3 Responses to Max Tegmark – Life 3.0 (2017) Review

  1. Bookstooge says:

    “At that point his narrative becomes frustratingly vague and unspecific”

    Yep, like almost every other proponent of AI.

    I consider it a pipe dream and a philosopher’s stone. Much like alchemy and turning lead into gold, AI is something that just isn’t going to happen.
    Now, when I say AI I’m talking about a sentient thinking machine. A well-coded computer or program is not an AI. Considering all the problems I have with just my laptop and Windows 10, I don’t see humanity being smart enough to give rise to machine intelligence…


    • I also have big doubts that this will ever happen. Even if we were to simulate a human brain down to the atom, that sounds terribly inefficient, and if you design an AI mind in another way, how can we ever be certain that it is “sentient”? The Turing test doesn’t prove anything, except that humans can be fooled into anthropomorphizing stuff.

