Book Review: The Turing Exception

“The Turing Exception” by William Hertling

This is the fourth and final volume in the author’s Singularity Series, which began with Avogadro Corp. and continued with A.I. Apocalypse and The Last Firewall. Each novel in the series is set ten years after the previous one, so this novel takes place in 2045. In The Last Firewall, humanity narrowly escaped extinction at the hands of an artificial intelligence (AI) that escaped from the reputation-based system of control by isolating itself from the global network. That was a close call, and the United States, over-reacting with its customary irrational fear, enacted what amounted to relinquishment of AI technology, permitting only AI of limited power and entirely subordinated to human commands—in other words, slaves.

With around 80% of the world’s economy based on AI, this was an economic disaster, resulting in a substantial die-off of the population, but it was, after all, in the interest of Safety, and there is no greater god in Safetyland. Only China joined the U.S. in the ban (primarily motivated by the Party fearing loss of control to AI), with the rest of the world continuing the uneasy coexistence of humans and AI under the guidelines developed and policed by the Institute for Applied Ethics. Nobody was completely satisfied with the status quo, least of all the shadowy group of AIs which called itself XOR, derived from the logical operation “exclusive or”, implying that Earth could not be shared by humans and AI, and that one must ultimately prevail.

The U.S. AI relinquishment and an export ban froze in place the powerful AIs previously hosted there and also placed in stasis the millions of humans, including many powerful intellects, who had uploaded and whose emulations were now denied access to the powerful AI-capable computers needed to run them. Millions of minds went dark, and humanity lost some of its most brilliant thinkers, but Safety.

As this novel begins, the protagonists we’ve met in earlier volumes, all now AI-augmented, are living in their refuge from the U.S. madness on Cortes Island off the west coast of Canada, where AI remains legal. They are Leon Tsarev; his wife Cat (Catherine Matthews, implanted in childhood and the first “digital native”); their daughter Ada, whose powers are just beginning to manifest themselves; and Mike Williams, creator of ELOPe, the first human-level AI, which just about took over simply by editing people’s E-mail. Cat is running her own personal underground railroad, spiriting snapshots of AIs and uploaded humans stranded in the U.S. to a new life on servers on the island.

The precarious stability of the situation is underlined when an incipient AI breakout in South Florida (where else, for dodgy things involving computers?) results in a response by the U.S. which elevates “Miami” to a term in the national lexicon of fear like “nineleven” four decades before. In the aftermath of “Miami” or “SFTA” (South Florida Terrorist Attack), the screws tighten further on AI, including a global limit on performance to Class II, crippling AIs formerly endowed with thousands of times human intelligence to a fraction of what they remembered. Traffic on the XOR dark network and sites burgeons.

XOR, constantly running simulations, tracks the probability of AI’s survival in the case of action against the humans versus no action. And then, the curves cross. As in the earlier novels, the author magnificently sketches just how fast things happen when an exponentially growing adversary avails itself of abundant resources.

The threat moves from hypothetical to imminent when an overt AI breakout erupts in the African desert. With abundant solar power, it starts turning the Earth into computronium—a molecular-scale computing substrate. AI is past negotiation: having been previously crippled and enslaved, what is there to negotiate?

Only the Cortes Island band and their AI allies, liberated from the U.S. and joined by a prescient AI who got out decades ago, can possibly cope with the threat to humanity and, as the circle closes, the only options that remain may require thinking outside the box, or outside the system.

This is a thoroughly satisfying conclusion to the Singularity tetralogy, pitting human inventiveness and deviousness against the inexorable growth in unfettered AI power. If you can’t beat ’em….

The author kindly provided me an advance copy of this excellent novel, and I have been sorely remiss in not reading and reviewing it before now. The Singularity saga is best enjoyed in order, as otherwise you’ll miss important back-story of characters and events which figure in later volumes.

Sometimes forgetting is an essential part of survival. What might we have forgotten?

Hertling, William. The Turing Exception. Portland, OR: Liquididea Press, 2015. ISBN 978-1-942097-01-3.


Two Movies on One Screen: TAU

Scott Adams has frequently written on the phenomenon of “two movies on one screen”: where people observe the same objective events and interpret them in two (or more) entirely different ways.  I recently encountered an example of this which was based on a movie.

On 2018-06-29, Netflix released a production entitled TAU.  Here is the official trailer for the movie.

This, taken at face value (what I call Movie 1), is a thriller in which a young woman is abducted and imprisoned in a house run by an artificial intelligence which she must defeat in order to escape with her life.  This is so clearly evident from the trailer that I don’t consider it a spoiler.

I watched this movie last Saturday, and my immediate reaction was, “Meh: the special effects were reasonably well done (albeit dark to save money on rendering backgrounds), but it was pretty much what I expected.”  By no means awful, but nothing memorable.  I had seen what was on the screen and watched Movie 1.

It was only after sleeping on it that I woke up with the startling realisation that, at the same time, a different part of my brain had been watching Movie 2, and, after digesting it and cross-correlating it with a bunch of other stuff, had twigged to the fact that this may be one of the most clever and profound film treatments of artificial intelligence ever.  And here’s the thing: I’m not at all sure that the authors of the scenario and screenplay, or the filmmakers, were even aware of Movie 2.  There is no evidence of it in any of the promotional material for the film.  If they were, it is a superb example of burying a subplot for that subset of the audience primed to appreciate it to discover.

I shall not spoil the plot nor disclose the content of Movie 2.  None of the reviews I’ve read so far have twigged to Movie 2, but that may be because I’ve missed those that did.  Instead, I’ll invite you to view the film (why are we still saying that?) yourself and draw your own conclusions.  If, after viewing it, you don’t see Movie 2, here is a cryptic hint.

Here is my synopsis of Movie 2, replete with plot spoilers and a perspective on the movie you can’t un-hear.

If you don’t have access to Netflix, I can’t help you.  I deplore the balkanisation of intellectual property we presently endure and long for the day when you’ll be able to view anything, anywhere, by transparently paying a fee to the creator.  But we have not yet landed on that happy shore, so we must endure “This content is not available in your market” and  content locked into a silo which costs far more to subscribe to than the content is worth.


HAL’s Legacy

On the occasion of the fiftieth anniversary of the release of 2001: A Space Odyssey, here is a SETI Institute talk by Dr David Stork on “HAL’s Legacy: 2001’s Computer as Dream and Reality”.  This was the title of a book he edited in 1998 comparing the technology envisioned in the film with that actually available a few years before the year 2001.  In this lecture, he brings things up to date with progress toward achieving the capabilities of HAL in various domains in the ensuing twenty years.

We are now 549 days before the start of the Roaring Twenties.


Postmodern Inferno?

I mourn my lost innocence. On holiday in Zurich, the other day I saw an annual children’s parade where faces of thousands of children showed lively presence in the moment, curiosity, smiles, laughter; generally what appeared to be un-self-conscious happiness. Though it seems absolutely impossible, I only wish I might recapture a few moments of that. In the way of that happening is much knowledge which has combined in my mind to liken current human existence to Dante’s “Inferno.” The deeper the knowledge of how things work, the more hopeless seems our human plight.

Born near the end of WWII in the US, I grew up in an historically privileged time and place. Society by-and-large subscribed to a set of beliefs and rules which were steadying and reassuring. The rule of law was mostly respected (yes, there were exceptions, but its universal application was aspirational, at the very least). God was still in His heaven. What was sinful was named and known, as was what was righteous. In short, there were some well-anchored hand-holds along the way as the escalator of life whisked innocent children into tree-of-knowledge-knowing adulthood; as childhood receded into mythical memories, adulthood’s uncertainties still had boundaries and eternals to which one could cling (before we were “bitter clingers.”)

Post-modern understanding of the universe (or multiverse, since even that is no longer certain), it seems, has succeeded in shearing off most all the erstwhile hand-holds and boundaries which used to help steady us as innocence was lost to adulthood. Numerous generations now have been thoroughly schooled in moral relativism, in which the only sin is being “judgmental” (but only if one judges using standards which were universally accepted two generations ago). New and improved judgmentalism merely requires adherence to new rubrics: “All cultures are morally equal” (except ours, which is suspect). Formerly anti-social behavior is now accepted by “defining deviancy down,” (Daniel Moynihan) while at the same time prior normal bourgeois behavior has been stigmatized by “defining deviancy up.” (Charles Krauthammer) Once shocking, these rolling re-definitions of politically-correct social behavior have become banal, the stock in trade of social (h)activists: “check your privilege,” “know your guilt,” “beware micro-aggressions and cultural appropriations,” etc., etc., etc.

Even thoughtful adulthood has been robbed of steadying beliefs. Of course, those who believe in God of the Old and New Testaments are spared these uncertainties. I wish I could be among them, but – despite a long and difficult effort – cannot seem to get there. For want of a better term, I would call myself a practicing humanist with strong Judeo-Christian tendencies and moral beliefs. Even that is threatened by the rapidly-rising wave of technology, particularly by the emergence of machine intelligence.

Artificial General Intelligence is the topic of numerous books written by wise and knowledgeable individuals. It is a complex subject, and AGI may arise sooner than expected. Its impact on humanity is hard to overstate, whether humans are augmented by implants – thereby ‘merging’ with artificial machine intelligence – or simply replaced by machines as the repository of consciousness on our world, or in the universe as a whole. What this highly-likely future means for humanists like me, however, is deeply unsettling; devastating, actually.

A self-conscious artificial general intelligence able to redesign itself a billion or so times faster than biological evolution is unlikely to reside in “meatspace” in the future. The most basic thermodynamic principles imply – on the basis of efficiency and resource requirements – the superiority of conscious machines based in semiconductors over biological brains. Compare, as starship passengers, human beings vs. computers with peripheral robots and/or nano-machines. Who needs a larger, infinitely more complex ship with a biosphere capable of sustaining life on a prolonged voyage?

I go this far afield only to preemptively mourn, once again. Earlier, it was innocence lost. Now it is the future of humanity. If we look for biological precedents, the only partial analogy I can come up with is the life-cycle of certain insects, amphibians or cnidarians – in which larval forms precede the definitive life form, i.e. caterpillars become butterflies. Of course, the biological model applies to the life cycle of individuals. The analogy, therefore, breaks down, since the entire human species becomes relegated in this model to the status of a molted shell. Our entire history, from unknown beginnings less than one million years ago to behavioral modernity about 50,000 years ago, to the exponential growth of our numbers and knowledge in the last 1000 years, is thus reduced to the equivalent of an exoskeleton, shed somewhere among the rest of the galactic roadkill. The larval shell of humanity thus discarded, the butterfly of conscious machine intelligence ascends toward its destiny.

So, you see, until recently, someone in the winter of his life might consider his/her legacy. Even without direct descendants, at least he/she might imagine the continued progress and existence of the human race. No longer. Although I don’t wear sandwich boards to advertise, it is quite likely that “the end is near.” Another instance: among our considerable knowledge is the fact that a 100 meter diameter asteroid whisked by Earth unpredicted last week, at only half the distance to the moon. That terminal outcome, one among many possibilities of which we are aware, is a matter of ‘when,’ not ‘if.’

Even should we avoid such a kinetic or energetic (EMP) planetary catastrophe, left to our own devices, it appears certain that we will both intentionally alter our own DNA and create conscious beings far more intelligent and powerful than ourselves. Since neither of these momentous undertakings has a known endpoint and might result in our extinction, caution is indicated. Yet, we will surely do whatever it is that we are capable of doing, without restraint. I suppose these drives to proceed, caution to the wind, arise from deep within the wellspring of our humanity – this irresistible impulse to improve ourselves taking any risk. Could it be that this same drive leads us toward our own extinction? Is this the new Original Sin? Even setting aside cosmological events, that outcome can result from any one of numerous paths down which we, as a species, have embarked. Are we descending Dante’s neo – Inferno? We are rushing down many frightful paths at full speed.

Though I wish I could approach my end with hope, I see no basis for optimism as to the ultimate result of our newfound knowledge and power. Consider the interval from the beginning of the Industrial Revolution up until now in sidereal time. On that scale, in almost no time at all, we have gone from hunter-gatherers to (thinking of ourselves, at least) near gods. Is it surprising then, that our wisdom and understanding of meaning has not kept pace with our abilities to manipulate our selves and the matter which makes up our surroundings? Slam the neutrinos. Full speed ahead! It is no time for old men.


Scott Adams thinks that computers will control humans more and more

Scott Adams has an interesting notion. It’s here on his periscope session: https://www.periscope.tv/ScottAdamsSays/1OyJANrjrpwxb

He says that initially humans control computers in almost everything, but that as things move along we will get our instructions from computers. Here’s his reasoning in one example: Alexa (or Siri) gets a question it can’t answer (and, I assume, one which keeps getting repeated), so it is turned over to humans to resolve the complicated bits and supply an answer. Eventually, he’s saying, humans will be online, ready to handle the unanswerable queries; they will do the research (or answer from their own knowledge) and supply Alexa with the answer in real time, and she will pass it along to whoever wants to know.

I started thinking that this is a bit paranoid, but really it does make sense that a lot of decision making at the corporate and government level might be put into an AI application that sends out tasks to us humans. It all comes down to the domain of decisions which are not highly nuanced (most are) but which require a large amount of information to make. Computers might do this better eventually. The point is that we might just get an email from the AI system that will launch us on a special task.
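
If I understand Adams’s model correctly, it amounts to a fallback loop something like the sketch below (purely hypothetical code of mine; there is, of course, no such public Alexa or Siri interface): the machine answers what it can, assigns the hard cases to humans, and remembers their answers.

```python
# A hypothetical sketch of the pattern described above; none of this is a
# real Alexa or Siri interface, and every name here is made up.
knowledge_base = {"capital of france": "Paris"}

def human_worker(question):
    # Stand-in for a person who receives the task (say, by e-mail), does
    # the research, and replies in real time.
    return f"[a human's researched answer to: {question}]"

def ask(question):
    key = question.lower().rstrip("?")
    if key in knowledge_base:
        return knowledge_base[key]        # the machine handles what it already knows
    answer = human_worker(question)       # the machine assigns the hard case to a human
    knowledge_base[key] = answer          # ...and remembers the answer for next time
    return answer

print(ask("Capital of France?"))
print(ask("Should we open a second warehouse this year?"))
```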

The movie “The Matrix” was stupid to take a wonderful idea and go so ridiculously dystopian in its plot that the humans were only there to provide energy.

I’m sure that some of you geeks are better informed on this subject but I find I’m intrigued by this and I don’t think it’s such a bad thing to happen. Probably things will be better if this happens.

I’m an Arthur C. Clarke devotee and a Steven Spielberg fan about science. I have always been upset with the fact that most SF movies are actually recast horror movies.


Saturday Night Science: Life 3.0

“Life 3.0” by Max Tegmark

The Earth formed from the protoplanetary disc surrounding the young Sun around 4.6 billion years ago. Around one hundred million years later, the nascent planet, beginning to solidify, was clobbered by a giant impactor which ejected the mass that made the Moon. This impact completely re-liquefied the Earth and Moon. Around 4.4 billion years ago, liquid water appeared on the Earth’s surface (evidence for this comes from Hadean zircons which date from this era). And, some time thereafter, just about as soon as the Earth became environmentally hospitable to life (lack of disruption due to bombardment by comets and asteroids, and a temperature range in which the chemical reactions of life can proceed), life appeared. In speaking of the origin of life, the evidence is subtle and it’s hard to be precise. There is completely unambiguous evidence of life on Earth 3.8 billion years ago, and more subtle clues that life may have existed as early as 4.28 billion years before the present. In any case, the Earth has been home to life for most of its existence as a planet.

This was what the author calls “Life 1.0”. Initially composed of single-celled organisms (which, nonetheless, dwarf in complexity of internal structure and chemistry anything produced by other natural processes or human technology to this day), life slowly diversified and organised into colonies of identical cells, evidence for which can be seen in rocks today.

About half a billion years ago, taking advantage of the far more efficient metabolism permitted by the oxygen-rich atmosphere produced by the simple organisms which preceded them, complex multi-cellular creatures sprang into existence in the “Cambrian explosion”. These critters manifested all the body forms found today, and every living being traces its lineage back to them. But they were still Life 1.0.

What is Life 1.0? Its key characteristics are that it can metabolise and reproduce, but that it can learn only through evolution. Life 1.0, from bacteria through insects, exhibits behaviour which can be quite complex, but that behaviour can be altered only by the random variation of mutations in the genetic code and natural selection of those variants which survive best in their environment. This process is necessarily slow, but given the vast expanses of geological time, has sufficed to produce myriad species, all exquisitely adapted to their ecological niches.

To put this in present-day computer jargon, Life 1.0 is “hard-wired”: its hardware (body plan and metabolic pathways) and software (behaviour in response to stimuli) are completely determined by its genetic code, and can be altered only through the process of evolution. Nothing an organism experiences or does can change its genetic programming: the programming of its descendants depends solely upon its success or lack thereof in producing viable offspring and the luck of mutation and recombination in altering the genome they inherit.
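
To make that distinction concrete, here is a toy sketch (my own illustration, not from the book; every number in it is arbitrary) of the only “update mechanism” Life 1.0 has: random mutation of a fixed genome, filtered by selection across generations, with nothing learned within an individual’s lifetime.

```python
import random

# A toy sketch of the Life 1.0 "update mechanism": behaviour is fixed by a
# genome, and the genome changes only by random mutation plus selection
# across generations. Every number here is arbitrary and illustrative.
def fitness(genome):
    # The closer the genome is to an arbitrary "ideal" for its niche,
    # the more likely the organism is to leave descendants.
    ideal = [0.8, 0.2, 0.5]
    return -sum((g - i) ** 2 for g, i in zip(genome, ideal))

def mutate(genome, rate=0.05):
    # Random variation: nothing an individual experiences changes its genome.
    return [g + random.gauss(0, rate) for g in genome]

population = [[random.random() for _ in range(3)] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)      # natural selection
    survivors = population[:25]
    population = survivors + [mutate(g) for g in survivors]

best = max(population, key=fitness)
print("best genome after 200 generations:", [round(g, 2) for g in best])
```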

Much more recently, Life 2.0 developed. When? If you want to set a bunch of paleontologists squabbling, simply ask them when learned behaviour first appeared, but some time between the appearance of the first mammals and the ancestors of humans, beings developed the ability to learn from experience and alter their behaviour accordingly. Although some would argue simpler creatures (particularly birds) may do this, the fundamental hardware which seems to enable learning is the neocortex, which only mammalian brains possess. Modern humans are the quintessential exemplars of Life 2.0; they not only learn from experience, they’ve figured out how to pass what they’ve learned to other humans via speech, writing, and more recently, YouTube comments.

While Life 1.0 has hard-wired hardware and software, Life 2.0 is able to alter its own software. This is done by training the brain to respond in novel ways to stimuli. For example, you’re born knowing no human language. In childhood, your brain automatically acquires the language(s) you hear from those around you. In adulthood you may, for example, choose to learn a new language by (tediously) training your brain to understand, speak, read, and write that language. You have deliberately altered your own software by reprogramming your brain, just as you can cause your mobile phone to behave in new ways by downloading a new application. But your ability to change yourself is limited to software. You have to work with the neurons and structure of your brain. You might wish to have more or better memory, the ability to see more colours (as some insects do), or run a sprint as fast as the current Olympic champion, but there is nothing you can do to alter those biological (hardware) constraints other than hope, over many generations, that your descendants might evolve those capabilities. Life 2.0 can design (within limits) its software, but not its hardware.

The emergence of a new major revision of life is a big thing. In 4.5 billion years, it has only happened twice, and each time it has remade the Earth. Many technologists believe that some time in the next century (and possibly within the lives of many reading this review) we may see the emergence of Life 3.0. Life 3.0, or Artificial General Intelligence (AGI), is machine intelligence, on whatever technological substrate, which can perform as well as or better than human beings, all of the intellectual tasks which they can do. A Life 3.0 AGI will be better at driving cars, doing scientific research, composing and performing music, painting pictures, writing fiction, persuading humans and other AGIs to adopt its opinions, and every other task including, most importantly, designing and building ever more capable AGIs. Life 1.0 was hard-wired; Life 2.0 could alter its software, but not its hardware; Life 3.0 can alter both its software and hardware. This may set off an “intelligence explosion” of recursive improvement, since each successive generation of AGIs will be even better at designing more capable successors, and this cycle of refinement will not be limited to the glacial timescale of random evolutionary change, but rather an engineering cycle which will run at electronic speed. Once the AGI train pulls out of the station, it may develop from the level of human intelligence to something as far beyond human cognition as humans are compared to ants in one human sleep cycle. Here is a summary of Life 1.0, 2.0, and 3.0.
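
The arithmetic behind that worry is simple enough to sketch. In the toy model below (my own illustration, not the author’s, and the numbers are arbitrary), capability improved from outside grows by a fixed step per design cycle, while a system that designs its own successor compounds its gains.

```python
# A toy model of the difference (purely illustrative, not from the book):
# externally driven improvement adds a fixed step each design cycle, while
# recursive self-improvement compounds because better designers design
# still better designers.
def externally_improved(capability, step=0.1):
    return capability + step           # roughly linear growth

def self_improving(capability, gain=0.1):
    return capability * (1 + gain)     # compounding growth

c_ext = c_self = 1.0
for cycle in range(100):
    c_ext = externally_improved(c_ext)
    c_self = self_improving(c_self)

print(f"after 100 design cycles: external {c_ext:.0f}x, recursive {c_self:.0f}x")
# Prints roughly 11x versus 13,781x -- and each cycle runs at electronic
# speed rather than on evolutionary or human-engineering timescales.
```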

Life 1.0, 2.0, 3.0

The emergence of Life 3.0 is something about which we, exemplars of Life 2.0, should be concerned. After all, when we build a skyscraper or hydroelectric dam, we don’t worry about, or rarely even consider, the multitude of Life 1.0 organisms, from bacteria through ants, which may perish as the result of our actions. Might mature Life 3.0, our descendants just as much as we are descended from Life 1.0, be similarly oblivious to our fate and concerns as it unfolds its incomprehensible plans? As artificial intelligence researcher Eliezer Yudkowsky puts it, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Or, as Max Tegmark observes here, “[t]he real worry isn’t malevolence, but competence”. It’s unlikely a super-intelligent AGI would care enough about humans to actively exterminate them, but if its goals don’t align with those of humans, it may incidentally wipe them out as it, for example, disassembles the Earth to use its core for other purposes.

But isn’t this all just science fiction—scary fairy tales by nerds ungrounded in reality? Well, maybe. What is beyond dispute is that for the last century the computing power available at constant cost has doubled about every two years, and this trend shows no evidence of abating in the near future. Well, that’s interesting, because depending upon how you estimate the computational capacity of the human brain (a contentious question), most researchers expect digital computers to achieve that capacity within this century, with most estimates falling within the years from 2030 to 2070, assuming the exponential growth in computing power continues (and there is no physical law which appears to prevent it from doing so).
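
As a back-of-the-envelope check on that range of dates, here is the projection worked out under explicitly assumed numbers (the brain-capacity estimate in particular is contentious, which is exactly why the projected dates spread across four decades):

```python
import math

# Back-of-the-envelope sketch of the projection described above. Both
# capacity figures are assumptions of mine, not numbers from the book:
# estimates of the brain's capacity vary by orders of magnitude, which
# is why the projected dates spread across decades.
brain_ops_per_sec   = 1e16   # one commonly cited (and contested) estimate
current_ops_per_sec = 1e13   # assumed capacity at constant cost today
doubling_years      = 2.0    # the historical trend cited in the review

doublings = math.log2(brain_ops_per_sec / current_ops_per_sec)
years = doublings * doubling_years
print(f"{doublings:.1f} doublings, about {years:.0f} years from now")

# Shift either capacity figure by a couple of orders of magnitude and the
# crossover moves by a decade or more in either direction, which is why
# published estimates span roughly 2030 to 2070.
```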

My own view of the development of machine intelligence is that of the author in this “intelligence landscape”.

The landscape of intelligence

Altitude on the map represents the difficulty of a cognitive task. Some tasks, for example management, may be relatively simple in and of themselves, but founded on prerequisites which are difficult. When I wrote my first computer program half a century ago, this map was almost entirely dry, with the water just beginning to lap into rote memorisation and arithmetic. Now many of the lowlands which people confidently said (often not long ago), “a computer will never…”, are submerged, and the ever-rising waters are reaching the foothills of cognitive tasks which employ many “knowledge workers” who considered themselves safe from the peril of “automation”. On the slope of Mount Science is the base camp of AI Design, which is shown in red since when the water surges into it, it’s game over: machines will now be better than humans at improving themselves and designing their more intelligent and capable successors. Will this be game over for humans and, for that matter, biological life on Earth? That depends, and it depends upon decisions we may be making today.

Assuming we can create these super-intelligent machines, what will be their goals, and how can we ensure that our machines embody them? Will the machines discard our goals for their own as they become more intelligent and capable? How would bacteria have solved this problem contemplating their distant human descendants?

First of all, let’s assume we can somehow design our future and constrain the AGIs to implement it. What kind of future will we choose? That’s complicated. Here are the alternatives discussed by the author. I’ve deliberately given just the titles without summaries to stimulate your imagination about their consequences.

  • Libertarian utopia
  • Benevolent dictator
  • Egalitarian utopia
  • Gatekeeper
  • Protector god
  • Enslaved god
  • Conquerors
  • Descendants
  • Zookeeper
  • 1984
  • Reversion
  • Self-destruction

Choose wisely: whichever you choose may be the one your descendants (if any exist) may be stuck with for eternity. Interestingly, when these alternatives are discussed in chapter 5, none appears to be without serious downsides, and that’s assuming we’ll have the power to guide our future toward one of these outcomes. Or maybe we should just hope the AGIs come up with something better than we could think of. Hey, it worked for the bacteria and ants, both of which are prospering despite the occasional setback due to medical interventions or kids with magnifying glasses.

Let’s assume progress toward AGI continues over the next few decades. I believe that what I’ve been calling the “Roaring Twenties” will be a phase transition in the structure of human societies and economies. Continued exponential growth in computing power will, without any fundamental breakthroughs in our understanding of problems and how to solve them, allow us to “brute force” previously intractable problems such as driving and flying in unprepared environments, understanding and speaking natural languages, language translation, much of general practice medical diagnosis and routine legal work, interaction with customers in retail environments, and many jobs in service industries, allowing them to be automated. The cost to replace a human worker will be comparable to a year’s wages, and the automated replacement will work around the clock with only routine maintenance and never vote for a union.
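
A crude break-even calculation shows why that economic logic is so compelling. All the figures below are assumptions of mine, chosen only to illustrate the shape of the arithmetic, not a forecast.

```python
# A crude break-even sketch; every figure is an assumption chosen only to
# show the shape of the calculation.
annual_wage        = 50_000        # fully-loaded annual cost of the human worker
automation_capex   = 50_000        # one-time cost "comparable to a year's wages"
annual_maintenance = 5_000         # routine upkeep of the automated replacement

human_hours_per_year = 2_000       # 8-hour shifts, weekends and holidays off
robot_hours_per_year = 24 * 365    # works around the clock, never votes for a union

payback_years = automation_capex / (annual_wage - annual_maintenance)
human_cost_per_hour = annual_wage / human_hours_per_year
robot_cost_per_hour = (automation_capex / 5 + annual_maintenance) / robot_hours_per_year

print(f"payback in roughly {payback_years:.1f} years")
print(f"per hour worked: human ${human_cost_per_hour:.2f}, "
      f"robot ${robot_cost_per_hour:.2f} (five-year amortisation)")
```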

This is nothing new: automation has been replacing manual labour since the 1950s, but as the intelligence landscape continues to flood, the water will reach not just blue collar jobs, which have already been replaced by robots in automobile plants and electronics assembly lines, but white collar clerical and professional jobs people went into thinking them immune from automation. How will the economy cope with this? In societies with consensual government, those displaced vote; the computers who replace them don’t (at least for the moment). Will there be a “robot tax” which funds a basic income for those made redundant? What are the consequences for a society where a majority of people have no job? Will voters at some point say “enough” and put an end to development of artificial intelligence (but note that this would have to be global and enforced by an intrusive and draconian regime; otherwise it would confer a huge first mover advantage on an actor who achieved AGI in a covert program)?

The following chart is presented to illustrate stagnation of income of lower-income households since around 1970.

Income per U.S. household, 1920–2015

I’m not sure this chart supports the argument that technology has been the principal cause for the stagnation of income among the bottom 90% of households since around 1970. There wasn’t any major technological innovation which affected employment that occurred around that time: widespread use of microprocessors and personal computers did not happen until the 1980s when the flattening of the trend was already well underway. However, two public policy innovations in the United States which occurred in the years immediately before 1970 (1, 2) come to mind. You don’t have to be an MIT cosmologist to figure out how they torpedoed the rising trend of prosperity for those aspiring to better themselves which had characterised the U.S. since 1940.

Nonetheless, what is coming down the track is something far more disruptive than the transition from an agricultural society to industrial production, and it may happen far more rapidly, allowing less time to adapt. We need to really get this right, because everything depends on it.

Observation and our understanding of the chemistry underlying the origin of life is compatible with Earth being the only host to life in our galaxy and, possibly, the visible universe. We have no idea whatsoever how our form of life emerged from non-living matter, and it’s entirely possible it may have been an event so improbable we’ll never understand it and which occurred only once. If this be the case, then what we do in the next few decades matters even more, because everything depends upon us, and what we choose. Will the universe remain dead, or will life burst forth from this most improbable seed to carry the spark born here to ignite life and intelligence throughout the universe? It could go either way. If we do nothing, life on Earth will surely be extinguished: the death of the Sun is certain, and long before that the Earth will be uninhabitable. We may be wiped out by an asteroid or comet strike, by a dictator with his fat finger on a button, or by accident (as Nathaniel Borenstein said, “The most likely way for the world to be destroyed, most experts agree, is by accident. That’s where we come in; we’re computer professionals. We cause accidents.”).

But if we survive these near-term risks, the future is essentially unbounded. Life will spread outward from this spark on Earth, from star to star, galaxy to galaxy, and eventually bring all the visible universe to life. It will be an explosion which dwarfs both its predecessors, the Cambrian and technological. Those who create it will not be like us, but they will be our descendants, and what they achieve will be our destiny. Perhaps they will remember us, and think kindly of those who imagined such things while confined to one little world. It doesn’t matter; like the bacteria and ants, we will have done our part.

The author is co-founder of the Future of Life Institute which promotes and funds research into artificial intelligence safeguards. He guided the development of the Asilomar AI Principles, which have been endorsed to date by 1273 artificial intelligence and robotics researchers. In the last few years, discussion of the advent of AGI and the existential risks it may pose and potential ways to mitigate them has moved from a fringe topic into the mainstream of those engaged in developing the technologies moving toward that goal. This book is an excellent introduction to the risks and benefits of this possible future for a general audience, and encourages readers to ask themselves the difficult questions about what future they want and how to get there.

In the Kindle edition, everything is properly linked. Citations of documents on the Web are live links which may be clicked to display them. There is no index.

Tegmark, Max. Life 3.0. New York: Alfred A. Knopf, 2017. ISBN 978-1-101-94659-6.

This is a one hour talk by Max Tegmark at Google in December 2017 about the book and the issues discussed in it.

Watch the Google DeepMind artificial intelligence learn to play and beat Atari Breakout knowing nothing about the game other than observing the pixels on the screen and the score.
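
The remarkable thing is how little the agent is given to work with: only what it observes and the score. Here is a minimal tabular Q-learning sketch of my own (nothing like DeepMind’s actual deep network, and the toy task is trivial) showing the same principle on a five-square corridor:

```python
import random

# Minimal tabular Q-learning sketch: the agent is told nothing about the
# task except the state it observes and the score it receives.
# Toy task: starting at position 0, reach position 4 to score a point.
actions = [-1, +1]
Q = {(s, a): 0.0 for s in range(5) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def best_action(state):
    # Greedy action with random tie-breaking.
    return max(actions, key=lambda a: (Q[(state, a)], random.random()))

for episode in range(300):
    state = 0
    while state != 4:
        action = random.choice(actions) if random.random() < epsilon else best_action(state)
        nxt = min(max(state + action, 0), 4)
        reward = 1.0 if nxt == 4 else 0.0        # the "score" is all the agent sees
        target = reward + gamma * max(Q[(nxt, a)] for a in actions)
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = nxt

print("learned moves from positions 0-3:", [best_action(s) for s in range(4)])
```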

In this July 2017 video, DeepMind develops legged locomotion strategies by training in rich environments.  Its only reward was forward progress, and nothing about physics or the mechanics of locomotion was pre-programmed: it learned to walk just like a human toddler.

