Several years ago, my son suggested I use Avast’s free anti-virus software and I have been doing so. From time to time (not so often as to offend me), Avast offers me their paid VPN service, about which, until an hour ago, I knew little. In my hour’s research, I see that a major reason people like it is to gain access to sites and services prohibited due to the geographic location of their IP address. That is not my interest.
Rather, I am interested in privacy and security. There are very knowledgeable Ratburghers; I wonder if any of you would be willing to offer advice as to the advisability of subscribing for my purposes and the likely efficacy of such a subscription. Any opinion as to the relative value of Avast? Using their freeware has disposed me favorably toward them, but I don’t want to decide based upon that bias (it is good marketing). There are many free and paid services and I find it difficult to choose, although empirically – across many areas – I tend to get what I pay for.
Given a list of cities and their locations (usually specified as Cartesian co-ordinates on a plane), what is the shortest itinerary which will visit every city exactly once and return to the point of origin?
Easy to ask, but devilishly difficult to answer… The obvious way to solve the travelling salesman problem would be to write down all of the possible sequences in which the cities could be visited, compute the distance of each path, and then choose the smallest. But the number of possible itineraries for visiting n cities grows as the factorial of n, which is written, appropriately, as “n!”. The factorial of a positive integer is the product of that number and all smaller numbers down to one. Hence 2!=2, 3!=6, 6!=720, and 10!=3,628,800. As you can see, these numbers grow very rapidly, so as you increase the number of cities, the number of paths you have to compare blows up in a combinatorial explosion which makes finding the optimal path by brute force computation a hopeless undertaking.
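To make the brute-force approach concrete, here is a minimal Python sketch (the helper names are mine, not taken from the page’s own code) which enumerates every possible tour and keeps the shortest. Fixing the starting city still leaves a factorial number of orderings to check.

```python
import itertools
import math

def tour_length(cities, order):
    """Total length of the closed tour visiting the cities in the given order."""
    n = len(order)
    return sum(
        math.dist(cities[order[i]], cities[order[(i + 1) % n]])
        for i in range(n)
    )

def brute_force_tsp(cities):
    """Try every ordering (fixing city 0 as the start) and keep the shortest."""
    n = len(cities)
    best_order, best_length = None, float("inf")
    for perm in itertools.permutations(range(1, n)):
        order = (0,) + perm
        length = tour_length(cities, order)
        if length < best_length:
            best_order, best_length = order, length
    return best_order, best_length

# Fine for eight or ten cities; at thirty there are roughly 2.65e32 orderings.
```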
“But”, you ask, “computers are getting faster every year. Why not just be patient and wait a few years?” Neither you, nor I, nor the universe has sufficient patience. The box at the top of this page contains thirty cities represented by red balls placed at random in the grey square, connected by a path drawn in blue lines in the order in which they were placed. Every time you press the “Place” button, thirty new randomly-placed cities are generated; you can change the number by setting the box to the right of the button. But let’s stick with thirty cities for the nonce.
The number of possible paths along which we can visit the thirty cities is equal to the number of permutations of a set of thirty distinct members, which is equal to the factorial of the number of members, or 30!, which works out to about 2.65×10³². This is a breathtakingly large number.
Now, let’s assume you had a supercomputer which was able to compute the value of a billion (10⁹) paths per second. Chugging away at this task around the clock, without a day of rest, it would take 2.65×10²³ seconds to get through the list. How long is that? About 8.4 quadrillion (10¹⁵) years, or about 600,000 times the present age of the universe. And if you modestly increased the number of cities to fifty? Prepare to wait something like 10³⁸ times the age of the universe for the answer.
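If you’d like to verify the arithmetic, a few lines of Python reproduce the figures above, assuming the hypothetical billion-paths-per-second machine:

```python
import math

paths = math.factorial(30)                # ~2.65e32 possible orderings
seconds = paths / 1e9                     # ~2.65e23 seconds of computing
years = seconds / (365.25 * 24 * 3600)    # ~8.4e15 years
age_of_universe = 1.38e10                 # years
print(f"{years:.2e} years = {years / age_of_universe:.0f} times the age of the universe")
```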
Now scroll back up to the top of the page and click the “Solve” button. Almost instantaneously, you’ll see a near-optimal path to tour the thirty cities with the least distance of travel. Try clicking “Place” and then “Solve” several times to create and solve new problems, then increase the number of cities to 50 and then 100 and try solving those problems. In each case, the solution appears in a fraction of a second. Now, these solutions are not guaranteed to be absolutely optimal; they may be a percent or two longer than the absolute best path (if you click “Solve” multiple times, you may see several different solutions, all of which are close in total path length). They’re not perfect, but then you don’t have to wait huge multiples of the age of the universe for the result. How did we do it?
This page attacks the travelling salesman problem through a technique of combinatorial optimisation called simulated annealing. The name comes from the process of annealing a material such as metal or glass: raising it to a high temperature and then gradually cooling it allows local regions of order to grow outward, increasing ductility and reducing stresses in the material. By analogy, the algorithm randomly perturbs the original path to a decreasing extent according to a gradually decreasing logical “temperature”.
In simulated annealing, the equivalent of temperature is a measure of the randomness with which changes are made to the path as we seek to minimise its length. When the temperature is high, larger random changes are made, avoiding the risk of becoming trapped in a local minimum (of which there are usually many in a typical travelling salesman problem), then homing in on a near-optimal minimum as the temperature falls. The temperature falls in a series of steps on an exponential decay schedule where, on each step, the temperature is 0.9 times that of the previous step.
The process of annealing starts with a path which simply lists all of the cities in the order their positions were randomly selected (this is the path you’ll see after pressing the “Place” button). On each temperature step, a number of random transformations of the path are made. First of all, a segment of the path is selected, with its start and end cities chosen at random. Then, a software coin is flipped to decide which kind of transformation to try: reverse or transport.
If reverse comes up, an alternative path is generated in which the cities in the chosen segment are visited in reverse order. If transport, the segment is clipped out of its current position in the path and spliced in at a randomly chosen point in the remainder of the path. The length of the modified path is then calculated and compared to that of the path before modification, producing a quantity called the cost difference. If this is negative, the modified path is shorter than the original and always replaces it. If there is an increase in cost, however, the exponential of its negative magnitude divided by the current temperature is compared to a uniformly distributed random number between 0 and 1 and, if greater, the modified path is used even though it increased the cost. Note that initially, when the temperature is high, there will be a greater probability of accepting such changes, but that as the temperature falls, only smaller increases in cost will be accepted. The total number of changes tested at each temperature level is arbitrarily set to 100 times the number of cities in the path, and once the number of accepted changes which decrease the path length reaches ten times the number of cities, the temperature is decreased and the search continued. If, after trying all of the potential changes at a given temperature level, no changes are found which reduce the path length, the solution is considered “good enough” and the resulting path is displayed.
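The whole procedure fits comfortably in a short program. Below is a minimal Python sketch of the scheme just described: segment reversal and transport moves, the Metropolis acceptance rule, 100×n trial changes per temperature level, early advance after 10×n improvements, and a temperature that falls by a factor of 0.9 per step. The function names and the choice of starting temperature are my own; this is not the page’s actual code.

```python
import math
import random

def tour_length(cities, order):
    """Length of the closed tour that visits the cities in the given order."""
    n = len(order)
    return sum(
        math.dist(cities[order[i]], cities[order[(i + 1) % n]])
        for i in range(n)
    )

def anneal(cities, t_factor=0.9, seed=None):
    rng = random.Random(seed)
    n = len(cities)
    order = list(range(n))              # initial path: order of placement
    cost = tour_length(cities, order)
    temperature = cost / n              # arbitrary starting temperature (my choice)
    moves_per_level = 100 * n           # changes tried at each temperature level
    improve_limit = 10 * n              # cut a level short after this many improvements

    while True:
        improved = 0
        for _ in range(moves_per_level):
            # Choose a random segment of the path by its start and end cities.
            i, j = sorted(rng.sample(range(n), 2))
            candidate = order[:]
            if rng.random() < 0.5:
                # Reverse: visit the segment's cities in the opposite order.
                candidate[i:j + 1] = reversed(candidate[i:j + 1])
            else:
                # Transport: clip the segment out and splice it in elsewhere.
                segment = candidate[i:j + 1]
                rest = candidate[:i] + candidate[j + 1:]
                k = rng.randrange(len(rest) + 1)
                candidate = rest[:k] + segment + rest[k:]
            delta = tour_length(cities, candidate) - cost   # the cost difference
            # Metropolis criterion: always keep improvements; keep a worse path
            # with probability exp(-delta / temperature).
            if delta < 0 or rng.random() < math.exp(-delta / temperature):
                order, cost = candidate, cost + delta
                if delta < 0:
                    improved += 1
                    if improved >= improve_limit:
                        break
        if improved == 0:               # no shortening found: good enough
            return order, cost
        temperature *= t_factor         # each step is 0.9 times the last

if __name__ == "__main__":
    rng = random.Random(1)
    cities = [(rng.random(), rng.random()) for _ in range(30)]
    order, cost = anneal(cities, seed=1)
    print(f"near-optimal tour length: {cost:.3f}")
```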
Watching it Happen
To watch the optimisation process as it unfolds, instead of pressing the “Solve” button, press the “Step” button to see the path evolve at each level of decreasing temperature. The “Animate” button will automatically show the path evolving at one second per temperature level. Check the “Trace solution” box to display the temperature, cost (path length), and number of changes made to the path at each step in the optimisation. After a solution is found, the chosen itinerary will be shown listing the cities in order, their co-ordinates, and the cost of the path from each city to the next (wrapping around at the bottom) and, if the path crosses the river (see below), an “R” to indicate that it does.
Instead of using the “Place” button to randomly place cities, you can place them manually by pressing “New” to clear the map and then clicking in the map to indicate the city locations. They will initially be connected by paths in the order you placed the cities. You can also add cities to maps created by the “Place” button by clicking in the map.
Minimise or Maximise?
The travelling salesman problem is usually formulated in terms of minimising the path length to visit all of the cities, but the process of simulated annealing works just as well with a goal of maximising the length of the itinerary. If you change the goal in the drop-down list from “Minimise” to “Maximise”, the cost function being optimised will be the negative of the path length, resulting in a search for the longest path. Try it, and see how the annealing process finds a star-like pattern that chooses the longest inter-city paths.
A River Runs through It
We can add another wrinkle to the cost function by adding a “river” that runs through the map from top to bottom halfway across it. If you set the “River cost” nonzero, the river will be drawn as a dark blue line, and any path from one city to another which crosses it is assessed a penalty given by the river cost as a percentage of the size of the map. If you set the river cost high, say to 50%, you’ll find a strong preference for paths which only cross the river twice, finding a near-minimum path length independently on each side of the river. (This may be more apparent if you place a large number of cities, say 100 or 250.)
You can also set the cost of crossing the river negative, which turns the travelling salesman into a peripatetic smuggler who profits from carrying goods between Freedonia and Grand Fenwick. Try placing 100 cities and setting the river cost to −25: the smuggler will settle on an efficient path on each side of the river, but prefer river crossings between cities close to the river where the benefit of the crossing is significant compared to the distance between them.
Finally, try setting the goal to Maximise path length, the river crossing cost to −100 (a benefit from crossing the river), and place 100 cities. When you solve, you’ll find the solution produces a star-like pattern on each side of the river, maximising the travel distance on each side while avoiding river crossings at all costs.
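Since the river penalty and the Minimise/Maximise goal only change the cost function the annealer evaluates, they slot into the sketch above without touching the move or acceptance logic. Here is one plausible way to write such a cost function; the vertical river at the middle of a unit-square map and the scaling of the percentage penalty are my assumptions, not the page’s exact formula.

```python
import math

def leg_cost(a, b, river_cost=0.0, maximise=False):
    """Cost of travelling from city a to city b on a unit-square map with a
    'river' running vertically at x = 0.5. river_cost is a percentage of the
    map size, and may be negative (a smuggler's benefit for crossing)."""
    cost = math.dist(a, b)
    crosses = (a[0] - 0.5) * (b[0] - 0.5) < 0    # endpoints on opposite banks
    if crosses:
        cost += river_cost / 100.0
    return -cost if maximise else cost           # maximising = minimising the negative

def tour_cost(cities, order, **kwargs):
    """Total cost of the closed tour; replaces tour_length in the annealer."""
    n = len(order)
    return sum(
        leg_cost(cities[order[i]], cities[order[(i + 1) % n]], **kwargs)
        for i in range(n)
    )
```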
Other Optimisation Techniques
Here we’ve explored one technique of combinatorial optimisation: simulated annealing. This is only one of the many approaches which have been taken to problems of this kind. These include exact approaches such as branch and bound, linear programming, and cutting-plane algorithms.
Press, William H., Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. Numerical Recipes in C, 2nd ed. Cambridge: Cambridge University Press, 1992. ISBN 978-0-521-43108-8. Section 10.9, pp. 444–451. (A third edition of this book with algorithms in C++ is available.)
As an official geezer, oldster, and curmudgeon, I can take delight in the role prescribed for me. I can denigrate new things as “no big deal” and “Who needs that?” with a withering and wizened stare, the awesomeness of my experience backing up my delight in happy balloon puncturing. (Not available for parties.)
However, I am also a gadget monger, a gear aficionado, and a person who enjoys having the needed tool in the truck, in a pocket, or down in the basement cabinet.
It is great fun to weigh whether I should have a 48-volt combination chainsaw/winch, just in case I break down in the wilderness during a zombie apocalypse with an inbound tsunami. Such decision models are entertainment of a high order.
So last weekend, I finally got the time to try out a Christmas gift of Oculus VR goggles for the Samsung Galaxy phone. They had been sitting there, next to my post-dinner chair, daring me to open them and get sucked down another time-sucking tech rathole.
The Red-Headed Irish Wisecracker demanded it, since NBC was advertising their Olympics VR app and she wanted to see figure skating with the new tech.
So I opened the box and proceeded to read the manual, an old habit but one which has stood me in good stead for decades.
I will say, setting aside the curmudgeon pose, that it was fun. NBC delivered crap, but the other stuff was fun. I was particularly fond of the offering by a guy who para-skied with a 360° cam on his rig.
It felt like playing with a reel-to-reel tape recorder when they were the high-tech fashion, or playing Pong at a table in a bar, or watching the content explosion when CD-ROM drives were suddenly available on most PCs. The tech is still early, but it has immense promise.
Oh well, another tech entertainment rathole to spend time on.
So, fellow Ratburgers, what is your impression of the VR offerings to date? Don’t be shy, just scribble in the comment section below.
My Grandson is 14, and as such is hitting the curve where he is too cool to want the company of old folks. I remember, even though it was several geologic ages ago.
He is a good kid who is currently in love with video games, alternating between the PS4 and the Windows machine I put together for him: i7 CPU, Nvidia graphics, mechanical keyboard, and curved screen.
I observed a level of kid culture I knew about intellectually, but which has to be experienced to be real. He was trying out the Windows machine at my place on his birthday and asked, very politely, if I had a headset he could use. It was an option I had not added, so we trekked over to Fry’s Electronics to get him his own, as I was not willing to part with my Sennheiser.
Once we returned, I watched him transform from a quiet teen into Mr. Personality as he went online with his teammates, mostly scattered around his home, 100 miles south of us.
He was commanding, quick-witted, and ran his platoon like a cross between Sergeant Rock and Johnny Carson.
It was amazing to me, since his social environment is very different from the one I knew, where we left our houses, gathered at a meeting point, and wandered the town to various places of interest, joking and insulting each other with improving skill.
I sat back and wondered at what I had seen and determined that we need to blow up the entire model of schooling and education. We are boring the living crap out of our kids.
We have technologies which are absorbing and which build social groups around immersive game play.
He learned more about a piece of WWII history that afternoon than I expect his public school will impart during his entire stay. He was fascinated when I told him his great-great-uncle actually fought on the battlefield he described.
(I still believe reimagining education will be the largest business opportunity this century, making it the transcontinental railroad of the 21st century. All the pieces are there.)
To finish the story, Grandson Number One and I ended his visit with a three-game chess tournament, using real pieces on a real board, on Sunday morning before he left with his mom.
At age 24, I found myself attending the Faculté de Médecine of the University of Lausanne, Switzerland. Incidental to the storyline is the fact that, as a child of the ’60s, my academic performance was erratic: all A’s one semester, half C’s the next. This all preceded the great grade inflation in the academy, so grades at the time reflected actual performance. I was thus not accepted to the US medical schools to which I applied. It may also have had something to do with the fact that I was suspended from school in my (first) junior year, because my roommate and I built a bomb and blew up a tree as a prank – which turned out to be literally earthshaking and attracted unwanted attention from the authorities. We were arrested, pled guilty to malicious mischief, made financial restitution, and served one year’s probation. I was pleasantly surprised, in retrospect and in light of subsequent pyrotechnic events of various stripes, to have never had a visit from the FBI or ATF.
[digression mode – ‘off’] It was in beautiful Lausanne that I was first exposed to the majesty of real mountains. The guilt I felt over my father’s paying my way led me to make few trips into the mountains, so I initially enjoyed them only from afar. It soon became apparent to me that I nearly always had a visual reference to the distant mountains (13 miles directly across Lake Geneva lies Évian, France, and just behind it and to the east the Alps rise a mile vertically above the surface of the lake). Other mountains farther east were visible as well, on clear days.
In addition to the esthetic sense the mountains gave me, they also functioned as a primordial sort of GPS (or Lausanne Positioning System). By a kind of intuitive triangulation, I discovered I had a sense of my location in and around the city and, for that matter, throughout the “Suisse Romande” (in airplane cockpits, where situational awareness is sacred – so as to avoid the necessity of receiving other, more final sacraments – one has an HSI – a Horizontal Situation Indicator). Beyond its practical benefits, this phenomenon gave me a deep, almost mystical sense of comfort and reassurance. I have pondered the meaning of this, off and on, for many years (I am a self-confessed “meanings” junkie).
I suspect the comfort and reassurance I felt comes from a deep place in the human pedigree. Situational awareness, as we now call it, undoubtedly had great survival value (as did recognizing friend from foe – but that is another post on why humans have racist tendencies by virtue of their default wiring). Now, my life surely did not depend on knowing precisely where I was, yet I had (still have) an abiding sense this is rooted in something deep and pervasive in the human psyche.
I think this need for “situational awareness” is multi-dimensional, beyond the merely physical. I believe it is generalizable beyond awareness of geographic position to all reflective human thinking, to all consciousness of self. It partakes of what seems to me to be an equally deep need to understand absolutely everything about our surroundings – macro, micro, north, east, west, south, up, down, past, present and future. In myself, I am driven to understand everything I can. As better evidence of the existence and power of this deep thirst for broad knowledge and mastery, I cite various individuals throughout history, now called “renaissance (wo)men”; our own John Walker is a living example, known to us by his generous sharing of knowledge. The mastery of our surroundings – the entire human habitat and even places formerly uninhabitable – witnessed particularly since the industrial revolution, is nothing short of breathtaking. The brevity of the period in which this has occurred seems miraculous, viewed on the sidereal or even merely the geologic time scale. This feat reflects what can only be the fruition of the most fundamental motivations rooted in our being as a species.
Some undoubtedly think of this drive, this search for infinite knowledge, as the quest for God, or believe it represents the spark of God within humanity. Maybe so. More recent philosophers suggest humanity verges upon God-like powers and thus will, effectively, become Gods. This assertion immediately brings the First Commandment to my mind, despite my not practicing a religion. It gives me pause.
I wonder if Ratburghers (sic) share any of these impressions or thoughts, particularly about my initial fascination with geographic reference to distant things like mountains. That initial impression has led to my presently spending a good bit of time on Google Earth and, in the past few months, Google Earth VR, as seen through an Oculus Rift virtual reality headset. It is, for want of better words, really cool “flying” anywhere at any height, hovering, zooming in or out. As well, I have downloaded a good bit of DLC (downloadable content) scenery for my two flight simulators (one using VR, the other not). I just can’t seem to get enough of seeing the Earth from above and witnessing the relationships of all things in my ken.
The Earth formed from the protoplanetary disc surrounding the young Sun around 4.6 billion years ago. Around one hundred million years later, the nascent planet, beginning to solidify, was clobbered by a giant impactor which ejected the mass that made the Moon. This impact completely re-liquefied the Earth and Moon. Around 4.4 billion years ago, liquid water appeared on the Earth’s surface (evidence for this comes from Hadean zircons which date from this era). And, some time thereafter, just about as soon as the Earth became environmentally hospitable to life (lack of disruption due to bombardment by comets and asteroids, and a temperature range in which the chemical reactions of life can proceed), life appeared. In speaking of the origin of life, the evidence is subtle and it’s hard to be precise. There is completely unambiguous evidence of life on Earth 3.8 billion years ago, and more subtle clues that life may have existed as early as 4.28 billion years before the present. In any case, the Earth has been home to life for most of its existence as a planet.
This was what the author calls “Life 1.0”. Initially composed of single-celled organisms (which, nonetheless, dwarf in complexity of internal structure and chemistry anything produced by other natural processes or human technology to this day), life slowly diversified and organised into colonies of identical cells, evidence for which can be seen in rocks today.
About half a billion years ago, taking advantage of the far more efficient metabolism permitted by the oxygen-rich atmosphere produced by the simple organisms which preceded them, complex multi-cellular creatures sprang into existence in the “Cambrian explosion”. These critters manifested all the body forms found today, and every living being traces its lineage back to them. But they were still Life 1.0.
What is Life 1.0? Its key characteristics are that it can metabolise and reproduce, but that it can learn only through evolution. Life 1.0, from bacteria through insects, exhibits behaviour which can be quite complex, but that behaviour can be altered only by the random variation of mutations in the genetic code and natural selection of those variants which survive best in their environment. This process is necessarily slow, but given the vast expanses of geological time, has sufficed to produce myriad species, all exquisitely adapted to their ecological niches.
To put this in present-day computer jargon, Life 1.0 is “hard-wired”: its hardware (body plan and metabolic pathways) and software (behaviour in response to stimuli) are completely determined by its genetic code, and can be altered only through the process of evolution. Nothing an organism experiences or does can change its genetic programming: the programming of its descendants depends solely upon its success or lack thereof in producing viable offspring and the luck of mutation and recombination in altering the genome they inherit.
Much more recently, Life 2.0 developed. When? If you want to set a bunch of paleontologists squabbling, simply ask them when learned behaviour first appeared, but some time between the appearance of the first mammals and the ancestors of humans, beings developed the ability to learn from experience and alter their behaviour accordingly. Although some would argue simpler creatures (particularly birds) may do this, the fundamental hardware which seems to enable learning is the neocortex, which only mammalian brains possess. Modern humans are the quintessential exemplars of Life 2.0; they not only learn from experience, they’ve figured out how to pass what they’ve learned to other humans via speech, writing, and more recently, YouTube comments.
While Life 1.0 has hard-wired hardware and software, Life 2.0 is able to alter its own software. This is done by training the brain to respond in novel ways to stimuli. For example, you’re born knowing no human language. In childhood, your brain automatically acquires the language(s) you hear from those around you. In adulthood you may, for example, choose to learn a new language by (tediously) training your brain to understand, speak, read, and write that language. You have deliberately altered your own software by reprogramming your brain, just as you can cause your mobile phone to behave in new ways by downloading a new application. But your ability to change yourself is limited to software. You have to work with the neurons and structure of your brain. You might wish to have more or better memory, the ability to see more colours (as some insects do), or run a sprint as fast as the current Olympic champion, but there is nothing you can do to alter those biological (hardware) constraints other than hope, over many generations, that your descendants might evolve those capabilities. Life 2.0 can design (within limits) its software, but not its hardware.
The emergence of a new major revision of life is a big thing. In 4.5 billion years, it has only happened twice, and each time it has remade the Earth. Many technologists believe that some time in the next century (and possibly within the lives of many reading this review) we may see the emergence of Life 3.0. Life 3.0, or Artificial General Intelligence (AGI), is machine intelligence, on whatever technological substrate, which can perform, as well as or better than human beings, all of the intellectual tasks which they can do. A Life 3.0 AGI will be better at driving cars, doing scientific research, composing and performing music, painting pictures, writing fiction, persuading humans and other AGIs to adopt its opinions, and every other task including, most importantly, designing and building ever more capable AGIs. Life 1.0 was hard-wired; Life 2.0 could alter its software, but not its hardware; Life 3.0 can alter both its software and its hardware. This may set off an “intelligence explosion” of recursive improvement, since each successive generation of AGIs will be even better at designing more capable successors, and this cycle of refinement will not be limited to the glacial timescale of random evolutionary change, but rather will run as an engineering cycle at electronic speed. Once the AGI train pulls out of the station, it may develop from the level of human intelligence to something as far beyond human cognition as we are beyond ants, all within one human sleep cycle. Here is a summary of Life 1.0, 2.0, and 3.0.
The emergence of Life 3.0 is something about which we, exemplars of Life 2.0, should be concerned. After all, when we build a skyscraper or hydroelectric dam, we don’t worry about, or rarely even consider, the multitude of Life 1.0 organisms, from bacteria through ants, which may perish as the result of our actions. Might mature Life 3.0, our descendants just as much as we are descended from Life 1.0, be similarly oblivious to our fate and concerns as it unfolds its incomprehensible plans? As artificial intelligence researcher Eliezer Yudkowsky puts it, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Or, as Max Tegmark observes here, “[t]he real worry isn’t malevolence, but competence”. It’s unlikely a super-intelligent AGI would care enough about humans to actively exterminate them, but if its goals don’t align with those of humans, it may incidentally wipe them out as it, for example, disassembles the Earth to use its core for other purposes.
But isn’t this all just science fiction—scary fairy tales by nerds ungrounded in reality? Well, maybe. What is beyond dispute is that for the last century the computing power available at constant cost has doubled about every two years, and this trend shows no evidence of abating in the near future. Well, that’s interesting, because depending upon how you estimate the computational capacity of the human brain (a contentious question), most researchers expect digital computers to achieve that capacity within this century, with most estimates falling within the years from 2030 to 2070, assuming the exponential growth in computing power continues (and there is no physical law which appears to prevent it from doing so).
My own view of the development of machine intelligence is that of the author in this “intelligence landscape”.
Altitude on the map represents the difficulty of a cognitive task. Some tasks, for example management, may be relatively simple in and of themselves, but founded on prerequisites which are difficult. When I wrote my first computer program half a century ago, this map was almost entirely dry, with the water just beginning to lap into rote memorisation and arithmetic. Now many of the lowlands which people confidently said (often not long ago), “a computer will never…”, are submerged, and the ever-rising waters are reaching the foothills of cognitive tasks which employ many “knowledge workers” who considered themselves safe from the peril of “automation”. On the slope of Mount Science is the base camp of AI Design, which is shown in red since when the water surges into it, it’s game over: machines will now be better than humans at improving themselves and designing their more intelligent and capable successors. Will this be game over for humans and, for that matter, biological life on Earth? That depends, and it depends upon decisions we may be making today.
Assuming we can create these super-intelligent machines, what will be their goals, and how can we ensure that our machines embody them? Will the machines discard our goals for their own as they become more intelligent and capable? How would bacteria have solved this problem contemplating their distant human descendants?
First of all, let’s assume we can somehow design our future and constrain the AGIs to implement it. What kind of future will we choose? That’s complicated. Here are the alternatives discussed by the author. I’ve deliberately given just the titles without summaries to stimulate your imagination about their consequences.
Choose wisely: whichever you choose may be the one your descendants (if any exist) may be stuck with for eternity. Interestingly, when these alternatives are discussed in chapter 5, none appears to be without serious downsides, and that’s assuming we’ll have the power to guide our future toward one of these outcomes. Or maybe we should just hope the AGIs come up with something better than we could think of. Hey, it worked for the bacteria and ants, both of which are prospering despite the occasional setback due to medical interventions or kids with magnifying glasses.
Let’s assume progress toward AGI continues over the next few decades. I believe that what I’ve been calling the “Roaring Twenties” will be a phase transition in the structure of human societies and economies. Continued exponential growth in computing power will, without any fundamental breakthroughs in our understanding of problems and how to solve them, allow us to “brute force” previously intractable problems such as driving and flying in unprepared environments, understanding and speaking natural languages, language translation, much of general practice medical diagnosis and routine legal work, interaction with customers in retail environments, and many jobs in service industries, allowing them to be automated. The cost to replace a human worker will be comparable to a year’s wages, and the automated replacement will work around the clock with only routine maintenance and never vote for a union.
This is nothing new: automation has been replacing manual labour since the 1950s, but as the intelligence landscape continues to flood, it will submerge not just blue collar jobs, which have already been replaced by robots in automobile plants and electronics assembly lines, but white collar clerical and professional jobs whose holders went into them thinking they were immune from automation. How will the economy cope with this? In societies with consensual government, those displaced vote; the computers who replace them don’t (at least for the moment). Will there be a “robot tax” which funds a basic income for those made redundant? What are the consequences for a society where a majority of people have no job? Will voters at some point say “enough” and put an end to development of artificial intelligence (but note that this would have to be global and enforced by an intrusive and draconian regime; otherwise it would confer a huge first mover advantage on an actor who achieved AGI in a covert program)?
The following chart is presented to illustrate stagnation of income of lower-income households since around 1970.
I’m not sure this chart supports the argument that technology has been the principal cause of the stagnation of income among the bottom 90% of households since around 1970. There was no major technological innovation affecting employment around that time: widespread use of microprocessors and personal computers did not happen until the 1980s, when the flattening of the trend was already well underway. However, two public policy innovations in the United States which occurred in the years immediately before 1970 (1, 2) come to mind. You don’t have to be an MIT cosmologist to figure out how they torpedoed the rising trend of prosperity for those aspiring to better themselves which had characterised the U.S. since 1940.
Nonetheless, what is coming down the track is something far more disruptive than the transition from an agricultural society to industrial production, and it may happen far more rapidly, allowing less time to adapt. We need to really get this right, because everything depends on it.
Observation and our understanding of the chemistry underlying the origin of life is compatible with Earth being the only host to life in our galaxy and, possibly, the visible universe. We have no idea whatsoever how our form of life emerged from non-living matter, and it’s entirely possible it may have been an event so improbable we’ll never understand it and which occurred only once. If this be the case, then what we do in the next few decades matters even more, because everything depends upon us, and what we choose. Will the universe remain dead, or will life burst forth from this most improbable seed to carry the spark born here to ignite life and intelligence throughout the universe? It could go either way. If we do nothing, life on Earth will surely be extinguished: the death of the Sun is certain, and long before that the Earth will be uninhabitable. We may be wiped out by an asteroid or comet strike, by a dictator with his fat finger on a button, or by accident (as Nathaniel Borenstein said, “The most likely way for the world to be destroyed, most experts agree, is by accident. That’s where we come in; we’re computer professionals. We cause accidents.”).
But if we survive these near-term risks, the future is essentially unbounded. Life will spread outward from this spark on Earth, from star to star, galaxy to galaxy, and eventually bring all the visible universe to life. It will be an explosion which dwarfs both its predecessors, the Cambrian and technological. Those who create it will not be like us, but they will be our descendants, and what they achieve will be our destiny. Perhaps they will remember us, and think kindly of those who imagined such things while confined to one little world. It doesn’t matter; like the bacteria and ants, we will have done our part.
The author is co-founder of the Future of Life Institute which promotes and funds research into artificial intelligence safeguards. He guided the development of the Asilomar AI Principles, which have been endorsed to date by 1273 artificial intelligence and robotics researchers. In the last few years, discussion of the advent of AGI and the existential risks it may pose and potential ways to mitigate them has moved from a fringe topic into the mainstream of those engaged in developing the technologies moving toward that goal. This book is an excellent introduction to the risks and benefits of this possible future for a general audience, and encourages readers to ask themselves the difficult questions about what future they want and how to get there.
In the Kindle edition, everything is properly linked. Citations of documents on the Web are live links which may be clicked to display them. There is no index.
Tegmark, Max. Life 3.0. New York: Alfred A. Knopf, 2017. ISBN 978-1-101-94659-6.
This is a one hour talk by Max Tegmark at Google in December 2017 about the book and the issues discussed in it.
Watch the Google DeepMind artificial intelligence learn to play and beat Atari Breakout knowing nothing about the game other than observing the pixels on the screen and the score.
In this July 2017 video, DeepMind develops legged locomotion strategies by training in rich environments. Its only reward was forward progress, and nothing about physics or the mechanics of locomotion was pre-programmed: it learned to walk just like a human toddler.
We seem to be approaching a point of change in the ongoing struggle between the normals and the elites.
As elite control of the news media slips away, and the entertainment monolith fragments into thousands of options, where Fifty Shades of Grey coexists with Duck Dynasty, the desperate elites are moving to social media as nuclear-powered peer pressure.
Just as the Second Amendment stands in their way, the First Amendment is truly their enemy.
The millennials and their successors are the focus of efforts to force conformity of thinking through shunning, “hate speech” designations, and the crushing of alternate viewpoints. Concepts like “toxic masculinity” are really attacks on individuality, as are similar attacks on “pro-life” views as religious extremism.
Nuclear-powered ridicule is our best weapon. This will not be a fair fight, or even one with rules. The role of the old folks in this war is to show the younger folks, afflicted by the social media groupthink onslaught, what free speech and thinking for yourself can do.
You may have heard about the discovery of a major security hole affecting most recent Intel microprocessors, which allows a process running on a computer to mount a side-channel attack to read privileged information belonging to the operating system or other processes on the same machine.
Here is the technical paper describing the exploit. Those who discovered it reported it to the major CPU manufacturers (Intel, AMD, and ARM) on 2017-06-01, but it was kept secret to allow time for mitigations to be put into place.
This is one of the most serious hardware problems to have been discovered in mass-produced microprocessors since the notorious Intel Pentium floating point divide bug of 1994. The bug is difficult to exploit, but it defeats the security of systems running on these processors at a fundamental level and is costly to mitigate in software, with a performance hit of up to 30% for programs which make a large number of system calls.
The bug is due to an interaction between memory protection, the processor’s cache, and a performance tweak called “speculative execution”, in which the CPU, rather than waiting to evaluate the condition upon which a branch depends, guesses which way the branch will go and executes ahead along that path, discarding the results if the guess proves wrong. Unless you’re deeply marinated in CPU architecture, all of this may sound like gibberish, but if you’re using a computer with an Intel microprocessor, it affects you.
Fortunately, Master Explainer Scott Manley has recorded this excellent twelve-minute video which provides a gentle introduction to the bug and its consequences.