TOTD 2018-03-30: The Moon, Up Close and Personal

Since 1994, Fourmilab’s Earth and Moon Viewer has provided custom views of the Earth and Moon from a variety of viewpoints, using imagery databases which have evolved over the years from primitive images to gigabyte-scale mosaics collected by spacecraft.  Views were originally restricted to the Earth, but fifteen years ago, in April 2003, the ability to view the Moon was added, using the global imagery collected by the Clementine orbiter.  These data were wonderful for the time, providing full-globe topography and albedo databases with a resolution of 1440×720 pixels.  This allowed viewing the Moon as a whole or modest zooms into localities, but when you zoomed in close the results were…disappointing.  Here is the crater Copernicus viewed from an altitude of 10 km using the Clementine data.

Moon, Copernicus crater, 10 km altitude, Clementine data

It looks kind of like a crater, but it leaves you wanting more.

That was then, and this is now.  In 2009, the Lunar Reconnaissance Orbiter (LRO) was launched into a near-polar orbit around the Moon.  From this orbit, it is able to photograph the entire lunar surface from an altitude as low as 20 km, with very high resolution.  This has enabled the assembly of a global mosaic image with a resolution of 100 metres per pixel (total image size 109164×54582 pixels, or about 5.6 gigabytes of 256-level grey scale pixels).  This image database is now available in Earth and Moon Viewer.  Here is the same view of Copernicus using the LRO imagery.
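
Those figures are self-consistent, as a quick back-of-the-envelope check shows (a minimal sketch of my own; the 1737.4 km mean lunar radius is a standard value, not something taken from the mosaic’s documentation):

```python
import math

MOON_RADIUS_KM = 1737.4   # mean lunar radius: a standard value, assumed here

width_px, height_px = 109164, 54582   # mosaic dimensions from the text

# Equatorial circumference divided by mosaic width gives metres per pixel.
circumference_m = 2 * math.pi * MOON_RADIUS_KM * 1000
print(f"{circumference_m / width_px:.1f} m/pixel")   # ≈ 100.0

# One byte per 256-level grey-scale pixel gives the total image size.
print(f"{width_px * height_px / 2**30:.2f} GiB")     # ≈ 5.55, "about 5.6 gigabytes"
```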

Copernicus crater, 10 km altitude, Lunar Reconnaissance Orbiter imagery

Bit of a difference, don’t you think?  But it doesn’t stop there.  Let’s swoop down to 1 km above the surface and look at the central peaks.

Note the small craters and boulder fields which are completely invisible with even the best Earth-based telescopes.

Thanks to LRO, you can now explore the Moon, seeing views that only astronauts who orbited, flew by, or landed there have ever seen with their own eyes.  And the entire Moon is yours to explore, including all of the far side and the poles, where the Apollo missions never ventured.

The Clementine and LRO imagery were collected fifteen years apart.  The technology which has enabled this improvement continues to grow exponentially.  The Roaring Twenties are going to be interesting.


Saturday Night Science: Life 3.0

“Life 3.0” by Max Tegmark

The Earth formed from the protoplanetary disc surrounding the young Sun around 4.6 billion years ago. Around one hundred million years later, the nascent planet, beginning to solidify, was clobbered by a giant impactor which ejected the mass that made the Moon. This impact completely re-liquefied the Earth and Moon. Around 4.4 billion years ago, liquid water appeared on the Earth’s surface (evidence for this comes from Hadean zircons which date from this era). And, some time thereafter, just about as soon as the Earth became environmentally hospitable to life (lack of disruption due to bombardment by comets and asteroids, and a temperature range in which the chemical reactions of life can proceed), life appeared. In speaking of the origin of life, the evidence is subtle and it’s hard to be precise. There is completely unambiguous evidence of life on Earth 3.8 billion years ago, and more subtle clues that life may have existed as early as 4.28 billion years before the present. In any case, the Earth has been home to life for most of its existence as a planet.

This was what the author calls “Life 1.0”. Initially composed of single-celled organisms (which, nonetheless, dwarf in complexity of internal structure and chemistry anything produced by other natural processes or human technology to this day), life slowly diversified and organised into colonies of identical cells, evidence for which can be seen in rocks today.

About half a billion years ago, taking advantage of the far more efficient metabolism permitted by the oxygen-rich atmosphere produced by the simple organisms which preceded them, complex multi-cellular creatures sprang into existence in the “Cambrian explosion”. These critters manifested all the body forms found today, and every living being traces its lineage back to them. But they were still Life 1.0.

What is Life 1.0? Its key characteristics are that it can metabolise and reproduce, but that it can learn only through evolution. Life 1.0, from bacteria through insects, exhibits behaviour which can be quite complex, but that behaviour can be altered only by the random variation of mutations in the genetic code and natural selection of those variants which survive best in their environment. This process is necessarily slow, but given the vast expanses of geological time, has sufficed to produce myriad species, all exquisitely adapted to their ecological niches.

To put this in present-day computer jargon, Life 1.0 is “hard-wired”: its hardware (body plan and metabolic pathways) and software (behaviour in response to stimuli) are completely determined by its genetic code, and can be altered only through the process of evolution. Nothing an organism experiences or does can change its genetic programming: the programming of its descendants depends solely upon its success or lack thereof in producing viable offspring and the luck of mutation and recombination in altering the genome they inherit.

Much more recently, Life 2.0 developed. When? If you want to set a bunch of paleontologists squabbling, simply ask them when learned behaviour first appeared. But some time between the appearance of the first mammals and the ancestors of humans, beings developed the ability to learn from experience and alter their behaviour accordingly. Although some would argue simpler creatures (particularly birds) may do this, the fundamental hardware which seems to enable learning is the neocortex, which only mammalian brains possess. Modern humans are the quintessential exemplars of Life 2.0; they not only learn from experience, they’ve figured out how to pass what they’ve learned to other humans via speech, writing, and more recently, YouTube comments.

While Life 1.0 has hard-wired hardware and software, Life 2.0 is able to alter its own software. This is done by training the brain to respond in novel ways to stimuli. For example, you’re born knowing no human language. In childhood, your brain automatically acquires the language(s) you hear from those around you. In adulthood you may, for example, choose to learn a new language by (tediously) training your brain to understand, speak, read, and write that language. You have deliberately altered your own software by reprogramming your brain, just as you can cause your mobile phone to behave in new ways by downloading a new application. But your ability to change yourself is limited to software. You have to work with the neurons and structure of your brain. You might wish to have more or better memory, the ability to see more colours (as some insects do), or run a sprint as fast as the current Olympic champion, but there is nothing you can do to alter those biological (hardware) constraints other than hope, over many generations, that your descendants might evolve those capabilities. Life 2.0 can design (within limits) its software, but not its hardware.

The emergence of a new major revision of life is a big thing. In 4.5 billion years, it has only happened twice, and each time it has remade the Earth. Many technologists believe that some time in the next century (and possibly within the lives of many reading this review) we may see the emergence of Life 3.0. Life 3.0, or Artificial General Intelligence (AGI), is machine intelligence, on whatever technological substrate, which can perform, as well as or better than human beings, all of the intellectual tasks which they can do. A Life 3.0 AGI will be better at driving cars, doing scientific research, composing and performing music, painting pictures, writing fiction, persuading humans and other AGIs to adopt its opinions, and every other task including, most importantly, designing and building ever more capable AGIs. Life 1.0 was hard-wired; Life 2.0 could alter its software, but not its hardware; Life 3.0 can alter both its software and hardware. This may set off an “intelligence explosion” of recursive improvement, since each successive generation of AGIs will be even better at designing more capable successors, and this cycle of refinement will not be limited to the glacial timescale of random evolutionary change, but rather will be an engineering cycle which runs at electronic speed. Once the AGI train pulls out of the station, it may develop from the level of human intelligence to something as far beyond human cognition as humans are beyond ants, in as little as one human sleep cycle. Here is a summary of Life 1.0, 2.0, and 3.0.

Life 1.0, 2.0, 3.0
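
To make the “intelligence explosion” argument concrete, here is a toy numerical sketch (entirely my own illustration, not a model from the book): if designers improve their successors by a constant fraction per cycle, capability grows exponentially; if, instead, the size of each improvement step scales with the designer’s own capability, growth is super-exponential and runs away within a handful of cycles.

```python
# Toy contrast between externally driven improvement and recursive
# self-improvement (an illustrative sketch, not a model from the book).

def external_improvement(c, rate=0.5, cycles=20):
    """Human engineers improve the system by a constant fraction each cycle."""
    for _ in range(cycles):
        c *= 1 + rate
    return c

def self_improvement(c, k=0.5, cycles=5):
    """Each generation's improvement step is proportional to its own capability."""
    history = [c]
    for _ in range(cycles):
        c += k * c * c   # smarter designers build disproportionately better successors
        history.append(c)
    return history

print(f"{external_improvement(1.0):.0f}")  # ≈ 3325 after 20 cycles: fast, but orderly
print(self_improvement(1.0))               # ≈ [1.0, 1.5, 2.63, 6.07, 24.5, 324.5]: runaway
```

The point is not the particular constants, which are arbitrary, but the shape of the curve: once improvement feeds back into the capacity to improve, the timescale collapses from evolutionary to electronic.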

The emergence of Life 3.0 is something about which we, exemplars of Life 2.0, should be concerned. After all, when we build a skyscraper or hydroelectric dam, we don’t worry about, or rarely even consider, the multitude of Life 1.0 organisms, from bacteria through ants, which may perish as the result of our actions. Might mature Life 3.0, our descendant just as surely as we are descended from Life 1.0, be similarly oblivious to our fate and concerns as it unfolds its incomprehensible plans? As artificial intelligence researcher Eliezer Yudkowsky puts it, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Or, as Max Tegmark observes here, “[t]he real worry isn’t malevolence, but competence”. It’s unlikely a super-intelligent AGI would care enough about humans to actively exterminate them, but if its goals don’t align with those of humans, it may incidentally wipe them out as it, for example, disassembles the Earth to use its core for other purposes.

But isn’t this all just science fiction—scary fairy tales by nerds ungrounded in reality? Well, maybe. What is beyond dispute is that for the last century the computing power available at constant cost has doubled about every two years, and this trend shows no evidence of abating in the near future. Well, that’s interesting, because depending upon how you estimate the computational capacity of the human brain (a contentious question), most researchers expect digital computers to achieve that capacity within this century, with most estimates falling within the years from 2030 to 2070, assuming the exponential growth in computing power continues (and there is no physical law which appears to prevent it from doing so).
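
To show how such projections are computed, here is a minimal sketch; every number in it is an illustrative assumption (a Kurzweil-style baseline of 10¹³ operations per second per constant-dollar budget in 2018, brain estimates spanning 10¹⁶ to 10¹⁹ operations per second, and the two-year doubling time mentioned above), not data from the book.

```python
import math

DOUBLING_YEARS = 2    # doubling time assumed in the text
CURRENT_OPS = 1e13    # ops/sec per constant-dollar budget today (assumption)
CURRENT_YEAR = 2018

# Estimates of the brain's computational capacity span orders of magnitude
# (a contentious question, as noted above); these bounds are illustrative.
for brain_ops in (1e16, 1e17, 1e18, 1e19):
    doublings = math.log2(brain_ops / CURRENT_OPS)
    year = CURRENT_YEAR + doublings * DOUBLING_YEARS
    print(f"brain ≈ {brain_ops:.0e} ops/sec → parity around {year:.0f}")

# Output runs from about 2038 to 2058, inside the 2030–2070 window cited
# above; shifting the assumptions moves the date only linearly, since each
# factor of ten costs just ~6.6 years at a two-year doubling time.
```

Note how insensitive the conclusion is to the contentious brain estimate: a thousandfold disagreement about the brain’s capacity changes the parity date by only about twenty years.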

My own view of the development of machine intelligence is that of the author in this “intelligence landscape”.

The landscape of intelligence

Altitude on the map represents the difficulty of a cognitive task. Some tasks, for example management, may be relatively simple in and of themselves, but founded on prerequisites which are difficult. When I wrote my first computer program half a century ago, this map was almost entirely dry, with the water just beginning to lap into rote memorisation and arithmetic. Now many of the lowlands of which people confidently said (often not long ago) “a computer will never…” are submerged, and the ever-rising waters are reaching the foothills of cognitive tasks which employ many “knowledge workers” who considered themselves safe from the peril of “automation”. On the slope of Mount Science is the base camp of AI Design, which is shown in red because when the water surges into it, it’s game over: machines will then be better than humans at improving themselves and designing their more intelligent and capable successors. Will this be game over for humans and, for that matter, biological life on Earth? That depends, and it depends upon decisions we may be making today.

Assuming we can create these super-intelligent machines, what will be their goals, and how can we ensure that our machines embody them? Will the machines discard our goals for their own as they become more intelligent and capable? How would bacteria have solved this problem contemplating their distant human descendants?

First of all, let’s assume we can somehow design our future and constrain the AGIs to implement it. What kind of future will we choose? That’s complicated. Here are the alternatives discussed by the author. I’ve deliberately given just the titles without summaries to stimulate your imagination about their consequences.

  • Libertarian utopia
  • Benevolent dictator
  • Egalitarian utopia
  • Gatekeeper
  • Protector god
  • Enslaved god
  • Conquerors
  • Descendants
  • Zookeeper
  • 1984
  • Reversion
  • Self-destruction

Choose wisely: whichever you choose may be the one your descendants (if any exist) are stuck with for eternity. Interestingly, when these alternatives are discussed in chapter 5, none appears to be without serious downsides, and that’s assuming we’ll have the power to guide our future toward one of these outcomes. Or maybe we should just hope the AGIs come up with something better than we could think of. Hey, it worked for the bacteria and ants, both of which are prospering despite the occasional setback due to medical interventions or kids with magnifying glasses.

Let’s assume progress toward AGI continues over the next few decades. I believe that what I’ve been calling the “Roaring Twenties” will be a phase transition in the structure of human societies and economies. Continued exponential growth in computing power will, without any fundamental breakthroughs in our understanding of problems and how to solve them, allow us to “brute force” previously intractable problems such as driving and flying in unprepared environments, understanding and speaking natural languages, language translation, much of general practice medical diagnosis and routine legal work, interaction with customers in retail environments, and many tasks in service industries, allowing those jobs to be automated. The cost to replace a human worker will be comparable to a year’s wages, and the automated replacement will work around the clock with only routine maintenance and never vote for a union.

This is nothing new: automation has been replacing manual labour since the 1950s, but as the intelligence landscape continues to flood, it will claim not just blue collar jobs, which have already been replaced by robots in automobile plants and electronics assembly lines, but white collar clerical and professional jobs whose holders went into them believing they were immune from automation. How will the economy cope with this? In societies with consensual government, those displaced vote; the computers which replace them don’t (at least for the moment). Will there be a “robot tax” which funds a basic income for those made redundant? What are the consequences for a society where a majority of people have no job? Will voters at some point say “enough” and put an end to development of artificial intelligence (but note that this would have to be global and enforced by an intrusive and draconian regime; otherwise it would confer a huge first mover advantage on an actor who achieved AGI in a covert program)?

The following chart is presented to illustrate the stagnation of income among lower-income households since around 1970.

Income per U.S. household, 1920–2015

I’m not sure this chart supports the argument that technology has been the principal cause of the stagnation of income among the bottom 90% of households since around 1970. No major technological innovation affecting employment occurred around that time: widespread use of microprocessors and personal computers did not happen until the 1980s, when the flattening of the trend was already well underway. However, two public policy innovations in the United States which occurred in the years immediately before 1970 (1, 2) come to mind. You don’t have to be an MIT cosmologist to figure out how they torpedoed the rising trend of prosperity for those aspiring to better themselves which had characterised the U.S. since 1940.

Nonetheless, what is coming down the track is something far more disruptive than the transition from an agricultural society to industrial production, and it may happen far more rapidly, allowing less time to adapt. We need to really get this right, because everything depends on it.

Observation and our understanding of the chemistry underlying the origin of life are compatible with Earth being the only host to life in our galaxy and, possibly, the visible universe. We have no idea whatsoever how our form of life emerged from non-living matter, and it’s entirely possible it was an event so improbable that we’ll never understand it and that it occurred only once. If this be the case, then what we do in the next few decades matters even more, because everything depends upon us and what we choose. Will the universe remain dead, or will life burst forth from this most improbable seed to carry the spark born here to ignite life and intelligence throughout the universe? It could go either way. If we do nothing, life on Earth will surely be extinguished: the death of the Sun is certain, and long before that the Earth will be uninhabitable. We may be wiped out by an asteroid or comet strike, by a dictator with his fat finger on a button, or by accident (as Nathaniel Borenstein said, “The most likely way for the world to be destroyed, most experts agree, is by accident. That’s where we come in; we’re computer professionals. We cause accidents.”).

But if we survive these near-term risks, the future is essentially unbounded. Life will spread outward from this spark on Earth, from star to star, galaxy to galaxy, and eventually bring all the visible universe to life. It will be an explosion which dwarfs both its predecessors, the Cambrian and technological. Those who create it will not be like us, but they will be our descendants, and what they achieve will be our destiny. Perhaps they will remember us, and think kindly of those who imagined such things while confined to one little world. It doesn’t matter; like the bacteria and ants, we will have done our part.

The author is co-founder of the Future of Life Institute, which promotes and funds research into artificial intelligence safeguards. He guided the development of the Asilomar AI Principles, which have been endorsed to date by 1273 artificial intelligence and robotics researchers. In the last few years, discussion of the advent of AGI, the existential risks it may pose, and potential ways to mitigate them has moved from a fringe topic into the mainstream among those engaged in developing the technologies moving toward that goal. This book is an excellent introduction to the risks and benefits of this possible future for a general audience, and encourages readers to ask themselves the difficult questions about what future they want and how to get there.

In the Kindle edition, everything is properly linked. Citations of documents on the Web are live links which may be clicked to display them. There is no index.

Tegmark, Max. Life 3.0. New York: Alfred A. Knopf, 2017. ISBN 978-1-101-94659-6.

This is a one-hour talk by Max Tegmark, given at Google in December 2017, about the book and the issues discussed in it.

Watch the Google DeepMind artificial intelligence learn to play and beat Atari Breakout knowing nothing about the game other than observing the pixels on the screen and the score.
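
The mechanism behind that feat, reinforcement learning, is simple enough to sketch in a few lines. Below is a minimal tabular Q-learning example on a toy problem of my own devising (DeepMind’s Atari agent used a deep convolutional network over raw pixels, far beyond this sketch, but the update rule is of the same family): the agent is told nothing about the world’s rules and sees only a state and a reward, just as the Atari player saw only pixels and score.

```python
import random

# Toy world: positions 0..5 in a line; a reward of 1 waits at the far end.
# The agent knows none of this; it only observes states and rewards.
N_STATES, ACTIONS = 6, (-1, +1)
ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.9, 0.1, 2000

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(EPISODES):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = (random.choice(ACTIONS) if random.random() < EPSILON
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # the "score" is the only feedback
        # Q-learning update: nudge the estimate toward the reward plus the
        # discounted value of the best action available from the next state.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
# → [1, 1, 1, 1, 1]: the learned policy is "always move toward the reward"
```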

In this July 2017 video, DeepMind develops legged locomotion strategies by training in rich environments.  Its only reward was forward progress, and nothing about physics or the mechanics of locomotion was pre-programmed: it learned to walk just like a human toddler.


Book Review: Last Call: The Rise and Fall of Prohibition

“Last Call” by Daniel Okrent

The ratification of the Eighteenth Amendment to the U.S. Constitution in 1919, prohibiting the “manufacture, sale, or transportation of intoxicating liquors”, marked the transition of the U.S. Federal government into a nanny state which occupied itself with the individual behaviour of its citizens. Now, certainly, attempts to legislate morality and regulate individual behaviour were commonplace in North America long before the United States came into being, but these were enacted at the state, county, or municipality level. When the U.S. Constitution was ratified, it exclusively constrained the actions of government, not of individual citizens, and with the sole exception of the Thirteenth Amendment, which abridged the “freedom” to hold people in slavery and involuntary servitude, this remained the case into the twentieth century. While bans on liquor were adopted in various jurisdictions as early as 1840, it simply never occurred to many champions of prohibition that a nationwide ban, written into the federal constitution, was either appropriate or feasible, especially since taxes on alcoholic beverages accounted for as much as forty percent of federal tax revenue in the years prior to the introduction of the income tax, and imposition of total prohibition would zero out the second largest source of federal income after the tariff.

As the Progressive movement gained power, with its ambitions of continental scale government and imposition of uniform standards by a strong, centralised regime, it found itself allied with an improbable coalition including the Woman’s Christian Temperance Union; the Methodist, Baptist and Presbyterian churches; advocates of women’s suffrage; the Anti-Saloon League; Henry Ford; and the Ku Klux Klan. Encouraged by the apparent success of “war socialism” during World War I and empowered by enactment of the income tax via the Sixteenth Amendment, providing another source of revenue to replace that from excise taxes on liquor, these players were motivated in the latter years of the 1910s to impose their agenda upon the entire country in as permanent a way as possible: by a constitutional amendment. Although the supermajorities required were daunting (two thirds in the House and Senate to submit, three quarters of state legislatures to ratify), if a prohibition amendment could be pushed over the bar (if you’ll excuse the term), opponents would face what was considered an insuperable task in reversing it: repeal would itself require an amendment ratified by three quarters of the states, so with 48 states then in the Union, a mere 13 dry states could block it.

Further motivating the push not just for a constitutional amendment, but enacting one as soon as possible, were the rapid demographic changes underway in the U.S. Support for prohibition was primarily rural, in southern and central states, Protestant, and Anglo-Saxon. During the 1910s, population was shifting from farms to urban areas, from the midland toward the coasts, and the immigrant population of Germans, Italians, and Irish who were famously fond of drink was burgeoning. This meant that the electoral landscape following reapportionment after the 1920 census would be far less receptive to the foes of Demon Rum.

One must never underestimate the power of an idea whose time has come, regardless of how stupid and counterproductive it might be. And so it came to pass that the Eighteenth Amendment was ratified by the 36th state, Nebraska, on January 16th, 1919, with nationwide Prohibition to come into effect a year hence. From the outset, it was pretty obvious to many astute observers what was about to happen. An Army artillery captain serving in France wrote to his fiancée in Missouri, “It looks to me like the moonshine business is going to be pretty good in the land of the Liberty Loans and Green Trading Stamps, and some of us want to get in on the ground floor. At least we want to get there in time to lay in a supply for future consumption.” Captain Harry S. Truman ended up pursuing a different (and probably less lucrative) career, but was certainly prescient about the growth industry of the coming decade.

From the very start, Prohibition was a theatre of the absurd. It was implemented by a federal statute, the Volstead Act, and enforcement, especially in states which did not have their own Prohibition laws, was the responsibility of federal agents within the Treasury Department, whose head, Andrew Mellon, was a staunch opponent of Prohibition. Enforcement was always absurdly underfunded compared to the magnitude of the bootlegging industry and its customers (the word “scofflaw” entered the English language to describe the latter). Federal Prohibition officer posts paid little, but were nonetheless highly prized patronage jobs, as their holders could often pocket ten times their salary in bribes to look the other way.

Prohibition unleashed the American talent for ingenuity, entrepreneurship, and the do-it-yourself spirit. While it was illegal to manufacture liquor for sale or to sell it, possession and consumption were perfectly legal, and families were allowed to make up to 200 gallons (which should suffice even for the larger, more thirsty households of the epoch) for their own use. This led to a thriving industry in California shipping grapes eastward for householders to mash into “grape juice” for their own use, being careful, of course, not to allow it to ferment or to sell some of their 200 gallon allowance to the neighbours. Later on, the “Vino Sano Grape Brick” was marketed nationally. Containing dried crushed grapes, complete with the natural yeast on the skins, you just added water, waited a while, and hoisted a glass to American innovation. Brewers, not to be outdone, introduced “malt syrup”, which with the addition of yeast and water, turned into beer in the home brewer’s basement. Grocers stocked everything the thirsty householder needed to brew up case after case of Old Frothingslosh, and brewers remarked upon how profitable it was to outsource fermentation and bottling to the customers.

For those more talented in manipulating the law than fermenting fluids, there were a number of opportunities as well. Sacramental wine was exempted from Prohibition, and wineries which catered to Catholic and Jewish congregations distributing such wines prospered. Indeed, Prohibition enforcers noted they’d never seen so many rabbis before, including some named Patrick Houlihan and James Maguire. Physicians and dentists were entitled to prescribe liquor for medicinal purposes, and the lucrative fees for writing such prescriptions and for pharmacists to fill them rapidly caused hard liquor to enter the materia medica for numerous maladies, far beyond the traditional prescription as snakebite medicine. While many pre-Prohibition bars re-opened as speakeasies, others prospered by replacing “Bar” with “Drug Store” and filling medicinal whiskey prescriptions for the same clientele.

Apart from these dodges, the vast majority of Americans slaked their thirst with bootleg booze, either domestic (and sometimes lethal), or smuggled from Canada or across the ocean. The obscure island of St. Pierre, a French possession off the coast of Canada, became a prosperous entrepôt for reshipment of Canadian liquor legally exported to “France”, then re-embarked on ships headed for “Rum Row”, just outside the territorial limit of the U.S. East Coast. Rail traffic into Windsor, Ontario, just across the Detroit River from the eponymous city, exploded, as boxcar after boxcar unloaded cases of clinking glass bottles onto boats bound for…well, who knows? Naturally, with billions and billions of dollars of tax-free income to be had, it didn’t take long for criminals to stake their claims to it. What was different, and deeply appalling to the moralistic champions of Prohibition, was that a substantial portion of the population who opposed Prohibition did not despise these criminals, but rather respected them as making their “money by supplying a public demand”, in the words of one Alphonse Capone, whose public relations machine kept him in the public eye.

As the absurdity of the almost universal scorn and disobedience of Prohibition grew (at least among the urban chattering classes, which increasingly dominated journalism and politics at the time), opinion turned toward ways to undo its increasingly evident pernicious consequences. Many focussed upon amending the Volstead Act to exempt beer and light wines from the definition of “intoxicating liquors”—this would open a safety valve, and at least allow recovery of the devastated legal winemaking and brewing industries. The difficulty of actually repealing the Eighteenth Amendment deterred many of the most ardent supporters of that goal. As late as September 1930, Senator Morris Sheppard, who drafted the Eighteenth Amendment, said “There is as much chance of repealing the Eighteenth Amendment as there is for a hummingbird to fly to the planet Mars with the Washington Monument tied to its tail.”

But when people have had enough (I mean, of intrusive government, not illicit elixir), it’s amazing what they can motivate a hummingbird to do! Just over three years later, the Twenty-first Amendment, repealing Prohibition, was passed by the Congress, and on December 5th, 1933, it was ratified by the 36th state (appropriately, but astonishingly, Utah), thus putting an end to what had come to be seen not only as a farce, but also as a direct cause of sanguinary lawlessness and scorn for the rule of law. The cause of repeal was greatly aided not only by the thirst of the populace, but also by the thirst of their government for revenue, which had collapsed due to plunging income tax receipts as the Great Depression deepened, along with falling tariff income as international trade contracted. Reinstating liquor excise taxes and collecting corporate income tax from brewers, winemakers, and distillers could help ameliorate the deficits from New Deal spending programs.

In many ways, the adoption and repeal of Prohibition represented a phase transition in the relationship between the federal government and its citizens. In its adoption, they voted, by the most difficult of constitutional standards, to enable direct enforcement of individual behaviour by the national government, complete with its own police force independent of state and local control. But at least they acknowledged that this breathtaking change could only be accomplished by a direct revision of the fundamental law of the republic, and that reversing it would require the same—a constitutional amendment, duly proposed and ratified. In the years that followed, the federal government used its power to tax (many partisans of Repeal expected the Sixteenth Amendment to also be repealed but, alas, this was not to be) to promote and deter all kinds of behaviour through tax incentives and charges, and before long the federal government was simply enacting legislation which directly criminalised individual behaviour without a moment’s thought about its constitutionality, and those who challenged it were soon considered nutcases.

As the United States increasingly comes to resemble a continental scale theatre of the absurd, there may be a lesson to be learnt from the final days of Prohibition. When something is unsustainable, it won’t be sustained. It’s almost impossible to predict when the breaking point will come—recall the hummingbird with the Washington Monument in tow—but when things snap, it doesn’t take long for the unimaginable new to supplant the supposedly secure status quo. Think about this when you contemplate issues such as immigration, the Euro, welfare state spending, bailouts of failed financial institutions and governments, and the multitude of big and little prohibitions and intrusions into personal liberty of the pervasive nanny state—and root for the hummingbird.

In the Kindle edition, all of the photographic illustrations are collected at the very end of the book, after the index—don’t overlook them.

Okrent, Daniel. Last Call: The Rise and Fall of Prohibition. New York: Scribner, 2010. ISBN 978-0-7432-7704-4.

A Ken Burns three-part documentary series based upon this book was produced by the Public Broadcasting Service in the U.S.  It is available on DVD from Amazon, and on streaming services such as iTunes, Netflix, and Amazon video (streaming availability depends upon your location—content varies from country to country).  For the moment, until somebody discovers and takes it down, the first two episodes may be viewed on YouTube.
