This book is volume four in the author’s Incerto series, following Fooled by Randomness, The Black Swan, and Antifragile. In it, he continues to explore the topics of uncertainty, risk, decision making under such circumstances, and how both individuals and societies winnow out what works from what doesn’t in order to choose wisely among the myriad alternatives available.
The title, “Skin in the Game”, is an expression which refers to an individual’s sharing the risks and rewards of an undertaking in which they are involved. It is most often applied to business and finance, but it is, as the author demonstrates, a very general and powerful concept. An airline pilot has skin in the game along with the passengers: if the plane crashes and kills everybody on board, the pilot will die along with them. This ensures that the pilot shares the passengers’ desire for a safe, uneventful trip and inspires confidence among them. A government “expert” putting together a “food pyramid” to be vigorously promoted among the citizenry and enforced upon captive populations such as schoolchildren or members of the armed forces has no skin in the game. If his or her recommendations create an epidemic of obesity, type 2 diabetes, and cardiovascular disease, the consequences probably won’t become evident until after the “expert” has retired and, in any case, civil servants are not fired or demoted based upon the outcomes of their recommendations.... [Read More]
On November 5, 1958, NASA, only four months old at the time, created the Space Task Group (STG) to manage its manned spaceflight programs. Although there had been earlier military studies of manned space concepts and many saw eventual manned orbital flights growing out of the rocket plane projects conducted by NASA’s predecessor, the National Advisory Committee for Aeronautics (NACA), and the U.S. Air Force, at the time of the STG’s formation the U.S. had no formal manned space program. The initial group, staffed largely with people from the NACA’s Langley Research Center and initially headquartered there, numbered 45 in all, including eight secretaries and “computers”—operators of electromechanical desk calculators. There were no firm plans for manned spaceflight, no budget approved to pay for it, no spacecraft, no boosters, no launch facilities, no mission control centre, no astronauts, no plans to select and train them, and no experience either with human flight above the Earth’s atmosphere or with more than a few seconds of weightlessness. And yet this team, the core of an effort which would grow to include around 400,000 people at NASA and its 20,000 industry and academic contractors, would, just ten years and nine months later, on July 20th, 1969, land two people on the surface of the Moon and then return them safely to the Earth.
Ten years is not a long time when it comes to accomplishing a complicated technological project. Development of the Boeing 787, a mid-sized commercial airliner which flew no further, faster, or higher than its predecessors and was designed and built using computer-aided design and manufacturing technologies, took eight years from project launch to entry into service. The F-35 fighter plane entered service, and then only in small numbers of a single model, a full twenty-three years after the start of its development.... [Read More]
One of the most fundamental deductions Albert Einstein made from the finite speed of light in his theory of special relativity is the relativity of simultaneity—because light takes a finite time to traverse a distance in space, it is not possible to define simultaneity with respect to a universal clock shared by all observers. In fact, purely due to their locations in space, two observers may disagree about the order in which two spatially separated events occurred. It is only because light crosses the distances of everyday life in imperceptibly short times that this effect is unfamiliar to us. Note that the relativity of simultaneity can be due purely to the finite speed of light; while it is usually discussed in conjunction with special relativity and moving observers, it can be observed in situations where none of the other relativistic effects are present. The following animation demonstrates the effect.
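For readers who prefer numbers to pictures, the light-travel-time version of the effect described above can be illustrated with a few lines of arithmetic. The sketch below is my own illustration, not part of the original animation; the positions of the flashes and observers are arbitrary values chosen solely for the example.

```python
# Two flashes are emitted simultaneously (t = 0) at different locations.
# Observers at rest at different positions receive them in different orders,
# purely because light takes a finite time to reach them.

C = 299_792_458.0  # speed of light, m/s

def arrival_time(flash_x, observer_x, emit_t=0.0):
    """Time at which light from a flash at flash_x reaches an observer at observer_x."""
    return emit_t + abs(observer_x - flash_x) / C

flash_a, flash_b = 0.0, 1_000_000.0          # flash positions, metres (illustrative)

for observer_x in (100_000.0, 900_000.0):    # one observer near A, one near B
    t_a = arrival_time(flash_a, observer_x)
    t_b = arrival_time(flash_b, observer_x)
    first = "A" if t_a < t_b else "B"
    print(f"Observer at x = {observer_x:>9,.0f} m sees flash {first} first "
          f"(A after {t_a * 1e3:.3f} ms, B after {t_b * 1e3:.3f} ms)")
```

Swapping the observer's position reverses the order in which the flashes are seen, which is the disagreement described above.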
Fifty years ago, with the successful landing of Apollo 11 on the Moon, it appeared that the road to the expansion of human activity from its cradle on Earth into the immensely larger arena of the solar system was open. The infrastructure built for Project Apollo, including that in the original 1963 development plan for the Merritt Island area, could support Saturn V launches every two weeks. Equipped with nuclear-powered upper stages (under active development by Project NERVA, and accommodated in plans for a Nuclear Assembly Building near the Vehicle Assembly Building), the launchers and support facilities were more than adequate to support construction of a large space station in Earth orbit, a permanently occupied base on the Moon, exploration of near-Earth asteroids, and manned landings on Mars in the 1980s.
But this was not to be. Those envisioning this optimistic future fundamentally misunderstood the motivation for Project Apollo. It was not about, and never had been about, opening the space frontier. Instead, it was a battle for prestige in the Cold War and, once won (indeed, well before the Moon landing), the budget necessary to support such an extravagant program (which threw away skyscraper-sized rockets with every launch) began to evaporate. NASA was ready to do the Buck Rogers stuff, but Washington wasn’t about to come up with the bucks to pay for it. In 1965 and 1966, the NASA budget peaked at over 4% of all federal government spending. By calendar year 1969, when Apollo 11 landed on the Moon, it had already fallen to 2.31% of the federal budget, and, with relatively small year-to-year variations, it has settled at around one half of one percent of the federal budget in recent years. Apart from a small band of space enthusiasts, there is no public clamour for increasing NASA’s budget (which the public consistently overestimates as a much larger fraction of federal spending than it actually receives), and there is no prospect of a political consensus emerging to fund an increase.... [Read More]
In the closing years of the nineteenth century, one of those nagging little discrepancies vexing physicists was the behaviour of the photoelectric effect. Originally discovered in 1887, the phenomenon causes certain metals, when illuminated by light, to absorb the light and emit electrons. The perplexing point was that there was a maximum wavelength (colour of light) at which electron emission would occur: for longer wavelengths, no electrons would be emitted at all, regardless of the intensity of the beam of light. For example, a certain metal might emit electrons when illuminated by green, blue, violet, and ultraviolet light, with the intensity of electron emission proportional to the light intensity, but red or yellow light, regardless of how intense, would not result in a single electron being emitted.
This didn’t make any sense. According to Maxwell’s wave theory of light, which was almost universally accepted and had passed stringent experimental tests, the energy of light depended upon the amplitude of the wave (its intensity), not the wavelength (or, reciprocally, its frequency). And yet the photoelectric effect didn’t behave that way—it appeared that whatever was causing the electrons to be emitted depended on the wavelength of the light and, what’s more, there was a sharp cut-off in frequency below which no electrons would be emitted at all.... [Read More]
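One way to see how sharp that cut-off is: in the quantum description, each photon carries energy E = hc/λ, and an electron is ejected only if that energy exceeds the metal's work function. The sketch below is my own illustration using a commonly quoted work function for sodium of about 2.28 eV (an assumed example value, not a figure from the review); it reproduces exactly the green-yes/red-no behaviour described above.

```python
# Illustrative photoelectric threshold, using the relation E = h*c/lambda.
# The sodium work function (~2.28 eV) is a commonly quoted textbook value,
# assumed here purely for the example.

H = 6.626e-34       # Planck's constant, J*s
C = 2.998e8         # speed of light, m/s
EV = 1.602e-19      # joules per electron-volt

work_function_eV = 2.28
cutoff_nm = H * C / (work_function_eV * EV) * 1e9
print(f"Cut-off wavelength: {cutoff_nm:.0f} nm")   # about 540 nm for this value

for colour, nm in [("red", 650), ("yellow", 580), ("green", 530), ("violet", 410)]:
    photon_eV = H * C / (nm * 1e-9) / EV
    verdict = ("electrons emitted" if photon_eV > work_function_eV
               else "no emission, however intense the light")
    print(f"{colour:>6} ({nm} nm): photon energy {photon_eV:.2f} eV -> {verdict}")
```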
In the first half of the twentieth century Pierre Teilhard de Chardin developed the idea that the process of evolution which had produced complex life and eventually human intelligence on Earth was continuing, destined eventually to reach an Omega Point. Just as individual neurons self-organise to produce the unified consciousness and intelligence of the human brain, individual human minds would eventually coalesce (he was thinking mostly of institutions and technology, not a mystical global mind) into what he called the noosphere—a sphere of unified thought surrounding the globe just like the atmosphere. Could this be possible? Might the Internet be the baby picture of the noosphere? And if a global mind were beginning to emerge, might we be able to detect it with the tools of science? That is the subject of this book about the Global Consciousness Project, which has now been operating for more than two decades, collecting an immense data set which has been, from inception, completely transparent and accessible to anyone inclined to analyse it in any way they can imagine. Written by the founder of the project and operator of the network over its entire history, the book presents the history, technical details, experimental design, formal results, exploratory investigations from the data set, and thoughts about what it all might mean.
Over millennia, many esoteric traditions have held that “all is one”—that all humans and, in some systems of belief, all living things or all of nature are connected in some way and can interact in ways other than physical (ultimately mediated by the electromagnetic force). A common aspect of these philosophies and religions is that individual consciousness is independent of the physical being and may in some way be part of a larger, shared consciousness which we may be able to access through techniques such as meditation and prayer. In this view, consciousness may be thought of as a kind of “field” with the brain acting as a receiver in the same sense that a radio is a receiver of structured information transmitted via the electromagnetic field. Belief in reincarnation, for example, is often based upon the view that death of the brain (the receiver) does not destroy the coherent information in the consciousness field which may later be instantiated in another living brain which may, under some circumstances, access memories and information from previous hosts.... [Read More]
(Saturday Night Science usually appears on the first Saturday of the month. I have moved up the January 2019 edition one week to discuss the New Horizons spacecraft fly-by of Kuiper belt object 2014 MU69, “Ultima Thule”, on New Year’s Day, January 1st, 2019.)
In January 2006, the New Horizons spacecraft was launched to explore Pluto and its moons and, if all went well, proceed onward to another object in the Kuiper Belt of the outer solar system, of which Pluto is one of the largest, closest, and best-known members. New Horizons was the first spacecraft launched from Earth directly on a solar system escape (interstellar) trajectory (the Pioneer and Voyager probes had earlier escaped the solar system, but only with the help of gravity assists from Jupiter and Saturn). It was launched from Earth with such velocity (16.26 km/sec) that it passed the Moon’s orbit in just nine hours, covering a distance that took the Apollo missions three days to traverse.
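A quick back-of-envelope check of those figures (my own arithmetic, not from the book or the mission documentation): at a constant 16.26 km/sec the mean Earth-Moon distance would be crossed in about six and a half hours; the actual figure of roughly nine hours is longer because the spacecraft was slowing as it climbed out of Earth's gravity well.

```python
# Naive constant-speed estimate of the time to reach the Moon's orbital distance.
# Ignores the deceleration imposed by Earth's gravity, so it understates the
# actual (roughly nine hour) figure quoted above.

moon_distance_km = 384_400.0     # mean Earth-Moon distance
launch_speed_km_s = 16.26        # New Horizons' Earth-departure speed

hours = moon_distance_km / launch_speed_km_s / 3600.0
print(f"Constant-speed estimate: {hours:.1f} hours (Apollo took about 72 hours)")
```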
In February 2007, New Horizons flew by Jupiter at a distance of 2.3 million km, using the planet’s gravity to increase its speed to 23 km/sec, thereby knocking three years off its transit time to Pluto. While passing through the Jupiter system, it used its instruments to photograph the planet and its moons. There were no further encounters with solar system objects until arrival at Pluto in 2015, and the spacecraft spent most of its time in hibernation, with most systems powered down to extend their lives, reduce staffing requirements for the support team on Earth, and free up the NASA Deep Space Network to support other missions.... [Read More]
As the tumultuous year 1968 drew to a close, NASA faced a serious problem with the Apollo project. The Apollo missions had been carefully planned to test the Saturn V booster rocket and spacecraft (Command/Service Module [CSM] and Lunar Module [LM]) in a series of increasingly ambitious missions, first in low Earth orbit (where an immediate return to Earth was possible in case of problems), then in an elliptical Earth orbit which would exercise the on-board guidance and navigation systems, followed by lunar orbit, and finally proceeding to the first manned lunar landing. The Saturn V had been tested in two unmanned “A” missions: Apollo 4 in November 1967 and Apollo 6 in April 1968. Apollo 5 was a “B” mission, launched on a smaller Saturn 1B booster in January 1968, to test an unmanned early model of the Lunar Module in low Earth orbit, primarily to verify the operation of its engines and separation of the descent and ascent stages. Apollo 7, launched in October 1968 on a Saturn 1B, was the first manned flight of the Command and Service modules and tested them in low Earth orbit for almost 11 days in a “C” mission.
Apollo 8 was planned to be the “D” mission, in which the Saturn V, in its first manned flight, would launch the Command/Service and Lunar modules into low Earth orbit, where the crew, commanded by Gemini veteran James McDivitt, would simulate the manoeuvres of a lunar landing mission closer to home. McDivitt’s crew was trained and ready to go in December 1968. Unfortunately, the lunar module wasn’t. The lunar module scheduled for Apollo 8, LM-3, had been delivered to the Kennedy Space Center in June of 1968, but was, to put things mildly, a mess. Testing at the Cape discovered more than a hundred serious defects, and by August it was clear that there was no way LM-3 would be ready for a flight in 1968. In fact, it would probably slip to February or March 1969. This, in turn, would push the planned “E” mission from its original date of March 1969 three months later, to June, delaying all subsequent planned missions and placing the goal of landing before the end of 1969 at risk. That mission, for which the crew of commander Frank Borman, command module pilot James Lovell, and lunar module pilot William Anders were training, was aimed at testing the Command/Service and Lunar modules in an elliptical Earth orbit venturing as far as 7400 km from the planet.... [Read More]
In 1861, when young Oliver Heaviside was eleven, his family, supported by his father’s irregular income as an engraver of woodblock illustrations for publications (an art beginning to be threatened by the advent of photography) and by a day school for girls operated by his mother in the family’s house, received a small legacy which allowed them to move to a better part of London and enroll Oliver in the prestigious Camden House School. There he ranked near the top of his class, taking thirteen subjects including Latin, English, mathematics, French, physics, and chemistry. His independent nature and iconoclastic views had already begun to manifest themselves: despite being an excellent student, he dismissed the teaching of Euclid’s geometry in mathematics and of rules of grammar in English as worthless. He believed that both mathematics and language were best learned, as he wrote decades later, “observationally, descriptively, and experimentally.” These principles would guide his work throughout his life.
At age fifteen he took the College of Preceptors examination, the equivalent of today’s A Levels. He was the youngest of the 538 candidates to take the examination and scored fifth overall and first in the natural sciences. This would easily have qualified him for admission to university, but family finances ruled that out. He decided to study on his own at home for two years and then seek a job, perhaps in the burgeoning telegraph industry. He would receive no further formal education after the age of fifteen.... [Read More]
In his 1990 book Life after Television, George Gilder predicted that the personal computer, then mostly boxes that sat on desktops and worked in isolation from one another, would become more personal and mobile, and would be used more to communicate than to compute. In the 1994 revised edition of the book, he wrote, “The most common personal computer of the next decade will be a digital cellular phone with an IP address … connecting to thousands of databases of all kinds.” In contemporary speeches he expanded on the idea, saying, “it will be as portable as your watch and as personal as your wallet; it will recognize speech and navigate streets; it will collect your mail, your news, and your paycheck.” In 2000, he published Telecosm, in which he forecast that the building out of a fibre optic communication infrastructure and the development of successive generations of spread spectrum digital mobile communication technologies would effectively cause the cost of communication bandwidth (the quantity of data which can be transmitted in a given time) to asymptotically approach zero, just as the ability to pack more and more transistors on microprocessor and memory chips was doing for computing.
Clearly, when George Gilder forecasts the future of computing, communication, and the industries and social phenomena that spring from them, it’s wise to pay attention. He’s not infallible: in 1990 he predicted that “in the world of networked computers, no one would have to see an advertisement he didn’t want to see”. Oh, well. The very difference between that happy vision and the advertisement-cluttered world we inhabit today, rife with bots, malware, scams, and serial large-scale security breaches which compromise the personal data of millions of people and expose them to identity theft and other forms of fraud, is the subject of this book: how we got here, and how technology is opening a path to move on to a better place.... [Read More]
On February 24, 1968, Soviet Golf class submarine K-129 sailed from its base in Petropavlovsk for a routine patrol in the Pacific Ocean. These ballistic missile submarines were, at the time, a key part of the Soviet nuclear deterrent. Each boat carried three SS-N-5 missiles, each armed with an 800 kiloton nuclear warhead. The SS-N-5 was an intermediate-range missile which could hit targets inside an enemy country if the submarine approached sufficiently close to the coast. For defence and for attacking other ships, Golf class submarines carried two torpedoes with nuclear warheads as well as torpedoes with conventional high explosive warheads.... [Read More]
Ever since the time of Galileo, the history of astronomy has been punctuated by a series of “great debates”—disputes between competing theories of the organisation of the universe which observation and experiment using available technology are not yet able to resolve one way or another. In Galileo’s time, the great debate was between the Ptolemaic model, which placed the Earth at the centre of the solar system (and universe) and the competing Copernican model which had the planets all revolving around the Sun. Both models worked about as well in predicting astronomical phenomena such as eclipses and the motion of planets, and no observation made so far had been able to distinguish them.
Then, in 1610, Galileo turned his primitive telescope to the sky and observed the bright planets Venus and Jupiter. He found that Venus, just like the Moon, exhibited phases which changed over time, running the full range from crescent to nearly full. This could not happen in the Ptolemaic system, but is precisely what would be expected in the Copernican model, in which Venus circles the Sun in an orbit inside that of Earth. Turning to Jupiter, he found it to be surrounded by four bright satellites (now called the Galilean moons) which orbited the giant planet. This further falsified Ptolemy’s model, in which the Earth was the sole centre of motion around which all celestial bodies revolved. Since anybody could build their own telescope and confirm these observations, this effectively resolved the first great debate in favour of the Copernican heliocentric model, although some hold-outs in positions of authority resisted its dethroning of the Earth as the centre of the universe.... [Read More]
The drawing of blood for laboratory tests is one of my least favourite parts of a routine visit to the doctor’s office. Now, I have no fear of needles and hardly notice the stick, but frequently the doctor’s assistant who draws the blood (whom I’ve nicknamed Vampira) has difficulty finding the vein to get a good flow and has to try several times. On one occasion she made an internal puncture which resulted in a huge, ugly bruise that looked like I’d slammed a car door on my arm. I wondered why they need so much blood, and why draw it into so many different containers? (Eventually, I researched this, having been intrigued by the issue during the O. J. Simpson trial; if you’re curious, here is the information.) Then, after the blood is drawn, it has to be sent off to the laboratory, which sends back the results days later. If something pops up in the test results, you have to go back for a second visit with the doctor to discuss it.
Wouldn’t it be great if they could just stick a fingertip and draw a drop or two of blood, as is done by diabetics to test blood sugar, then run all the tests on it? Further, imagine if, after taking the drop of blood, it could be put into a desktop machine right in the doctor’s office which would, in a matter of minutes, produce test results you could discuss immediately with the doctor. And if such a technology existed and followed the history of decline in price with increase in volume which has characterised other high technology products since the 1970s, it might be possible to deploy the machines into the homes of patients being treated with medications so their effects could be monitored and relayed directly to their physicians in case an anomaly was detected. It wouldn’t quite be a Star Trek medical tricorder, but it would be one step closer. With the cost of medical care rising steeply, automating diagnostic blood tests and bringing them to the mass market seemed an excellent candidate as the “next big thing” for Silicon Valley to revolutionise.... [Read More]
There are few questions in science as simple to state and profound in their implications as “are we alone?”—are humans the only species with a technological civilisation in the galaxy, or in the universe? This has been a matter of speculation by philosophers, theologians, authors of fiction, and innumerable people gazing at the stars since antiquity, but it was only in the years after World War II, which had seen the development of high-power microwave transmitters and low-noise receivers for radar, that it dawned upon a few visionaries that this had now become a question which could be scientifically investigated.
The propagation of radio waves through the atmosphere and the interstellar medium is governed by basic laws of physics, and the advent of radio astronomy demonstrated that many objects in the sky, some very distant, could be detected in the microwave spectrum. But if we were able to detect these natural sources, what would happen if we connected a powerful transmitter to our radio telescope and sent a signal to a nearby star? It was easy to calculate that, given the technology of the time (around 1960), existing microwave transmitters and radio telescopes could transmit messages across interstellar distances.... [Read More]
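The calculation alluded to is, in essence, a link budget. The sketch below is my own rough version of such an estimate, not the author's: it assumes a one-megawatt transmitter, 76-metre dishes (Jodrell Bank class) at both ends, the 21 cm hydrogen line, a ten-light-year path, a 50 K receiving system, and a 1 Hz bandwidth, all of which are illustrative figures chosen for the example.

```python
# Rough interstellar link budget using the Friis transmission equation:
#   P_r = P_t * G_t * G_r * (lambda / (4*pi*d))**2
# compared against thermal noise power N = k * T_sys * B.
# Every parameter below is an assumed, illustrative value.
import math

K_BOLTZMANN = 1.380649e-23     # J/K
LIGHT_YEAR_M = 9.4607e15       # metres per light-year

def dish_gain(diameter_m, wavelength_m, efficiency=0.5):
    """Approximate gain of a parabolic dish antenna."""
    return efficiency * (math.pi * diameter_m / wavelength_m) ** 2

p_tx = 1.0e6                          # transmitter power, W
wavelength = 0.21                     # 21 cm hydrogen line
distance = 10 * LIGHT_YEAR_M          # ten light-years
gain = dish_gain(76.0, wavelength)    # 76 m dish at both ends

p_rx = p_tx * gain * gain * (wavelength / (4 * math.pi * distance)) ** 2
noise = K_BOLTZMANN * 50.0 * 1.0      # 50 K system temperature, 1 Hz bandwidth

print(f"Received power: {p_rx:.2e} W   Noise power: {noise:.2e} W")
print(f"Signal-to-noise ratio: {10 * math.log10(p_rx / noise):.1f} dB")
```

With these assumed figures the received signal comes out comfortably above the noise in a narrow channel, consistent with the conclusion described above.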
Click the title of this post to see the interactive simulation.
The display above shows, from three different physical perspectives, the orbit of a low-mass test particle (the small red circle) around a non-rotating black hole, represented by a grey circle in the panel at the right whose radius is the black hole’s gravitational radius, or event horizon. Kepler’s laws of planetary motion, grounded in Newton’s theory of gravity, state that the orbit of a test particle around a massive object is an ellipse with one focus at the centre of the massive object. But when gravitational fields are strong, as is the case for collapsed objects like neutron stars and black holes, Newton’s theory is inaccurate; calculations must be done using Einstein’s theory of General Relativity.... [Read More]
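For reference, the “gravitational radius” mentioned above, and the reason Keplerian ellipses fail close to the hole, can be written down compactly. These are the standard textbook expressions for the Schwarzschild (non-rotating) case, not anything specific to this simulation. The horizon radius is

$$ r_s = \frac{2GM}{c^2}, $$

and, in geometric units where $G = c = 1$, a test particle with energy $E$ and angular momentum $L$ per unit mass moves according to

$$ \left(\frac{dr}{d\tau}\right)^2 = E^2 - \left(1 - \frac{2M}{r}\right)\left(1 + \frac{L^2}{r^2}\right). $$

Expanding the right-hand side produces a term proportional to $1/r^3$ which has no Newtonian counterpart; it is this term that makes orbits near the hole precess rather than close into ellipses, and inside $r = 6M$ (the innermost stable circular orbit) no stable circular orbit exists at all.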