The nature of time has perplexed philosophers and scientists from the ancient Greeks (and probably before) to the present day. Despite two and a half millennia of reflexion upon the problem and spectacular success in understanding many other aspects of the universe we inhabit, not only has little progress been made on the question of time, but to a large extent we are still puzzling over the same problems which vexed thinkers in the time of Socrates: Why does there seem to be an inexorable arrow of time which can be perceived in physical processes (you can scramble an egg, but just try to unscramble one)? Why do we remember the past, but not the future? Does time flow by us, living in an eternal present, or do we move through time? Do we have free will, or is that an illusion and is the future actually predestined? Can we travel to the past or to the future? If we are typical observers in an eternal or very long-persisting universe, why do we find ourselves so near its beginning (the big bang)?
Indeed, what we have learnt about time makes these puzzles even more enigmatic. For it appears, based both on theory and all experimental evidence to date, that the microscopic laws of physics are completely reversible in time: any physical process can (and does) go in both the forward and reverse time directions equally well. (Actually, it’s a little more complicated than that: just reversing the direction of time does not yield identical results, but simultaneously reversing the direction of time [T], interchanging left and right [parity: P], and swapping particles for antiparticles [charge: C] yields identical results under the so-called “CPT symmetry” which, as far as is known, is absolute. The tiny violation of time reversal symmetry by itself in weak interactions seems, to most physicists, inadequate to explain the perceived unidirectional arrow of time, although some disagree.)... [Read More]
Prior to the 1920s, most aircraft pilots had no means of escape in case of mechanical failure or accident. During World War I, one out of every eight combat pilots was shot down or killed in a crash. Germany experimented with cumbersome parachutes stored in bags in a compartment behind the pilot, but these often failed to deploy properly if the plane was in a spin or became tangled in the aircraft structure after deployment. Still, they did save the lives of a number of German pilots. (On the other hand, one of them was Hermann Göring.) Allied pilots were not issued parachutes because their commanders feared the loss of planes more than pilots, and worried pilots would jump rather than try to save a damaged plane.
From the start of World War II, military aircrews were routinely issued parachutes, and backpack or seat pack parachutes with ripcord deployment had become highly reliable. As the war progressed and aircraft performance rapidly increased, it became clear that although parachutes could save air crew, physically escaping from a damaged plane at high velocities and altitudes was a formidable problem. The U.S. P-51 Mustang, of which more than 15,000 were built, cruised at 580 km/hour and had a maximum speed of 700 km/hour. It was physically impossible for a pilot to escape from the cockpit into such a wind blast, and even if they managed to do so, they would likely be torn apart by collision with the fuselage or tail an instant later. A pilot’s only hope was that the plane would slow to a speed at which escape was possible before crashing into the ground, bursting into flames, or disintegrating.... [Read More]
Before electronic computers had actually been built, Alan Turing mathematically proved a fundamental and profound property of them which has been exploited in innumerable ways as they developed and became central to many of our technologies and social interactions. A computer of sufficient complexity, which is, in fact, not very complex at all, can simulate any other computer or, in fact, any deterministic physical process whatsoever, as long as it is understood sufficiently to model in computer code and the system being modelled does not exceed the capacity of the computer—or the patience of the person running the simulation. Indeed, some of the first applications of computers were in modelling physical processes such as the flight of ballistic projectiles and the hydrodynamics of explosions. Today, computer modelling and simulation have become integral to the design process for everything from high-performance aircraft to toys, and many commonplace objects in the modern world could not have been designed without the aid of computer modelling. It certainly changed my life.
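Turing’s universality result can be illustrated in a few lines: one short interpreter can run any Turing machine whatsoever, given only its transition table. The sketch below is a toy illustration, not anything from the book; the example machine (a made-up one) increments a binary number.

```python
# A minimal Turing machine interpreter: the same few lines can run ANY
# machine, given its transition table -- a toy illustration of universality.
def run(tape, state, pos, rules, blank="_"):
    tape = dict(enumerate(tape))          # sparse tape, indexed by position
    halting = {s for (s, _) in rules}     # states with at least one rule
    while state in halting:
        symbol = tape.get(pos, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    cells = sorted(tape)
    return "".join(tape[i] for i in range(cells[0], cells[-1] + 1)).strip(blank)

# Example program (assumed for illustration): binary increment, with the
# head starting on the least-significant bit.
rules = {
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry -> 0, propagate carry
    ("carry", "0"): ("1", "L", "done"),   # 0 + carry -> 1, finished
    ("carry", "_"): ("1", "L", "done"),   # ran off the left end: new digit
}
print(run("1011", "carry", 3, rules))     # 1011 (11) + 1 = 1100 (12)
```

The interpreter knows nothing about incrementing; all the “machine” lives in the `rules` table, which is the essence of the universality argument.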
Almost as soon as there were computers, programmers realised that their ability to simulate, well…anything made them formidable engines for playing games. Computer gaming was originally mostly a furtive and disreputable activity, perpetrated by gnome-like programmers on the graveyard shift while the computer was idle, having finished the “serious” work paid for by unimaginative customers (who actually rose before the crack of noon!). But as the microelectronics revolution slashed the size and price of computers to something individuals could afford for their own use (or, according to the computer Puritans of the previous generations, abuse), computer gaming came into its own. Some modern computer games have production and promotion budgets larger than Hollywood movies, and their characters and story lines have entered the popular culture. As computer power has grown exponentially, games have progressed from tic-tac-toe, through text-based adventures and simple icon-character video games, to realistic three-dimensional simulated worlds in which players explore a huge world, interact with other human players and non-player characters (endowed with their own rudimentary artificial intelligence) within the game, and, in some games and simulated worlds, can extend the simulation by building their own objects with which others can interact. If your last experience with computer games was the Colossal Cave Adventure or Pac-Man, try a modern game or virtual world—you may be amazed.... [Read More]
In 1966, the author graduated from Boston University with a bachelor’s degree in mathematics. He had no immediate job prospects or career plans. He thought he might be interested in computer programming due to a love of solving puzzles, but he had never programmed a computer. When asked, in one of numerous job interviews, how he would go about writing a program to alphabetise a list of names, he admitted he had no idea. One day, walking home from yet another interview, he passed an unimpressive brick building with a sign identifying it as the “MIT Instrumentation Laboratory”. He’d heard a little about the place and, on a lark, walked in and asked if they were hiring. The receptionist handed him a long application form, which he filled out, and was then immediately sent to interview with a personnel officer. Eyles was amazed when the personnel man seemed bent on persuading him to come to work at the Lab. After reference checking, he was offered a choice of two jobs: one in the “analysis group” (whatever that was), and another on the team developing computer software for landing the Apollo Lunar Module (LM) on the Moon. That sounded interesting, and the job had another benefit attractive to a 21-year-old just graduating from university: it came with deferment from the military draft, which was going into high gear as U.S. involvement in Vietnam deepened.
Near the start of the Apollo project, MIT’s Instrumentation Laboratory, led by the legendary “Doc” Charles Stark Draper, won a sole source contract to design and program the guidance system for the Apollo spacecraft, which came to be known as the “Apollo Primary Guidance, Navigation, and Control System” (PGNCS, pronounced “pings”). Draper and his laboratory had pioneered inertial guidance systems for aircraft, guided missiles, and submarines, and had in-depth expertise in all aspects of the challenging problem of enabling the Apollo spacecraft to navigate from the Earth to the Moon, land on the Moon, and return to the Earth without any assistance from ground-based assets. In a normal mission, it was expected that ground-based tracking and computers would assist those on board the spacecraft, but in the interest of reliability and redundancy, it was required that completely autonomous on-board navigation be capable of accomplishing the mission.... [Read More]
This will be a somewhat different installment of Saturday Night Science. Rather than discussing a book or news related to science and technology, this time, motivated by having recently read and reviewed Edward Snowden’s Permanent Record, I’m going to survey some of the tools individuals can use to attempt to reclaim a bit of their privacy in the face of ubiquitous mass surveillance by governments and technology companies. This is not intended to be an encyclopedic survey of the field, which is vast, complicated, and constantly changing. Instead, this is an introduction intended to point readers toward tools and approaches, many of which I have used myself, discuss trade-offs between security and convenience, and provide links for further research. The various topics are largely independent of one another, and are discussed in no particular order.
Private Web Browsing
At this writing, the most widely used Web browser is Google’s Chrome, with a market share around 65% which is expected to grow to more than 70% by the end of 2019. Chrome is famous for “phoning home”: every site you visit, link you follow, search you perform, and choice you make from the suggestions it so helpfully provides you is potentially reported back to Google headquarters. This is stored in a dossier maintained about you, especially if you have, as you’re encouraged to, signed the browser in to your Google Account. That’s how they manage to show you advertisements so exquisitely (or sometimes humorously) targeted based upon your online activity. But you don’t have to be paranoid to worry about the consequences of, dare I say, such a permanent record being used against you should you come to the attention of the enforcers of good-think who abound in Silicon Valley.... [Read More]
This book is volume four in the author’s Incerto series, following Fooled by Randomness, The Black Swan, and Antifragile. In it, he continues to explore the topics of uncertainty, risk, decision making under such circumstances, and how both individuals and societies winnow out what works from what doesn’t in order to choose wisely among the myriad alternatives available.
The title, “Skin in the Game”, is an aphorism which refers to an individual’s sharing the risks and rewards of an undertaking in which they are involved. This is often applied to business and finance, but it is, as the author demonstrates, a very general and powerful concept. An airline pilot has skin in the game along with the passengers. If the plane crashes and kills everybody on board, the pilot will die along with them. This ensures that the pilot shares the passengers’ desire for a safe, uneventful trip and inspires confidence among them. A government “expert” putting together a “food pyramid” to be vigorously promoted among the citizenry and enforced upon captive populations such as school children or members of the armed forces, has no skin in the game. If his or her recommendations create an epidemic of obesity, type 2 diabetes, and cardiovascular disease, that probably won’t happen until after the “expert” has retired and, in any case, civil servants are not fired or demoted based upon the consequences of their recommendations.... [Read More]
On November 5, 1958, NASA, only four months old at the time, created the Space Task Group (STG) to manage its manned spaceflight programs. Although there had been earlier military studies of manned space concepts and many saw eventual manned orbital flights growing out of the rocket plane projects conducted by NASA’s predecessor, the National Advisory Committee for Aeronautics (NACA) and the U.S. Air Force, at the time of the STG’s formation the U.S. had no formal manned space program. The initial group, staffed largely with people from the NACA’s Langley Research Center and initially headquartered there, numbered 45 in all, including eight secretaries and “computers”: operators of electromechanical desk calculators. There were no firm plans for manned spaceflight, no budget approved to pay for it, no spacecraft, no boosters, no launch facilities, no mission control centre, no astronauts, no plans to select and train them, and no experience either with human flight above the Earth’s atmosphere or with more than a few seconds of weightlessness. And yet this team, the core of an effort which would grow to include around 400,000 people at NASA and its 20,000 industry and academic contractors, would, just ten years and nine months later, on July 20th, 1969, land two people on the surface of the Moon and then return them safely to the Earth.
Ten years is not a long time when it comes to accomplishing a complicated technological project. Development of the Boeing 787, a mid-sized commercial airliner which flew no further, faster, or higher than its predecessors, and was designed and built using computer-aided design and manufacturing technologies, took eight years from project launch to entry into service, while the F-35 fighter entered service, and then only in small numbers of a single model, a full twenty-three years after the start of its development.... [Read More]
One of the most fundamental deductions Albert Einstein made from the finite speed of light in his theory of special relativity is the relativity of simultaneity—because light takes a finite time to traverse a distance in space, it is not possible to define simultaneity with respect to a universal clock shared by all observers. In fact, purely due to their locations in space, two observers may disagree about the order in which two spatially separated events occurred. It is only because the speed of light is so great compared to distances we are familiar with in everyday life that this effect seems unfamiliar to us. Note that the relativity of simultaneity can be purely due to the finite speed of light; while it is usually discussed in conjunction with special relativity and moving observers, it can be observed in situations where none of the other relativistic effects are present. The following animation demonstrates the effect.
... [Read More]
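The finite-light-speed part of the effect can also be sketched numerically: light from two flashes, simultaneous in a given frame, reaches differently placed observers in a different order. The positions and event coordinates below are made-up numbers for illustration, in units where the speed of light is 1.

```python
# Two simultaneous flashes are seen in a different order by observers at
# different positions, purely because light takes time to reach them.
# Units: positions in light-seconds, times in seconds, so c = 1.
C = 1.0

def arrival_order(observer_x, events):
    """Return event labels sorted by when their light reaches the observer."""
    arrivals = {label: t + abs(observer_x - x) / C
                for label, (x, t) in events.items()}
    return sorted(arrivals, key=arrivals.get)

# Flash A at x = 0 and flash B at x = 10, both at t = 0: simultaneous
# in this frame.
events = {"A": (0.0, 0.0), "B": (10.0, 0.0)}

print(arrival_order(2.0, events))  # observer near A sees A first: ['A', 'B']
print(arrival_order(8.0, events))  # observer near B sees B first: ['B', 'A']
```

Note that no motion is involved here, consistent with the point above that this aspect of the effect is due purely to the finite speed of light, independent of the other phenomena of special relativity.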
Fifty years ago, with the successful landing of Apollo 11 on the Moon, it appeared that the road to the expansion of human activity from its cradle on Earth into the immensely larger arena of the solar system was open. The infrastructure built for Project Apollo, including the facilities in the original 1963 development plan for the Merritt Island area, could support Saturn V launches every two weeks. Equipped with nuclear-powered upper stages (under active development by Project NERVA, and accommodated in plans for a Nuclear Assembly Building near the Vehicle Assembly Building), the launchers and support facilities were more than adequate to support construction of a large space station in Earth orbit, a permanently occupied base on the Moon, exploration of near-Earth asteroids, and manned landings on Mars in the 1980s.
But this was not to be. Those envisioning this optimistic future fundamentally misunderstood the motivation for Project Apollo. It was not, and never was, about opening the space frontier. Instead, it was a battle for prestige in the Cold War and, once won (indeed, well before the Moon landing), the budget necessary to support such an extravagant program (which threw away skyscraper-sized rockets with every launch) began to evaporate. NASA was ready to do the Buck Rogers stuff, but Washington wasn’t about to come up with the bucks to pay for it. In 1965 and 1966, the NASA budget peaked at over 4% of all federal government spending. By calendar year 1969, when Apollo 11 landed on the Moon, it had already fallen to 2.31% of the federal budget, and with relatively small year-to-year variations, has settled at around one half of one percent of the federal budget in recent years. Apart from a small band of space enthusiasts, there is no public clamour for increasing NASA’s budget (which is consistently over-estimated by the public as a much larger fraction of federal spending than it actually receives), and there is no prospect for a political consensus emerging to fund an increase.... [Read More]
In the closing years of the nineteenth century, one of those nagging little discrepancies vexing physicists was the behaviour of the photoelectric effect. Originally discovered in 1887, the phenomenon causes certain metals, when illuminated by light, to absorb the light and emit electrons. The perplexing point was that there was a maximum wavelength (colour of light), or equivalently a minimum frequency, for electron emission: for longer wavelengths, no electrons would be emitted at all, regardless of the intensity of the beam of light. For example, a certain metal might emit electrons when illuminated by green, blue, violet, and ultraviolet light, with the intensity of electron emission proportional to the light intensity, but red or yellow light, regardless of how intense, would not result in a single electron being emitted.
This didn’t make any sense. According to Maxwell’s wave theory of light, which was almost universally accepted and had passed stringent experimental tests, the energy of light depended upon the amplitude of the wave (its intensity), not the wavelength (or, reciprocally, its frequency). And yet the photoelectric effect didn’t behave that way—it appeared that whatever was causing the electrons to be emitted depended on the wavelength of the light, and what’s more, there was a sharp cut-off below which no electrons would be emitted at all.... [Read More]
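In the quantum picture which resolved the puzzle, light arrives in packets whose energy depends only on frequency, E = hν, and an electron is ejected only when a single packet exceeds the metal’s work function. A quick back-of-envelope calculation shows the threshold; the caesium work function used below is an approximate illustrative value, not a figure from the book.

```python
# Photon energy E = h*c/lambda. An electron is ejected only when E exceeds
# the metal's work function phi: a sub-threshold photon cannot do it alone,
# no matter how intense the beam.
H_C = 1239.84          # h*c in eV*nm (handy spectroscopic constant)
PHI_CAESIUM = 2.1      # work function of caesium in eV (approximate)

def max_kinetic_energy(wavelength_nm, phi_ev):
    """Einstein's photoelectric relation: K_max = h*nu - phi, or None below threshold."""
    e_photon = H_C / wavelength_nm
    return e_photon - phi_ev if e_photon > phi_ev else None

for colour, nm in [("red", 650), ("green", 530), ("blue", 450)]:
    k = max_kinetic_energy(nm, PHI_CAESIUM)
    print(colour, "-> no emission" if k is None else f"-> K_max = {k:.2f} eV")
```

For this metal, red photons carry about 1.9 eV each, below the 2.1 eV threshold, so no intensity of red light ejects an electron, while green and blue photons exceed it; this is exactly the sharp cut-off the wave theory could not explain.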
In the first half of the twentieth century Pierre Teilhard de Chardin developed the idea that the process of evolution which had produced complex life and eventually human intelligence on Earth was continuing, destined eventually to reach an Omega Point. Just as individual neurons self-organise to produce the unified consciousness and intelligence of the human brain, individual human minds would coalesce (he was thinking mostly of institutions and technology, not a mystical global mind) into what he called the noosphere: a sphere of unified thought surrounding the globe just like the atmosphere. Could this be possible? Might the Internet be the baby picture of the noosphere? And if a global mind was beginning to emerge, might we be able to detect it with the tools of science? That is the subject of this book about the Global Consciousness Project, which has now been operating for more than two decades, collecting an immense data set which has been, from inception, completely transparent and accessible to anyone inclined to analyse it in any way they can imagine. Written by the founder of the project and operator of the network over its entire history, the book presents the history, technical details, experimental design, formal results, exploratory investigations from the data set, and thoughts about what it all might mean.
Over millennia, many esoteric traditions have held that “all is one”—that all humans and, in some systems of belief, all living things or all of nature are connected in some way and can interact in ways other than the physical (which is ultimately mediated by the electromagnetic force). A common aspect of these philosophies and religions is that individual consciousness is independent of the physical being and may in some way be part of a larger, shared consciousness which we may be able to access through techniques such as meditation and prayer. In this view, consciousness may be thought of as a kind of “field” with the brain acting as a receiver in the same sense that a radio is a receiver of structured information transmitted via the electromagnetic field. Belief in reincarnation, for example, is often based upon the view that death of the brain (the receiver) does not destroy the coherent information in the consciousness field which may later be instantiated in another living brain which may, under some circumstances, access memories and information from previous hosts.... [Read More]
(Saturday Night Science usually appears on the first Saturday of the month. I have moved up the January 2019 edition one week to discuss the New Horizons spacecraft fly-by of Kuiper belt object 2014 MU69, “Ultima Thule”, on New Year’s Day, January 1st, 2019.)
In January 2006 the New Horizons spacecraft was launched to explore Pluto and its moons and, if all went well, proceed onward to another object in the Kuiper Belt of the outer solar system, of which Pluto is one of the largest, closest, and best known members. New Horizons was the first spacecraft launched from Earth directly on a solar system escape (interstellar) trajectory (the Pioneer and Voyager probes had earlier escaped the solar system, but only with the help of gravity assists from Jupiter and Saturn). It was launched from Earth with such velocity (16.26 km/sec) that it passed the Moon’s orbit in just nine hours, a distance that took the Apollo missions three days to traverse.
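The nine-hour figure is longer than a naive calculation suggests: at a constant 16.26 km/s, covering the 384,400 km to the Moon’s orbit would take only about six and a half hours, but Earth’s gravity brakes the spacecraft continuously. A back-of-envelope model (the injection radius, a purely radial trajectory, and neglect of the Moon’s and Sun’s gravity are all simplifying assumptions) integrates the speed given by energy conservation:

```python
import math

MU = 398_600.4        # Earth's gravitational parameter, km^3/s^2
R0 = 6_570.0          # assumed injection radius, km (~200 km altitude)
V0 = 16.26            # injection speed from the text, km/s
R_MOON = 384_400.0    # mean radius of the Moon's orbit, km

def speed(r):
    """Speed at radius r from energy conservation on a radial escape:
    v(r)^2 = v0^2 - 2*MU/R0 + 2*MU/r."""
    return math.sqrt(V0**2 - 2*MU/R0 + 2*MU/r)

# Integrate t = integral of dr / v(r) with a simple midpoint rule.
steps = 100_000
dr = (R_MOON - R0) / steps
t = sum(dr / speed(R0 + (i + 0.5) * dr) for i in range(steps))

print(f"Speed at lunar distance: {speed(R_MOON):.2f} km/s")
print(f"Time to cross the Moon's orbit: {t / 3600:.1f} hours")
```

Even this crude model lands in the right neighbourhood of nine hours, and shows the spacecraft still moving at roughly 12 km/s when it crossed the Moon’s orbit, well above Earth’s escape speed at that distance.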
In February 2007, New Horizons flew by Jupiter at a distance of 2.3 million km, using the planet’s gravity to increase its speed to 23 km/sec, thereby knocking three years off its transit time to Pluto. While passing through the Jupiter system, it used its instruments to photograph the planet and its moons. There were no further encounters with solar system objects until arrival at Pluto in 2015, and the spacecraft spent most of its time in hibernation, with most systems powered down to extend their lives, reduce staffing requirements for the support team on Earth, and free up the NASA Deep Space Network to support other missions.... [Read More]
As the tumultuous year 1968 drew to a close, NASA faced a serious problem with the Apollo project. The Apollo missions had been carefully planned to test the Saturn V booster rocket and spacecraft (Command/Service Module [CSM] and Lunar Module [LM]) in a series of increasingly ambitious missions, first in low Earth orbit (where an immediate return to Earth was possible in case of problems), then in an elliptical Earth orbit which would exercise the on-board guidance and navigation systems, followed by lunar orbit, and finally proceeding to the first manned lunar landing. The Saturn V had been tested in two unmanned “A” missions: Apollo 4 in November 1967 and Apollo 6 in April 1968. Apollo 5 was a “B” mission, launched on a smaller Saturn 1B booster in January 1968, to test an unmanned early model of the Lunar Module in low Earth orbit, primarily to verify the operation of its engines and separation of the descent and ascent stages. Apollo 7, launched in October 1968 on a Saturn 1B, was the first manned flight of the Command and Service modules and tested them in low Earth orbit for almost 11 days in a “C” mission.
Apollo 8 was planned to be the “D” mission, in which the Saturn V, in its first manned flight, would launch the Command/Service and Lunar modules into low Earth orbit, where the crew, commanded by Gemini veteran James McDivitt, would simulate the maneuvers of a lunar landing mission closer to home. McDivitt’s crew was trained and ready to go in December 1968. Unfortunately, the lunar module wasn’t. The lunar module scheduled for Apollo 8, LM-3, had been delivered to the Kennedy Space Center in June of 1968, but was, to put things mildly, a mess. Testing at the Cape discovered more than a hundred serious defects, and by August it was clear that there was no way LM-3 would be ready for a flight in 1968. In fact, it would probably slip to February or March 1969. That slip would, in turn, push back the planned “E” mission, for which the crew of commander Frank Borman, command module pilot James Lovell, and lunar module pilot William Anders were training. Intended to test the Command/Service and Lunar modules in an elliptical Earth orbit venturing as far as 7400 km from the planet, the “E” mission had originally been planned for March 1969; moving it three months later, to June, would delay all subsequent planned missions and place the goal of landing before the end of 1969 at risk.... [Read More]
In 1861, when Oliver Heaviside was eleven, his family, supported by his father’s irregular income as an engraver of woodblock illustrations for publications (an art beginning to be threatened by the advent of photography) and by a day school for girls operated by his mother in the family’s house, received a small legacy which allowed them to move to a better part of London and enroll Oliver in the prestigious Camden House School. There he ranked among the top of his class, taking thirteen subjects including Latin, English, mathematics, French, physics, and chemistry. His independent nature and iconoclastic views had already begun to manifest themselves: despite being an excellent student he dismissed the teaching of Euclid’s geometry in mathematics and English rules of grammar as worthless. He believed that both mathematics and language were best learned, as he wrote decades later, “observationally, descriptively, and experimentally.” These principles would guide his career throughout his life.
At age fifteen he took the College of Preceptors examination, the equivalent of today’s A Levels. He was the youngest of the 538 candidates to take the examination and scored fifth overall and first in the natural sciences. This would easily have qualified him for admission to university, but family finances ruled that out. He decided to study on his own at home for two years and then seek a job, perhaps in the burgeoning telegraph industry. He would receive no further formal education after the age of fifteen.... [Read More]
In his 1990 book Life after Television, George Gilder predicted that the personal computer, then mostly boxes that sat on desktops and worked in isolation from one another, would become more personal, mobile, and be used more to communicate than to compute. In the 1994 revised edition of the book, he wrote: “The most common personal computer of the next decade will be a digital cellular phone with an IP address … connecting to thousands of databases of all kinds.” In contemporary speeches he expanded on the idea, saying, “it will be as portable as your watch and as personal as your wallet; it will recognize speech and navigate streets; it will collect your mail, your news, and your paycheck.” In 2000, he published Telecosm, where he forecast that the building out of a fibre optic communication infrastructure and the development of successive generations of spread spectrum digital mobile communication technologies would effectively cause the cost of communication bandwidth (the quantity of data which can be transmitted in a given time) to asymptotically approach zero, just as the ability to pack more and more transistors on microprocessor and memory chips was doing for computing.
Clearly, when George Gilder forecasts the future of computing, communication, and the industries and social phenomena that spring from them, it’s wise to pay attention. He’s not infallible: in 1990 he predicted that “in the world of networked computers, no one would have to see an advertisement he didn’t want to see”. Oh, well. The very difference between that happy vision and the advertisement-cluttered world we inhabit today, rife with bots, malware, scams, and serial large-scale security breaches which compromise the personal data of millions of people and expose them to identity theft and other forms of fraud, is the subject of this book: how we got here, and how technology is opening a path to move on to a better place.... [Read More]