The “Groups” facility on Ratburger.org is based upon the Group feature of BuddyPress, a plug-in (or, more precisely, bolt-on) for WordPress intended to turn what was originally blogging software into a crude kind of social network, with emphasis on “crude”. BuddyPress can best be thought of as a kludge hanging in a bag crookedly nailed to the side of the hack which is WordPress. Much of the software development work since the launch of Ratburger has gone into fixing outright flaws and limitations of BuddyPress. Raw BuddyPress is something to behold: group posts and comments, once posted, cannot be edited or deleted except by an administrator, and there is near-complete opacity about what is going on, with notifications completely haphazard.
The whole Groups facility is a hack. The way a discussion group add-on to WordPress should work is self-evident to anybody who gives it a few minutes’ thought: each group should be its own little site, with its own posts and comments, but with notifications confined to members. Posts could be promoted from groups to public pages by administrators. All of the composition, editing, and administration functions should be identical for the main site and groups.
What we have, of course, is nothing like that. Groups don’t work remotely like the main site, and users are constantly frustrated trying to do simple things in groups which are easy on the main site.
For example, consider including an image in your post or comment. On the main site, you just use the “Add Media” button, upload the image, and shazam, there it is! But in a group, you’ll look in vain for an Add Media button—groups were basically intended by the developers of BuddyPress as glorified text-only bulletin boards, and if you want to do something as 1995-era edgy as including an image in your post, you have to jump through hoops. Here are the details of the hoops, in case you remain undeterred.
First, upload your image to the Media Library. There’s no “Add Media” button, but you can open up another tab or window, go to the Dashboard (via the little thing that looks like a speedometer in the bar at the top left of the page), then select Media/Add New. This will display the familiar “Upload New Media” page, where you can select an image on your local computer and upload it to Ratburger. This does not include the image in your group post; it simply adds it to your Media Library.
Next, display the Media Library. In the sidebar, click Media/Library and you’ll see all images you’ve uploaded, with the most recent one at top left. Click it and you’ll see the image full sized. Make a note of (copy and paste to an external text file) the following information about the image: its URL, its width and height in pixels, and its alt text (if any).
For example, for an image I uploaded some time ago, I’d note:

URL: (the image’s address, as shown in the Media Library)
Width: 640
Height: 425
Alt Text: HDR image: total solar eclipse 2010-07-11
Now go to the group post where you wish to include the image. Starting on a line by itself, include an HTML img tag for the image like the following, with the src set to the image’s URL from the Media Library:

<img src="(image URL)"
    width="640" height="425" class="aligncenter"
    alt="Alt Text: HDR image: total solar eclipse 2010-07-11" />
Replace the various fields with the information you’ve recorded for your image. The “aligncenter” specification will centre the image (what you usually want); you can also use “alignleft” or “alignright” if you know what you’re doing.
If your image is larger than will fit on the screen (for example, images from digital cameras), you’ll need to recalculate the width and height to rescale it to fit. You typically don’t want an image to be wider than 640 pixels, and 600 pixels is a good choice. Let’s assume you have a monster image which is 6016×4016 pixels (as produced by a Nikon D600 digital camera) and you wish to show it 600 pixels wide. You’d specify width="600" in the img tag, but then you need to calculate the height in order to preserve the shape (“aspect ratio”) of the image. To do this, multiply the original height by the new width divided by the original width, in this case:
4016 × (600 / 6016) ≈ 401
(round to the nearest integer), and then specify height="401".
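The calculation above can be wrapped in a one-line helper (a sketch of the arithmetic only; the function name is my own):

```python
# Given an image's original dimensions and a desired display width,
# compute the height which preserves the aspect ratio.
def scaled_height(orig_width, orig_height, new_width):
    return round(orig_height * new_width / orig_width)

# The Nikon D600 example from the text: 6016x4016 scaled to 600 wide.
print(scaled_height(6016, 4016, 600))  # 401
```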
When you publish your post or comment in the group, the image should now appear.
Why should something so conceptually simple as including an image in a discussion group require such contortions? Welcome to “the software that runs one third of the Web” (which is what they say, without adding the concluding phrase, “into the ground”). As I mock their download page:
Yesterday, by chance, reading involved two things: a chapter of history and a short story. Written by men living 2300 years apart, these describe the very same thing: the workings of the human heart, in particular at times of trial, and the results of those workings in terms of human suffering and survival. In the history, people lied to everyone about everything in an attempt to save their own skins, and failed, earning themselves sordid deaths. In the story, a man is led by his absolute devotion to truth at least to die with integrity after having behaved well.
Thucydides claims to have based his history on first-hand reports, and to have fleshed it out with his own considered reconstructions of the speeches made by the great men on all sides during the Peloponnesian War. That’s fine; all well and good, but to read it is to scan multiple recursions of the same theme, here paraphrased:
The Plutonians sent forty ships to lay waste the lands of the Apricotians. The Apricotians did not submit, so the Plutonians slaughtered them all, burned the city, raised a trophy, and sailed home.
Then the reader arrives at Chapter X, “The Corcyrean Revolution,” to be startled awake on reading this:
The Corcyrean revolution began with the return of the prisoners taken in the sea-fights off Epidamnus . . . the accused, rendered desperate by law . . . banded together armed with daggers, and suddenly bursting into the senate killed Peithias and sixty others, senators and private persons . . .
After a day’s interval hostilities recommenced, victory remaining with the commons, [over the oligarchs] who had the advantage in numbers and position, the women also valiantly assisting them, pelting with tiles from the houses, and supporting the mêlée with a fortitude beyond their sex. Towards dusk, the oligarchs in full rout, fearing that the victorious commons might assault and carry the arsenal and put them to the sword, fired the houses round the market-place and the lodging-houses . . .
The Corcyreans, made aware of the approach of the Athenian fleet . . . slew such of their enemies as they laid hands on . . . Next they went to the sanctuary of Hera and persuaded about fifty men to take their trial, and condemned them all to death. The mass of the suppliants who had refused to do so, on seeing what was taking place, slew each other there in the consecrated ground; while some hanged themselves upon the trees, and others destroyed themselves as they were severally able. . . the Corcyreans were engaged in butchering those of their fellow-citizens whom they regarded as their enemies: and although the crime imputed was that of attempting to put down the democracy, some were slain also for private hatred, others by their debtors because of the monies owed to them. Death thus raged in every shape; and as usually happens at such times, there was no length to which violence did not go; sons were killed by their fathers, and suppliants dragged from the altar or slain upon it . . .
Now Thucydides moves from the particular to the general.
. . . struggles being everywhere made by the popular chiefs to bring in the Athenians, and by the oligarchs to introduce the Lacedaemonians. . . The sufferings which revolution entailed upon the cities were many and terrible, such as have occurred and always will occur, as long as the nature of mankind remains the same;
Too right, says the 20th-century reader, who now wonders if she is actually reading a news story:
. . . Words had to change their ordinary meaning and to take that which was now given them. Reckless audacity came to be considered the courage of a loyal ally; prudent hesitation, specious cowardice; moderation was held to be a cloak for unmanliness; ability to see all sides of a question inaptness to act on any. Frantic violence became the attribute of manliness; cautious plotting, a justifiable means of self-defense. The advocate of extreme measures was always trustworthy; his opponent a man to be suspected. To succeed in a plot was to have a shrewd head, to divine a plot still shrewder; but to try to provide against having to do either was to break up your party and to be afraid of your adversaries.
Stephen Vincent Benét’s 1937 short story The Blood of the Martyrs concerns an apolitical scientific researcher and professor, imprisoned in “the castle” by the soldiers of “The Dictator.” The Professor dispassionately assesses the near-certainty of his execution. He does not betray his students, who apparently have been self-organizing into a force in opposition to The Dictator – but he does not articulate to himself why he does not betray them despite beatings and condemnation to death.
Only at the very end, when The Dictator personally demands, in exchange for his life on terms, that he lie about science – do State Science, speak in scientific language in service to the State – does the Professor make his refusal. He does not spell it out for himself in his mind; he simply recalls the faces of his students who came to him over the years for one thing: truth, and the pursuit of truth.
He paused again, seeing their faces before him. . . From all over the world they had come – they wore cheap overcoats, they were hungry for knowledge, they ate the bad, starchy food of the poor restaurants . . . a few were promising – all must be given the truth. It did not matter if they died, but they must be given the truth. Otherwise there could be no continuity and no science.
. . . not to tell lies to young men on one’s own subject. . . .They had given him their terrible confidence – not for love or kindness, but because they had found him honest. It was too late to change.
The Professor will not lie for the State, even to save his life. His death is sordid only externally; internally his integrity gives him calm. He dies thinking of the young men to whom he has not lied.
So, some will lie, and participate in lies, in an attempt to evade murder, or merely to advance themselves. Others will refuse to lie, because to lie would be a painful betrayal of their highest value. For Benét’s character, it is not a matter of anguished calculation or conjecture. It just is so. That is the source of his personal courage: faithfulness to what is so.
The post I reverted to draft was called “A Quick Note”. It was a site administration post, written by me. I didn’t think it was fair to leave it up, because the person it was about cannot respond to it. The policy here has been to be transparent about administrative actions taken; hence this post.
(Note: This novel is the first of an envisioned four-volume series titled Aristillus. It and the second book, Causes of Separation, published in May, 2018, together tell a single story which reaches a decisive moment just as the first book ends. Unusually, this will be a review of both novels, taken as a whole. If you like this kind of story at all, there’s no way you won’t immediately plunge into the second book after setting down the first.)
Around the year 2050, collectivists were firmly in power everywhere on Earth. Nations were subordinated to the United Nations, whose force of Peace Keepers (PKs) had absorbed all but elite special forces, and were known for being simultaneously brutal, corrupt, and incompetent. (Due to the equality laws, military units had to contain a quota of “Alternatively Abled Soldiers” whom other troops had to wheel into combat.) The United States still existed as a country but, after decades of rule by the two factions of the Democrat party (Populist and Internationalist), was mired in stagnation, bureaucracy, and crumbling infrastructure, and on the verge of bankruptcy. The U.S. President, Themba Johnson, a former talk show host who combined cluelessness, a volatile temper, and vulpine cunning when it came to manipulating public opinion, is confronted with all of these problems and is looking for a masterstroke to carry him past the next election.
Around 2050 the collectivists entered the inevitable end game to which their policies lead everywhere they are tried. With the Bureau of Sustainable Research (BuSuR) suppressing new technologies in every field, and the Construction Jobs Preservation Act and Bureau of Industrial Planning banning anything which might increase productivity, a final grasp to loot the remaining seed corn resulted in the CEO Trials, aimed at the few remaining successful companies, with expropriation of their assets and imprisonment of their leaders. CEO Mike Martin manages to escape from prison and link up with renegade physicist Ponnala (“Ponzie”) Srinivas, inventor of an anti-gravity drive he doesn’t want the slavers to control. Mike buys a rustbucket oceangoing cargo ship, equips it with the drive, an airtight compartment, and life support, and flees Earth with a cargo of tunnel boring machines and water to exile on the Moon, in the crater Aristillus in Mare Imbrium on the lunar near side where, fortuitously, the impact of a metal-rich asteroid millions of years ago enriched the sub-surface with metals rare in the Moon’s crust.
Let me say a few words about the anti-gravity drive, which is very unusual and original, and whose properties play a significant role in the story. The drive works by coupling to the gravitational field of a massive body and then pushing against it, expending energy as it rises and gains gravitational potential energy. Momentum is conserved, as an equal and opposite force is exerted on the massive body against which it is pushing. The force vector is always along the line connecting the centre of mass of the massive body and the drive unit, directed away from the centre of mass. The force is proportional to the strength of the gravitational field in which the drive is operating, and hence stronger when pushing against a body like Earth as opposed to a less massive one like the Moon. The drive’s force diminishes with distance from the massive body as its gravitational field falls off with the inverse square law, and hence the drive generates essentially no force when in empty space far from a gravitating body. When used to brake a descent toward a massive body, the drive converts gravitational potential energy into electricity like the regenerative braking system of an electric vehicle: energy which can be stored for use when later leaving the body.
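To make that proportionality concrete, here is a back-of-the-envelope sketch (my own illustration, not anything from the novel): since the drive’s thrust scales with the local gravitational field, the same unit pushing against the Moon generates only about a sixth of the force it would near Earth.

```python
# Back-of-the-envelope sketch: the drive's force is proportional to the
# local gravitational field, which falls off as the inverse square of
# distance from the body's centre.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def field_strength(mass_kg, r_m):
    """Gravitational field (m/s^2) at distance r from a body's centre."""
    return G * mass_kg / r_m**2

earth = field_strength(5.972e24, 6.371e6)   # ~9.8 m/s^2 at Earth's surface
moon = field_strength(7.342e22, 1.737e6)    # ~1.6 m/s^2 at the Moon's surface
print(f"Moon/Earth force ratio: {moon / earth:.2f}")
```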
Because the drive can only push outward radially, when used to, say, launch from the Earth to the Moon, it is much like Jules Verne’s giant cannon—the launch must occur at the latitude and longitude on Earth where the Moon will be directly overhead at the time the ship arrives at the Moon. In practice, the converted ships also carried auxiliary chemical rockets and reaction control thrusters for trajectory corrections and precision maneuvering which could not be accomplished with the anti-gravity drive.
By 2064, the lunar settlement, called Aristillus by its inhabitants, was thriving, with more than a hundred thousand residents, and growing at almost twenty percent a year. (Well, nobody knew for sure, because from the start the outlook shared by the settlers was aligned with Mike Martin’s anarcho-capitalist worldview. There was no government, no taxes, no ID cards, no business licenses, no regulations, no zoning [except covenants imposed by property owners on those who sub-leased property from them], no central bank, no paper money [an entrepreneur had found a vein of gold left by the ancient impactor and gone into business providing hard currency], no elections, no politicians, no forms to fill out, no police, and no army.) Some of these “features” of life on grey, regimented Earth were provided by private firms, while many of the others were found to be unnecessary altogether.
The community prospered as it grew. As in many frontier settlements, labour was chronically in short supply, and even augmented by robot rovers and machines (free of the yoke of BuSuR), there was work for anybody who wanted it and job offers awaiting new arrivals. A fleet of privately operated ships maintained a clandestine trade with Earth, bringing goods which couldn’t yet be produced on the Moon, atmosphere, water from the oceans (in converted tanker ships), and new immigrants who had sold their Earthly goods and quit the slave planet. Waves of immigrants from blood-soaked Nigeria and chaotic China established their own communities and neighbourhoods in the ever-growing network of tunnels beneath Aristillus.
The Moon has become a refuge not just for humans. When BuSuR put its boot on the neck of technology, it ordered the shutdown of a project to genetically “uplift” dogs to human intelligence and beyond, creating “Dogs” (the capital letter denoting the uplift), and the euthanasia of all existing Dogs. Many were killed, but John (we never learn his last name), a former U.S. Special Forces operator, manages to rescue a colony of Dogs from one of the labs before the killers arrive and escapes with them to Aristillus, where they have set up the Den and pursue their own priorities, including role-playing games, software development, and trading on the betting markets. Also rescued by John was Gamma, the first Artificial General Intelligence to be created, whose intelligence is above the human level but not (yet, anyway) runaway singularity-level transcendent. Gamma has established itself in its own facility in Sinus Lunicus on the other side of Mare Imbrium, and has little contact with the human or Dog settlers.
Inevitably, liberty produces prosperity, and prosperity eventually causes slavers to regard the free with envious eyes, and slowly and surely draw their plans against them.
This is the story of the first interplanetary conflict, and a rousing tale of liberty versus tyranny, frontier innovation against collectivised incompetence, and principles (there is even the intervention of a Vatican diplomat) confronting brutal expedience. There are delicious side-stories about the creation of fake news, scheming politicians, would-be politicians in a libertarian paradise, open source technology, treachery, redemption, and heroism. How do three distinct species (human, Dog, and AI) work together without a top-down structure or the subordination of one to another? Can the lunar colony protect itself without becoming what its settlers left Earth to escape?
Woven into the story is a look at how a libertarian society works (and sometimes doesn’t work) in practice. Aristillus is in no sense a utopia: it has plenty of rough edges and things to criticise. But people there are free, and they prefer it to the prison planet they escaped.
This is a wonderful, sprawling, action-packed story with interesting characters, complicated conflicts, and realistic treatment of what a small colony faces when confronted by a hostile planet of nine billion slaves. Think of this as Heinlein’s The Moon is a Harsh Mistress done better. There are generous tips of the hat to Heinlein and other science fiction in the book, but this is a very different story with an entirely different outcome, and truer to the principles of individualism and liberty. I devoured these books and give them my highest recommendation. The Powers of the Earth won the 2018 Prometheus Award for best libertarian science fiction novel.
Corcoran, Travis J. I. The Powers of the Earth. New Hampshire: Morlock Publishing, 2017. ISBN 978-1-9733-1114-0. Corcoran, Travis J. I. Causes of Separation. New Hampshire: Morlock Publishing, 2018. ISBN 978-1-9804-3744-4.
In the first half of the twentieth century Pierre Teilhard de Chardin developed the idea that the process of evolution which had produced complex life and eventually human intelligence on Earth was continuing and destined to eventually reach an Omega Point in which, just as individual neurons self-organise to produce the unified consciousness and intelligence of the human brain, eventually individual human minds would coalesce (he was thinking mostly of institutions and technology, not a mystical global mind) into what he called the noosphere—a sphere of unified thought surrounding the globe just like the atmosphere. Could this be possible? Might the Internet be the baby picture of the noosphere? And if a global mind was beginning to emerge, might we be able to detect it with the tools of science? That is the subject of this book about the Global Consciousness Project, which has now been operating for more than two decades, collecting an immense data set which has been, from inception, completely transparent and accessible to anyone inclined to analyse it in any way they can imagine. Written by the founder of the project and operator of the network over its entire history, the book presents the history, technical details, experimental design, formal results, exploratory investigations from the data set, and thoughts about what it all might mean.
Over millennia, many esoteric traditions have held that “all is one”—that all humans and, in some systems of belief, all living things or all of nature are connected in some way and can interact in ways other than physical (ultimately mediated by the electromagnetic force). A common aspect of these philosophies and religions is that individual consciousness is independent of the physical being and may in some way be part of a larger, shared consciousness which we may be able to access through techniques such as meditation and prayer. In this view, consciousness may be thought of as a kind of “field” with the brain acting as a receiver in the same sense that a radio is a receiver of structured information transmitted via the electromagnetic field. Belief in reincarnation, for example, is often based upon the view that death of the brain (the receiver) does not destroy the coherent information in the consciousness field which may later be instantiated in another living brain which may, under some circumstances, access memories and information from previous hosts.
Such beliefs have been common over much of human history and in a wide variety of very diverse cultures around the globe, but in recent centuries these beliefs have been displaced by the view of mechanistic, reductionist science, which argues that the brain is just a kind of (phenomenally complicated) biological computer and that consciousness can be thought of as an emergent phenomenon which arises when the brain computer’s software becomes sufficiently complex to be able to examine its own operation. From this perspective, consciousness is confined within the brain, cannot affect the outside world or the consciousness of others except by physical interactions initiated by motor neurons, and perceives the world only through sensory neurons. There is no “consciousness field”, and individual consciousness dies when the brain does.
But while this view is more in tune with the scientific outlook which spawned the technological revolution that has transformed the world and continues to accelerate, it has, so far, made essentially zero progress in understanding consciousness. Although we have built electronic computers which can perform mathematical calculations trillions of times faster than the human brain, and are on track to equal the storage capacity of that brain some time in the next decade or so, we still don’t have the slightest idea how to program a computer to be conscious: to be self-aware and act out of a sense of free will (if free will, however defined, actually exists). So, if we adopt a properly scientific and sceptical view, we must conclude that the jury is still out on the question of consciousness. If we don’t understand enough about it to program it into a computer, then we can’t be entirely confident that it is something we could program into a computer, or that it is just some kind of software running on our brain-computer.
It looks like humans are, dare I say, programmed to believe in consciousness as a force not confined to the brain. Many cultures have developed shamanism, religions, philosophies, and practices which presume the existence of the following kinds of what Dean Radin calls Real Magic, and which I quote from my review of his book with that title.
Force of will: mental influence on the physical world, traditionally associated with spell-casting and other forms of “mind over matter”.
Divination: perceiving objects or events distant in time and space, traditionally involving such practices as reading the Tarot or projecting consciousness to other places.
Theurgy: communicating with non-material consciousness: mediums channelling spirits or communicating with the dead, summoning demons.
Starting in the 19th century, a small number of scientists undertook to investigate whether these phenomena could possibly be real, whether they could be demonstrated under controlled conditions, and what mechanism might explain these kinds of links between consciousness and will and the physical world. In 1882 the Society for Psychical Research was founded in London and continues to operate today, publishing three journals. Psychic research, now more commonly called parapsychology, continues to investigate the interaction of consciousness with the outside world through (unspecified) means other than the known senses, usually in laboratory settings where great care is taken to ensure no conventional transfer of information occurs and with elaborate safeguards against fraud, either by experimenters or test subjects. For a recent review of the state of parapsychology research, I recommend Dean Radin’s excellent 2006 book, Entangled Minds.
Parapsychologists such as Radin argue that while phenomena such as telepathy, precognition, and psychokinesis are very weak effects, elusive, and impossible to produce reliably on demand, the statistical evidence for their existence from large numbers of laboratory experiments is overwhelming, with a vanishingly small probability that the observed results are due to chance. Indeed, the measured confidence levels and effect sizes of some categories of parapsychological experiments exceed those of medical clinical trials such as those which resulted in the recommendation of routine aspirin administration to reduce the risk of heart disease in older males.
For more than a quarter of a century, an important centre of parapsychology research was the Princeton Engineering Anomalies Research (PEAR) laboratory, established in 1979 by Princeton University’s Dean of Engineering, Robert G. Jahn. (The lab closed in 2007 with Prof. Jahn’s retirement, and has now been incorporated into the International Consciousness Research Laboratories, which is the publisher of the present book.) An important part of PEAR’s research was with electronic random event generators (REGs) connected to computers in experiments where a subject (or “operator”, in PEAR terminology) would try to influence the generator to produce an excess of one or zero bits. In a large series of experiments [PDF] run over a period of twelve years with multiple operators, it was reported that an influence in the direction of the operator’s intention was seen, with a probability of only around one in a trillion that the result was due to chance. The effect size was minuscule, with around one bit in ten thousand flipping in the direction of the operator’s stated goal.
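To get a feel for how such a tiny effect can nonetheless yield overwhelming statistics, here is a rough sketch (my own illustrative model, not PEAR’s actual analysis), treating the effect as a small shift eps in the probability of a one bit:

```python
import math

# Model the effect as a small shift eps in the probability of a one bit,
# and ask how many bits must be collected before the excess of ones
# reaches a given z-score against the fair-coin expectation.
def z_score(n_bits, eps):
    """Excess ones (n*eps) divided by the std. dev. of the ones count."""
    return (n_bits * eps) / (math.sqrt(n_bits) / 2)

def bits_needed(z_target, eps):
    """Invert z = 2*eps*sqrt(n) to find the required number of bits."""
    return (z_target / (2 * eps)) ** 2

# With eps = 5e-5 (about one bit in ten thousand), reaching z = 3 takes
# on the order of a billion bits:
print(f"{bits_needed(3, 5e-5):.0f}")
```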
If one operator can produce a tiny effect on the random data, what if many people were acting together, not necessarily with active intention, but with their consciousnesses focused on a single thing, for example at a sporting event, musical concert, or religious ceremony? The miniaturisation of electronics and computers eventually made it possible to build a portable REG and computer which could be taken into the field. This led to the FieldREG experiments in which this portable unit was taken to a variety of places and events to monitor its behaviour. The results were suggestive of an effect, but the data set was far too small to be conclusive.
In 1998, Roger D. Nelson, the author of this book, realised that the rapid development and worldwide deployment of the Internet made it possible to expand the FieldREG concept to a global scale. Random event generators based upon quantum effects (usually shot noise from tunnelling across a back-biased Zener diode, or thermal noise in a resistor) had been scaled down to small, inexpensive devices which could be attached to personal computers via an RS-232 serial port. With more and more people gaining access to the Internet (originally mostly via dial-up to commercial Internet Service Providers, then increasingly via persistent broadband connections such as ADSL service over telephone wires or a cable television connection), it might be possible to deploy a network of random event generators at locations all around the world, each of which would constantly collect timestamped data which would be transmitted to a central server, collected there, and made available to researchers for analysis by whatever means they chose to apply.
As Roger Nelson discussed the project with his son Greg (who would go on to be the principal software developer for the project), Greg suggested that what was proposed was essentially an electroencephalogram (EEG) for the hypothetical emerging global mind, an “ElectroGaiaGram” or EGG. Thus was born the “EGG Project” or, as it is now formally called, the Global Consciousness Project. Just as the many probes of an EEG provide a (crude) view into the operation of a single brain, perhaps the wide-flung, always-on network of REGs would pick up evidence of coherence when a large number of the world’s minds were focused on a single event or idea. Once the EGG project was named, terminology followed naturally: the individual hosts running the random event generators would be “eggs” and the central data archiving server the “basket”.
In April 1998, Roger Nelson released the original proposal for the project and shortly thereafter Greg Nelson began development of the egg and basket software. I became involved in the project in mid-summer 1998 and contributed code to the egg and basket software, principally to allow it to be portable to other variants of Unix systems (it was originally developed on Linux) and machines with different byte order than the Intel processors on which it ran, and also to reduce the resource requirements on the egg host, making it easier to run on a non-dedicated machine. I also contributed programs for the basket server to assemble daily data summaries from the raw data collected by the basket and to produce a real-time network status report. Evolved versions of these programs remain in use today, more than two decades later. On August 2nd, 1998, I began to run the second egg in the network, originally on a Sun workstation running Solaris; this was the first non-Linux, non-Intel, big-endian egg host in the network. A few days later, I brought up the fourth egg, running on a Sun server in the Hall of the Servers one floor below the second egg; this used a different kind of REG, but was otherwise identical. Both of these eggs have been in continuous operation from 1998 to the present (albeit with brief outages due to power failures, machine crashes, and other assorted disasters over the years), and have migrated from machine to machine over time. The second egg is now connected to a Raspberry Pi running Linux, while the fourth is now hosted on a Dell Intel-based server also running Linux, which was the first egg host to run on a 64-bit machine in native mode.
Here is precisely how the network measures deviation from the expectation for genuinely random data. The egg hosts all run a Network Time Protocol (NTP) client to provide accurate synchronisation with Internet time server hosts which are ultimately synchronised to atomic clocks or GPS. At the start of every second a total of 200 bits are read from the random event generator. Since all the existing generators provide eight bits of random data transmitted as bytes on a 9600 baud serial port, this involves waiting until the start of the second, reading 25 bytes from the serial port (first flushing any potentially buffered data), then breaking the eight bits out of each byte of data. A precision timing loop guarantees that the sampling starts at the beginning of the second-long interval to the accuracy of the computer’s clock.
This process produces 200 random bits. These bits, one or zero, are summed to produce a “sample” which counts the number of one bits for that second. This sample is stored in a buffer on the egg host, along with a timestamp (in Unix time() format), which indicates when it was taken.
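The per-second sampling step can be sketched as follows. This is an illustrative sketch, not the project's actual egg code; `read_reg_bytes` is a hypothetical stand-in for the serial-port read.

```python
import random

def sample_from_bytes(raw):
    """Count the one bits in 25 bytes (200 bits) to form one second's sample."""
    assert len(raw) == 25
    return sum(bin(b).count("1") for b in raw)

# Hypothetical stand-in for reading 25 bytes from the REG's serial port at
# the start of a second; a real egg reads the hardware device instead.
def read_reg_bytes(rng=random):
    return bytes(rng.getrandbits(8) for _ in range(25))

sample = sample_from_bytes(read_reg_bytes())  # between 0 and 200, expectation 100
```

The sample and its Unix-time timestamp are what the egg buffers for later transmission.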
Buffers of completed samples are archived in files on the egg host’s file system. Periodically, the basket host will contact the egg host over the Internet and request any samples collected after the last packet it received from the egg host. The egg will then transmit any newer buffers it has filled to the basket. All communications are performed over the stateless UDP Internet protocol, and the design of the basket request and egg reply protocol is robust against loss of packets or packets being received out of order.
(This data transfer protocol may seem odd, but recall that the network was designed more than twenty years ago when many people, especially those outside large universities and companies, had dial-up Internet access. The architecture would allow a dial-up egg to collect data continuously and then, when it happened to be connected to the Internet, respond to a poll from the basket and transmit its accumulated data during the time it was connected. It also makes the network immune to random outages in Internet connectivity. Over two decades of operation, we have had exactly zero problems with Internet outages causing loss of data.)
When a buffer from an egg host is received by the basket, it is stored in a database directory for that egg. The buffer contains a time stamp identifying the second at which each sample within it was collected. All times are stored in Universal Time (UTC), so no correction for time zones or summer and winter time is required.
This is the entire collection process of the network. The basket host, which was originally located at Princeton University and now is on a server at global-mind.org, only stores buffers in the database. Buffers, once stored, are never modified by any other program. Bad data, usually long strings of zeroes or ones produced when a hardware random event generator fails electrically, are identified by a “sanity check” program and then manually added to a “rotten egg” database which causes these sequences to be ignored by analysis programs. The random event generators are very simple and rarely fail, so this is a very unusual circumstance.
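A failed generator's signature, long strings of all-zero or all-one seconds, is easy to detect mechanically. Here is a minimal sketch of such a check; the function name and the run-length threshold are illustrative assumptions, not the project's actual sanity-check program.

```python
def pinned_runs(samples, run_limit=5):
    """Return (start, length) for runs of consecutive samples pinned at 0 or
    200, i.e. seconds that were all zeroes or all ones, the signature of an
    electrically failed random event generator."""
    runs, start = [], None
    for i, s in enumerate(samples):
        if s in (0, 200):
            if start is None:
                start = i
        else:
            if start is not None and i - start >= run_limit:
                runs.append((start, i - start))
            start = None
    if start is not None and len(samples) - start >= run_limit:
        runs.append((start, len(samples) - start))
    return runs
```

Flagged ranges would then be reviewed manually before being added to the "rotten egg" database.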
The raw database format is difficult for analysis programs to process, so every day an automated program (which I wrote) is run which reads the basket database, extracts every sample collected for the previous 24 hour period (or any desired 24 hour window in the history of the project), and creates a day summary file with a record for every second in the day with a column for the samples from each egg which reported that day. Missing data (eggs which did not report for that second) is indicated by a blank in that column. The data are encoded in CSV format which is easy to load into a spreadsheet or read with a program. Because some eggs may not report immediately due to Internet outages or other problems, the summary data report is re-generated two days later to capture late-arriving data. You can request custom data reports for your own analysis from the Custom Data Request page. If you are interested in doing your own exploratory analysis of the Global Consciousness Project data set, you may find my EGGSHELL C++ libraries useful.
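Reading such a day summary back into a program is straightforward. The miniature CSV below is illustrative only (a real file carries 86,400 data rows, one per second, and its exact column names may differ); blank cells mark missing data, as described above.

```python
import csv
import io

# Illustrative miniature day summary: one row per second, one column per
# reporting egg, with blank cells marking eggs that did not report.
day_summary = """\
time,egg2,egg4
994032000,103,97
994032001,,101
994032002,95,
"""

def load_samples(text):
    """Return {timestamp: {egg: sample}}, skipping blank (missing) cells."""
    out = {}
    for row in csv.DictReader(io.StringIO(text)):
        ts = int(row.pop("time"))
        out[ts] = {egg: int(v) for egg, v in row.items() if v != ""}
    return out

samples = load_samples(day_summary)
```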
The analysis performed by the Project proceeds from these summary files as follows.
First, we observe that each sample (xi) from egg i consists of 200 bits with an expected equal probability of being zero or one. Thus each sample has a mean expectation value (μ) of 100 and a standard deviation (σ) of 7.071 (which is just the square root of half the mean value in the case of events with probability 0.5).
Then, for each sample, we can compute its Z-score as Zi = (xi − μ) / σ. From the Z-score, it is possible to directly compute the probability that the observed deviation from the expected mean value (μ) was due to chance.
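In code, using only the standard library (the two-tailed chance probability comes from the complementary error function of the normal distribution):

```python
import math

MU = 100.0                     # expected count of one bits in 200 samples
SIGMA = math.sqrt(200 * 0.25)  # sqrt(n*p*q) = sqrt(50), about 7.071

def z_score(x):
    """Z-score of a single 200-bit sample."""
    return (x - MU) / SIGMA

def p_chance(z):
    """Two-tailed probability of a deviation at least |z| under pure chance."""
    return math.erfc(abs(z) / math.sqrt(2))
```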
It is now possible to compute a network-wide Z-score for all eggs reporting samples in that second using Stouffer's formula:

Z = (Z1 + Z2 + … + Zk) / √k

over all k eggs reporting. From this, one can compute the probability that the result from all k eggs reporting in that second was due to chance.
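A minimal implementation of the composite score; Stouffer's method divides the sum of k independent standard-normal Z-scores by √k, which yields another standard-normal variable.

```python
import math

def stouffer_z(zs):
    """Combine per-egg Z-scores for one second: Z = sum(Z_i) / sqrt(k)."""
    if not zs:
        raise ValueError("no eggs reported this second")
    return sum(zs) / math.sqrt(len(zs))
```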
Squaring this composite Z-score over all k eggs gives a chi-squared distributed value V = Z², which has one degree of freedom. These values may be summed, yielding a chi-squared distributed number with degrees of freedom equal to the number of values summed. From the chi-squared sum and number of degrees of freedom, the probability of the result over an entire period may be computed. This gives the probability that the deviation observed by all the eggs (the number of which may vary from second to second) over the selected window was due to chance. In most of the analyses of Global Consciousness Project data an analysis window of one second is used, which avoids the need for the chi-squared summing of Z-scores across multiple seconds.
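The windowed test can be sketched as follows. To stay within the standard library, the sketch uses the Wilson-Hilferty cube-root normal approximation for the chi-squared upper tail rather than an exact routine; that substitution is mine, not the project's, and is rough for very small degrees of freedom.

```python
import math

def chi2_sf(v, df):
    """Upper-tail chi-squared probability via the Wilson-Hilferty cube-root
    normal approximation (a stdlib-only stand-in for an exact routine;
    adequate for large df, rougher near df = 1)."""
    z = ((v / df) ** (1 / 3) - (1 - 2 / (9 * df))) / math.sqrt(2 / (9 * df))
    return 0.5 * math.erfc(z / math.sqrt(2))

def window_probability(composite_zs):
    """Sum squared composite Z-scores over a window; df = seconds summed."""
    v = sum(z * z for z in composite_zs)
    return chi2_sf(v, len(composite_zs))
```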
The most common way to visualise these data is a “cumulative deviation plot” in which the squared Z-scores are summed to show the cumulative deviation from chance expectation over time. These plots are usually accompanied by a curve which shows the boundary for a chance probability of 0.05, or one in twenty, which is often used as a criterion for significance. Here is such a plot for U.S. president Obama’s 2012 State of the Union address, an event of ephemeral significance which few people anticipated and even fewer remember.
What we see here is precisely what you’d expect for purely random data without any divergence from random expectation. The cumulative deviation wanders around the expectation value of zero in a “random walk” without any obvious trend and never approaches the threshold of significance. So do all of our plots look like this (which is what you’d expect)?
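The curve in such a plot can be computed as a running sum of (Z² − 1), so that pure chance wanders about a flat line at zero; subtracting 1, the expectation of Z², is my reading of the plot's convention and is noted here as an assumption.

```python
import random

def cumulative_deviation(zs):
    """Running sum of (Z^2 - 1); under pure chance the path stays near zero."""
    total, path = 0.0, []
    for z in zs:
        total += z * z - 1.0
        path.append(total)
    return path

# A purely random hour of per-second composite Z-scores for comparison.
rng = random.Random(42)
path = cumulative_deviation([rng.gauss(0.0, 1.0) for _ in range(3600)])
```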
Well, not exactly. Now let’s look at an event which was unexpected and garnered much more worldwide attention: the death of Muammar Gadaffi (or however you choose to spell it) on 2011-10-20.
Now we see the cumulative deviation taking off, blowing right through the criterion of significance, and ending twelve hours later with a Z-score of 2.38 and a probability of the result being due to chance of one in 111.
What’s going on here? How could an event which engages the minds of billions of slightly-evolved apes affect the output of random event generators driven by quantum processes believed to be inherently random? Hypotheses non fingo. All right, I’ll fingo just a little bit, suggesting that my crackpot theory of paranormal phenomena might be in play here. But the real test is not in potentially cherry-picked events such as I’ve shown you here, but the accumulation of evidence over almost two decades. Each event has been the subject of a formal prediction, recorded in a Hypothesis Registry before the data were examined. (Some of these events were predicted well in advance [for example, New Year’s Day celebrations or solar eclipses], while others could be defined only after the fact, such as terrorist attacks or earthquakes.)
The significance of the entire ensemble of tests can be computed from the 500 formal predictions in the Hypothesis Registry and the network results for the periods where a non-random effect was predicted. To compute this effect, we take the formal predictions and compute a cumulative Z-score across the events. Here’s what you get.
Now this is…interesting. Here, summing over 500 formal predictions, we have a Z-score of 7.31, which implies that the results observed were due to chance with a probability of less than one in a trillion. This is far beyond the criterion usually considered for a discovery in physics. And yet, what we have here is a tiny effect. But could it be expected in truly random data? To check this, we compare the results from the network for the events in the Hypothesis Registry with 500 simulated runs using data from a pseudorandom normal distribution.
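The pseudorandom comparison can be sketched as follows. The window length, egg count, seed, and function name here are arbitrary illustrative choices; this is a toy stand-in for the project's actual control runs, not its code.

```python
import math
import random

def simulated_event_z(rng, seconds=60, eggs=5):
    """Composite Z for one simulated event: Stouffer-combine pseudorandom
    per-egg Z-scores within each second, then combine across the window."""
    per_second = [
        sum(rng.gauss(0.0, 1.0) for _ in range(eggs)) / math.sqrt(eggs)
        for _ in range(seconds)
    ]
    return sum(per_second) / math.sqrt(seconds)

rng = random.Random(1)
events = [simulated_event_z(rng) for _ in range(500)]
overall_z = sum(events) / math.sqrt(len(events))  # stays near zero for noise
```

Unlike the real network's cumulative Z of 7.31 over 500 predictions, the simulated ensemble's overall Z remains an unremarkable standard-normal draw.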
Since the network has been up and running continually since 1998, it was in operation on September 11, 2001, when a mass casualty terrorist attack occurred in the United States. The formally recorded prediction for this event was an elevated network variance in the period starting 10 minutes before the first plane crashed into the World Trade Center and extending for over four hours afterward (from 08:35 through 12:45 Eastern Daylight Time). There were 37 eggs reporting that day (around half the size of the fully built-out network at its largest). Here is a chart of the cumulative deviation of chi-square for that period.
The final probability was 0.028, which is equivalent to an odds ratio of 35 to one against chance. This is not a particularly significant result, but it met the pre-specified criterion of significance of probability less than 0.05. An alternative way of looking at the data is to plot the cumulative Z-score, which shows both the direction of the deviations from expectation for randomness as well as their magnitude, and can serve as a measure of correlation among the eggs (which should not exist in genuinely random data). This and subsequent analyses did not contribute to the formal database of results from which the overall significance figures were calculated, but are rather exploratory analyses of the data to see if other interesting patterns might be present.
Had this form of analysis and time window been chosen a priori, it would have been calculated to have a chance probability of 0.000075, or less than one in ten thousand. Now let’s look at a week-long window of time between September 7 and 13. The time of the September 11 attacks is marked by the black box. We use the cumulative deviation of chi-square from the formal analysis and start the plot of the P=0.05 envelope at that time.
Another analysis looks at a 20 hour period centred on the attacks and smooths the Z-scores by averaging them within a one hour sliding window, then squares the average and converts to odds against chance.
Dean Radin performed an independent analysis of the day’s data binning Z-score data into five minute intervals over the period from September 6 to 13, then calculating the odds against the result being a random fluctuation. This is plotted on a logarithmic scale of odds against chance, with each 0 on the X axis denoting midnight of each day.
The following is the result when the actual GCP data from September 2001 is replaced with pseudorandom data for the same period.
So, what are we to make of all this? That depends upon what you, and I, and everybody else make of this large body of publicly-available, transparently-collected data assembled over more than twenty years from dozens of independently-operated sites all over the world. I don’t know about you, but I find it darned intriguing. Having been involved in the project since its very early days and seen all of the software used in data collection and archiving with my own eyes, I have complete confidence in the integrity of the data and the people involved with the project. The individual random event generators pass exhaustive randomness tests. When control runs are made by substituting data for the periods predicted in the formal tests with data collected at other randomly selected intervals from the actual physical network, the observed deviations from randomness go away, and the same happens when network data are replaced by computer-generated pseudorandom data. The statistics used in the formal analysis are all simple matters you’ll learn in an introductory stat class and are explained in my “Introduction to Probability and Statistics”.
If you’re interested in exploring further, Roger Nelson’s book is an excellent introduction to the rationale and history of the project, how it works, and a look at the principal results and what they might mean. There is also non-formal exploration of other possible effects, such as attenuation by distance, day and night sleep cycles, and effect sizes for different categories of events. There’s also quite a bit of New Age stuff which makes my engineer’s eyes glaze over, but it doesn’t detract from the rigorous information elsewhere.
The ultimate resource is the Global Consciousness Project’s sprawling and detailed Web site. Although well-designed, the site can be somewhat intimidating due to its sheer size. You can find historical documents, complete access to the full database, analyses of events, and even the complete source code for the egg and basket programs.
Someone with the avatar name “So I was bangen’ this Supreme Court Clerk from BKK, Thailand when all of a sudden…” was on the site. Clearly this person intended to spam the site, so I marked the account as spam. I will also delete the group this person made.
Let’s do a little journalism. A member proudly pointed to an article in a newspaper which cited his work. While this is undoubtedly a good thing for him, the newspaper, the article itself, the claims made therein, and the journalistic integrity of the author are all without merit.
We will look in turn at the paper, the author of the article, the article itself, the claims made therein, and then see where the heck that leaves us.
The Sunday Guardian is not connected to The Guardian
First off, the article is published in “The Sunday Guardian” of India, which has nothing to do with the more familiar UK-based “The Guardian”, which, despite its willingness to lie for communism, is a paragon of objective standards compared to the at-best blog style of The Sunday Guardian (TSG). Wikipedia:
The Sunday Guardian is an independent Sunday newspaper, founded by journalist M. J. Akbar, and currently owned by iTV Network. It was launched on 31 January 2010 from New Delhi and is printed in New Delhi, Mumbai and Chandigarh. The 40-page newspaper is divided into two sections of 20 pages each: The Sunday Guardian and Guardian 20. Together, they offer a mix of news, investigation, opinion, entertainment, lifestyle and issues of human interest.
TSG takes some care to be seen, at first glance, to be a weekly imprint of The Guardian. Here are the online logos of the two papers (I can’t believe I’m defending Grauniad):
While the typefaces are not identical, they’re pretty close, and importantly, the TSG typeface is not at all similar (besides being a serif face) to the typeface used by the print edition of TSG:
The news article is not a news article.
Despite being published in the news section, “https://www.sundayguardianlive.com/news/insights-secret-scientific-research-us”, this is not news. For one, there is no event which is reported upon. Actual news journalism is first and foremost the reporting of a current event. At best, this TSG piece is a magazine article. It would have to be a “fan” or proponent article based on the uncritical, supportive, cheerleading tone, and the last paragraph amounts to a call to action (it is specifically a call for “clarity”), which pops the whole thing right out of the journalistic realm of writing. To place it properly in a newspaper, it would belong in the opinion or editorial section (if the editor wishes to assume the stance taken in the piece), although only the last paragraph makes it explicitly a stance piece. While much of the rest of the article is written in a tone which merely fails to be objective, the final paragraph is explicitly a position.
The closest that the article gets to reporting a recent event is in paragraph eleven of fourteen, which begins:
In January of this year a Freedom Of Information Act request from Steven Aftergood, director of the Federation of American Scientists against Government Secrecy, led to the release by the DIA of the list of titles of above mentioned 38 government-funded research reports gathered by AATIP as part of the process to “read in” on a need-to-know basis, officials in the military and civilian administrations.
No links, citations, or documents are offered to support even this recent fact; in short, there’s no news.
Other articles by the author, also filed as news.
The author of the UFO article is “Come Carpentier de Gourdon”, whose other recent articles for TSG as “news” include:
a review of a book on yoga and international relations (which reads like a press release):
“Four appendices conclude the volume and the second consists of a very useful anthology of yoga texts through which the reader may pursue his quest and rise in his understanding of this metaphorical ocean of wisdom.”
a review of a book which “exposes negative stereotypes used to besmirch Hindu tradition”
An opinion piece titled “India’s future and Hindutva: Some thoughts on Mohan Bhagwat’s speeches about RSS”
TSG does have an Opinion section, which de Gourdon also populates, but the articles above were all filed under “News”.
The author himself and his strong interest in the claims made in the article.
My task here is not to educate you on de Gourdon as a man in full, so I leave much out. My task is to demonstrate that he has no objective stance from which to report upon the supposed nexus of the US government and space aliens. Here is his own description of his book “A SHINING CITY ON A HILL”:
This book is a biovel, a novelized biographic story which narrates the author’s visit to Colorado USA in the nineteen eighties and his investigations of the forces that shape, move and control American society. Arriving from India where he had spent eight years, the writer uncovered little known or hidden facets of America’s history and political system. He evokes the Eastern influences that have played a role, from the nation’s genesis within the British Empire to the vedantic inspiration of the New England Transcendentalists, from the theosophical and occultist connections of free masonic sects to the ‘Indic’ references in the writings of Herman Melville and other novelists. Those connections help explain the fascination of some leading scientists, statesmen and military commanders for Hindu and Buddhism mysticism and metaphysics, often related to the spreading use of psychotropic drugs and the rise of the New Age Movement, but the Conservative Puritanical reaction and the ruthless power of a mighty Deep State bore the seeds of events which shook America in this century, from the 911 terrorist attacks to the ensuing invasions and continuing wars in the Middle East and the election of Donald Trump. The book reveals that America’s long standing high level interest in esotericism came of age when some of its ruling elites came into contact with what they could only regard as the Supernatural. That is perhaps the greatest secret that has been kept until today and it may account in part for the current state of the USA.
So he is already a true believer, a member in good standing of the UFO cult. This matters because of the utter lack of support given to the claims made in his article on “Insights about secret scientific research in the US”. The insights are his. The claims are his.
Writing a supposed memoir or reflection as a novel is a dodge, like electing Al Franken to the Senate. You can always say “Just kidding!” This author can be placed under no reasonable demand to back up what he says about the US, US history, current affairs, or anything else in the book, because it’s a novel. But he reserves the right to wink and nod while declaring that it’s all true, except that he can’t say so for fear of being snuffed out by the UFO powers that be. The “ha-ha only serious” approach to writing leaves his narrative of his investigation into [whatever] looking like fiction in drag. Unconvincing, undesirable, and at any rate not going home with us.
Come Carpentier de Gourdon is a prolific writer, but none of it is journalism, which is unfortunate, because our Ratburger member has drawn attention with some pride to the fact that his own work is cited in the opinion piece by de Gourdon.
We will use that citation as our entry point to the article itself.
Ratburger member cited in UFO cultist’s opinion piece.
“Physicist Jack Sarfatti, formerly at San Diego State University, has gone on record to say he is doing research on the propulsion system of the “tictac” by studying “alien” recovered metamaterials in the custody of Dr Puthoff’s Earthtech. The existence of those materials of non-earthly origin has been officially confirmed.”
We may break this down into two declarative statements, one demonstrably true, and the other unsupported.
Claim 1) Physicist has gone on record. Quite true. Although I seem to recall that Sarfatti said he had not seen the stuff itself, but relied upon second-hand reports from a person who claimed to have seen the stuff, but no longer had it in his custody. If so, this is a shameful twisting of Dr. Sarfatti’s words, a cheap writer’s trick to turn rumors into “facts”.
Claim 2) Existence has been confirmed. Full of weasels. Note that this sentence can be true without any of the implied claims being proven. The existence of something has been confirmed — not the nature of that thing.
We may fairly ask if there is a difference between officially confirmed, and merely confirmed. What is the official nature of that confirmation? You would think that official confirmation would come with a link, or at least cite a source.
As well we ask if the confirmation extends to claims made about the nature of the materials. I’ll re-state, fairly I believe, the implication of these two sentences being taken together:
At the company EarthTech, Dr. Puthoff is in possession of exotic “meta-“materials.
These materials were recovered from at least one extraterrestrial alien craft.
Dr. Jack Sarfatti is studying those materials in order to learn about the propulsion system of such craft.
All of this is confirmed by officials.
Is that a fair re-statement? Then why are the claims not made in that fashion? Because it is difficult to play word games with simple declarative sentences. That difficulty becomes clearer when you take these four sentences as separate claims:
1) Is Puthoff in possession of these materials? The answer is going to be NO.
2) How do we know the materials were recovered from alien craft? There will be no evidence presented to support this.
3) Has Dr. Sarfatti seen these materials ever? Does he have the stuff now? Has he performed any experiments upon the materials? Will any samples be made available for possible replication of results? No, No, No, and No.
“Back in 2007, Senator Harry Reid of Nevada, in which Area 51 is located, and who then chaired the Senate Select Committee on Intelligence, set up a new study group with the support of fellow Senators, Inouye and Stevens, under the name of AATIP (Advanced Aerospace Threat Identification Program) at the suggestion of his friend, billionaire Robert Bigelow, chairman of Bigelow Aerospace, a contractor to NASA which conducted research on UFOs and collected substantial evidence of the extraterrestrial presence.
Senator Reid wished to gather information on the secret work being carried out outside the purview of Congressional authorities and got an appropriation of $22 million for a five-year budget. The investigations were entrusted to Bigelow’s aerospace research division and coordinated by Earthtech of Austin, Texas, an R&D centre in frontier areas of science headed by Dr Harold Puthoff, formerly at Stanford Research Institute. AATIP under the stewardship of high-ranking intelligence officer Luis Elizondo, commissioned a still unissued 490-page report and collected 38 classified papers from a number of universities and research centres reflecting some of the goals pursued at the behest of the DIA (as Defense Intelligence Research Documents or DIRD) and other military intelligence bodies.
AATIP remained unknown to the public until both the New York Times and the Washington Post on 16 December 2017 published articles about it with the mandatory sceptical rumblings. They both, however, provided online links to a film taken in 2004 by Super Hornet jet pilots from the USS Nimitz, off the coast of Southern California, of a fleet of extremely fast flying objects, exhibiting performances far beyond the abilities of the most advanced aircraft, whose shapes suggested “tictacs” which became their moniker.
Physicist Jack Sarfatti, formerly at San Diego State University, has gone on record to say he is doing research on the propulsion system of the “tictac” by studying “alien” recovered metamaterials in the custody of Dr Puthoff’s Earthtech. The existence of those materials of non-earthly origin has been officially confirmed.“
As an inquisitive child, I remember asking my grandparents about their lives – what it was like when they were young, particularly before they emigrated from Ukraine/Poland to the US. All were Jews who fled ever-present danger; unlike rules for game animals, you see, it was always ‘open season’ on Jews back then (is it my imagination, or is that happening again?). My paternal grandmother, Lara, came here at a very young age with no memories of the old country. What she did have – and did not reveal until very near the end of her life – was the knowledge that her seven older brothers all had been murdered by Cossacks around the turn of the 20th century. As history unfolded, this could be classified as merely a warm-up for Babi Yar and who knows how many other unrecorded similar atrocities.
My paternal grandfather, Abraham (né Avram) told me how, as a child, he used to help his father deliver grain in burlap sacks to Kiev on a horse-drawn cart. Part of the payment they received for their farm produce consisted of the emptied burlap sacks in which grain had been delivered – from which his mother made clothing. I, from the comfort of America in the 1950’s, remember thinking how different my grandfather’s childhood world was from the one he presently inhabited (a nice apartment in Newark, New Jersey) as he told me this story. I remember imagining that he must have had to make remarkable adjustments to life which had changed so radically (even though much for the better in most ways). This insight into the course of my grandfather’s life was unusual for me, given what I now realize about my young self. It turned out to be a harbinger of the “adjustments” that were in store for me in the course of my own life…
I don’t know if it was peculiar to my particular psychic make-up or a distinguishing characteristic of my generation (I was born in 1944), but, looking back, I think I must have been jaded from birth. What I mean is that, having spent my formative years in the shadow of mushroom clouds (we regularly did nuclear blast “duck and cover” exercises in public grammar school) – so to speak – I felt immune to any sense of novelty, awe, or wonder. From my perspective (although I never could have articulated it back then), the technological advances and social reverses which regularly occurred did so simply as a matter of course, as if it were simply to be expected. My childhood attitude bordered on one of blasé entitlement, and this was markedly discordant with what I just described thinking about my grandfather. It doesn’t make sense and yet my lack of awe or wonder persisted through most of my adult life – until recently. I spent much of my life longing to be transported spiritually by some irresistible, wondrous, awe-inspiring force (say, like God revealing his existence to me), while remaining ever emotionally unmoved – as almost an outside observer in my own everyday life.
As I write, I am tending the excellent wood stove in my family room. Whenever the outside temperature is below 25 F or if it is cold, damp and rainy, I like to have a fire. It may be surprising to hear me say that this is one of the most satisfying and reassuring activities I have ever done. It warms me, profoundly, and more than in the thermodynamic sense; the acts of starting and maintaining the fire and feeling its warmth are deeply reassuring and connect me, in an abstract, yet palpable way, to my ancestors. I have a vivid early childhood memory of my maternal grandmother, Blanche, saying “I have to make heat,” and taking me down to the basement of the second-story walk-up apartment in which she lived with my grandfather. There, she carefully shoveled coal into the furnace from a small ready pile kept near the furnace door. Not a piece was wasted; even the dust was swept onto the shovel and fed to the fire.
From this memory, I can easily generate abstract images of my earlier and forever unknown ancestors – at various times and places over thousands of years – sitting near a fire to warm themselves in what must have been brief respites from hard, uncomfortable, uncertain lives. Inescapable is the realization that had a single member of the lines of humans who were my ancestors not survived to procreate, I would not exist. There is awe for me, today, in that thought. Ancillary to it is the realization that our present ability to record ourselves in durable media may forever change how we see ourselves in the stream of humanity. Our lineal descendants will be able to see and hear us, their progenitors, on HD videos going back scores, hundreds, even thousands of years. Those sufficiently interested may well suffer from ancestor overload. I realized this when I came across an old photo of my grandfather Abe who, at about age 20, remarkably resembled my 3 year-old son.
Nowadays, as my physical wellbeing declines (no identified terminal illness, yet…) in ways I can no longer deny – as I approach my end – I find nostalgia, awe and a great sense of mystery in many things I used to take for granted. The technological progress I previously considered merely due, ordinary or mundane, I have come to see as near-miraculous. The advances my grandfather witnessed – from horse cart to jet airliner – pale compared to mine, from vacuum tube to printed circuit. And from horse-cart to printed circuits (each of whose count of transistor gates keeps increasing) or, for that matter, from the invention of the wheel to artificial intelligence – has happened in mere seconds, as measured in ticks of the big sidereal clock in the sky. This realization alone, this hint of the possibility of a glimpse of the immanence of God in the mind of humanity, outweighs a lifetime of blasé shrugs.
Nowadays, one inescapable mystery of life strikes most every time I think back on the course of my own youth. It is often a lancinating psychic pain: how have I gone from then ’til now so incredibly quickly? It feels like it was only yesterday that I was a promising, innocent young boy with much to anticipate. How I long to go back and whisper some of life’s present wisdom in that scared little boy’s ear. But I am already an old man who developed few of his talents – and even those not much to my satisfaction – with nothing left to look forward to; all life’s milestones, so exciting in the anticipation, are past but one. Where has my life gone…? Where have those lively, innocent, hopeful faces of my childhood companions gone? Many are already dead and this somehow just doesn’t compute. I shrink from the thought. I look at the cast bios while watching old movies on TCM. Those magnificent men, those beautiful women, so vibrant, so full of life…they are all dead and gone, every one. My life now often consists of merely running out the clock with some lingering vague hope for finding meaning, recognition, affirmation or love (of a more abiding kind than the lust I once confused with love). Does the fact that I have lived make any difference, I ask myself as I count down my life one 90-day prescription refill (really seven bottles of them simultaneously every three months) at a time? Will I outlive the next refill or will it survive me? When my light goes out, all existence – as far as I am concerned – will cease. That, too, is a mystery – one I find presently painful, awe-inspiring, incomprehensible.
Several moments of nostalgia recently rose to near-physical pain. Tiny excerpts from the tale of my life. Something led me to look on Google Earth at a sleep-away camp I attended for eight weeks during each of the summers of 1953, ’54, and ’55 – ages 8–10. It was a scary experience to go from northern NJ to NYC, then by train to Great Barrington MA, then by bus to Monterey. It was an all-day trip, whose separation anxiety and motion sickness led me to vomit all over the bus floor just as we arrived at Camp Monterey for boys and its sister Camp Owaissa for girls. The stench of this episode lingered and did not improve my popularity. Such gustatory ejaculations were emblematic, it seems, of my childhood fears. On the first day of kindergarten, my mother walked me the half-mile to school and left me with my class. The separation anxiety was so intense I vomited there, all over the bright, shiny yellow enamel table I shared with the other children seated around it. Typical of my upbringing, I recall only being given a wash and clean clothes, but not the love and reassurance I needed to assuage the fear of being separated from my mother.
But I digress; the point I wanted to make is that, notwithstanding the anxiety of getting to camp and staying there in real time, the memories of having been there leave me with truly heart-rending nostalgia and awe. On Google Maps, there remains not the slightest trace of the camp’s rather extensive physical plant: not the bungalow in which we slept, nor the dining hall, stables, baseball fields, shooting range or the pine grove, the site of bonfires and marshmallow immolations; nor the docks on the lake where I overcame many fears and learned to swim and water ski. The memory of water skiing, in turn, led me to recall my next-door neighbor, Larry, from Elizabeth NJ. Although a couple of years older, he was my best friend through most of my childhood. He was the water ski instructor at the camp (we first learned of it from him) and I lost contact with him when I left for college. As was my style, sadly I know now, my friends were then disposable. I rarely maintained contact with any in a given school or locale after either of us physically moved on. I remember my dad told me he had heard from Larry about 30 years ago and that Larry said he would be glad to hear from me. Alas, even then, I was busy with life and never bothered to reach out. I have searched in vain for him on the web recently. How I would love to recall with him the times at Camp Monterey and the endless stickball games we played in our neighborhood! I can only see this trait in myself as a defect of character which is self-punishing. The longing and nostalgia engendered by this self-inflicted loss indeed constitute a form of ‘just-so’ retribution.
A similar longing took hold of me as I watched the movie My Fair Lady recently. My mother was an amateur ‘Borscht Belt’ performer along the lines of Ethel Merman. She had some talent, a powerful voice and a dramatic persona. She played leading roles in numerous local amateur and semi-pro musical productions, especially Gypsy. Anyway, my parents had a collection of 33 rpm renditions of all the popular Broadway musicals, including My Fair Lady. I even went with friends to see a few of these productions live. Damn Yankees, seen at age 12, introduced me to the more-than-real, ‘larger than life’ effect produced by such artful shows. A rare moment of that elusive awe I longed for. I later took the great unrequited love of my childhood, Karen, to see Camelot with Richard Burton, Julie Andrews and Robert Goulet – the original Broadway cast (it did not make her love me). Again, a brief moment of inspiration, whose impact was lost on me since the entire enterprise was in service of somehow causing Karen to ‘love’ me; a manipulation, in short, which blunted the effect that experience might otherwise have had.
As with the nostalgia for Camp Monterey and the longing to recapture the magic of life which eluded me in real time, I had the same longing while watching My Fair Lady. I think it is a mix of many emotional threads. These musicals display a completely different culture from the present. The sheer innocence and socially docile, amenable humanity of the time seems quaint, almost child-like compared to today. The simple decency in those musical dramas, and the nobility of even the fallen characters, spoke of an untainted human condition – flawed yet hopeful – today warped beyond recognition. So I think we have lost something culturally in what is today required of entertainment. More personally, these musicals connect me to whatever small part of my childhood was not fraught with fear of not measuring up to my parents’ expectations; that gnawing sense that I was somehow responsible for their happiness and for finding fulfillment by my performance on the stage of their lives. What a burden! And these lilting refrains provided a temporary balm, easing the chronic aches in the reality of family dysfunction. A glimpse of life and love as it could be. And seeing My Fair Lady today stands, magically, for the proposition that some values are, indeed, timeless – regardless of what our fake culture now insists.
I try to avoid the intense self-conscious moments of existential fear rooted in my childhood as best I can. I have never required the admonition ‘memento mori.’ To the contrary, what I need is a time out from recalling my mortality. The best antidote I have found is keeping busy. That is precisely why I failed at retirement 10 years ago. After a two-month trial off work, I received an offer of part-time anesthesiology practice and grabbed it. I continue to do that on average about 6 days each month. As well, I am starting a second part-time job as a physician in a drug & alcohol rehab, where I will help detox addicts four weekend days per month (so as not to conflict with my anesthesia work). I do this simply because when I work, I become the task of doing my job, and this affords me precious moments of ‘memento vitae’ – remembering life, unencumbered for a time by the intense consciousness of self (self-centeredness in recovery-speak) which is toxic in the large doses I seem unable to escape when I am not working, reading or engrossed in a good movie (of which there are few made nowadays).
Speaking of addiction and recovery, somebody once told me she thought I was a ‘meanings’ junkie. Maybe that is part of my problem.
For those who are new, one of the first things I wrote on the site was a draft of principles. Here are the principles.
PRINCIPLES OF AGREEMENT
This site is for entertainment. Posts need to add to, not subtract from, the site. Fun is important.
No attacking the person. Attacking words is okay. People get to openly disagree and have vigorous debates over ideas, not personalities.
Keep it clean. This site should be family and children friendly. (The web has plenty of places to express non-family thoughts.) This does not mean that we don’t handle, in John Walker’s words, “gnarly subjects,” but we do it without the coarseness of most of the Internet.
Keep it legit. Respect copyright for images and text. Give attribution when needed.
This is a conservative site. If you can’t respect those principles, please find a site more conducive to your ideas.
I don’t want to bore you so I will just focus on Number One.
This site is for entertainment. Posts need to add to, not subtract from, the site. Fun is important.
I want this site to be enjoyable and beneficial. It shouldn’t be work. Anyone who takes part needs to add to, and not subtract from, the site. To understand what I mean, please read the following from an earlier post, Disagreement Versus Disruption.
In a pluralist society, disagreement is mother’s milk. People need to keep true to their beliefs. I am a firm believer that there is more unity in diversity* than in uniformity. It works because, hopefully, there is enough commonality on the basics that people can live in peace. What a society or community can’t handle is willful disruptions. I would put Antifa in this group.
There seems to be a trend where people want to shut down the discussion rather than have it. Many tactics are used. One is the extreme pejorative: “You’re a racist.” “You hate poor people.” “You have no principles.” The next is to heckle. This person wants to stop a good conversation from happening. They don’t rent the room or gather the people, but they figure they have a right to disrupt the people who do. Third, there are the thought police. One can’t even bring up the subject. Certain words need to be bleeped out. “You are part of the patriarchy.” “Meritocracy is code for keeping people down.” “We know what you really mean.” “One giant leap for [Redacted].”
Disagreement, by contrast, is how we learn. We challenge, listen and respond respectfully. We might go to the lecture and ask questions, but we don’t try to shut it down or picket it. We gather people who are like-minded and form groups who agree to disagree. Many of the best movements started this way: they disagreed with the status quo and persuaded people that there was a better way.
Good disagreement does not demonize the other person. It does not heckle or shout down the other argument. It does not limit the discussion to only a certain set of words. It should be a marketplace where ideas can be exchanged and positions can change.
* Don’t confuse this with the current use of the word, where “diversity” is wielded against opposing views.
I knew a family with four children. At different times in my life I had a relationship with each of them. Many years ago I asked the youngest to do something for me. He said, “Yes.” Then a short while after, he asked me if he could get out of the commitment to go fishing with his dad. I said, “Yes.” Then it happened.
It was a small float plane that crashed shortly after takeoff. The day of the crash a friend called me to say the news bulletin with no names was about our friend and his dad. I told him, “We don’t know so there is no need to worry till we do.” It was a short time after the names came out. It was them.
I still feel guilty for letting my friend out of his commitment. Things probably would not have changed, but who knows, they might have. It would be nice to think that a tragedy could have been avoided by my actions.
When it comes to death, I am of two minds. One mind wants to cherish that life and remember the good I was blessed with. The other mind feels the sting of losing. Depending on the closeness of the person, that can be debilitating. That sting can last a long time. Sometimes the two minds mix. What I learned from my experience – or, better, am still learning – is that the two things are not a right and a wrong. Both can be valid in the right time and place. It was okay for people to cry and feel the sting, just as it was okay for people to rejoice that “death is swallowed up in victory”. (It was a religious family.) It was wrong to think each side could understand the other in the moment.
The family offered some comfort to the grieving friends at the time of the funeral. I heard later that the shock set in while few people were around.
It does get me that one of the final things the young friend who died said to me was a flippant joke I had told his sister. It would have been nice to be remembered for more than a gag. The joke was basically, “I plan to be evil; that is where the money is.”
Fifty years ago today, on March 2nd, 1969, the first prototype Concorde left the ground for the first time in Toulouse, France. Pilot André Turcat and his flight test crew of four put the new airliner through a modest set of maneuvers to test its handling and controllability, leaving the landing gear down through the entire flight (this was often the case for early test flights at the time). After a brief flight of just 28 minutes, cut short due to deteriorating weather conditions, Concorde 001 landed normally. On April 9th, 1969, the British-assembled Concorde 002 made its first flight from Filton, England to RAF Fairford to begin its tests. Both aircraft would participate in an intense test and envelope expansion programme, achieving supersonic speed on October 1st, 1969, with subsequent flights testing higher speeds up to the operational cruise speed of Mach 2.02 (around 2,154 kilometres per hour [the speed of sound depends upon altitude, barometric pressure, and temperature; if a speed is defined by Mach number the air speed will vary]). Here is a short contemporary report on the Concorde’s maiden flight.
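The parenthetical about Mach number is worth a worked example. The speed of sound in air scales with the square root of absolute temperature, so converting a Mach number to an air speed requires knowing the temperature at altitude. Here is a minimal Python sketch, assuming the ICAO standard-atmosphere value of 216.65 K (the standard stratosphere is isothermal from 11 km to 20 km, spanning Concorde’s cruise altitude):

```python
import math

GAMMA = 1.4     # ratio of specific heats for air
R_AIR = 287.05  # specific gas constant for dry air, J/(kg·K)

def speed_of_sound(temp_k):
    """Speed of sound in m/s in an ideal gas at temperature temp_k (kelvin)."""
    return math.sqrt(GAMMA * R_AIR * temp_k)

def mach_to_kmh(mach, temp_k):
    """True airspeed in km/h for a given Mach number and air temperature."""
    return mach * speed_of_sound(temp_k) * 3.6

# Standard stratospheric temperature (216.65 K, i.e. -56.5 °C):
print(round(mach_to_kmh(2.02, 216.65)))  # ≈ 2146 km/h
# The same Mach number at standard sea-level temperature (288.15 K):
print(round(mach_to_kmh(2.02, 288.15)))  # ≈ 2475 km/h
```

The stratospheric figure lands close to the quoted 2,154 km/h, while the identical Mach number at sea-level temperature would correspond to nearly 2,500 km/h, which is exactly the point of the parenthetical: a speed defined by Mach number varies with the state of the atmosphere.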
The advent of supersonic transports (SSTs) such as the Concorde promised a new era in civil aviation. Just as the first jet transports such as the Boeing 707 and Douglas DC-8 had almost doubled the speed of their piston-engined predecessors, the Concorde was more than twice as fast as existing jet airliners. Concorde would reduce the flight time between London and New York or Washington from more than six hours to a little more than three hours, and would make long-haul flights far more endurable for passengers. While supersonic airliners were expected to be expensive and fuel-thirsty, the increased speed would allow airline companies to conduct twice as many flights per day with each aircraft, which would compensate for the higher cost. The prospects looked bright for Concorde: at the time of the first flight, 74 had been optioned by major airlines around the world, including U.S. carriers Pan Am, United Airlines, Continental Airlines, Braniff, American Airlines, and TWA. Eventually, more than 100 non-binding orders were received.
With no competitor project underway in the West, it appeared that Concorde would have the SST market segment to itself for some time. The European aircraft industry seemed poised to reclaim the technological lead from U.S. airframers.
Concorde was not the first supersonic transport to fly. On the last day of 1968, the first Tupolev Tu-144 took flight from Zhukovsky airport near Moscow, and went on to beat Concorde to the milestones of first supersonic flight and first flight at Mach 2. The result of a crash programme and an intense campaign of industrial espionage aimed at the Concorde, the Tu-144 was superficially similar but far more crude in design and execution. It lacked Concorde’s elegantly curved wing, which meant its performance at low speeds was poor and it consequently landed substantially faster—it was the only airliner to routinely use a drag parachute when landing. Concorde could “supercruise”—although it used its afterburners when taking off and to accelerate through Mach 1, once supersonic it cruised without afterburner. The Tu-144, however, required the afterburner throughout the supersonic phase of flight, which resulted in terrible fuel economy and reduced range. The aircraft were very unreliable; in a short operational history of just 102 flights (only 55 with passengers), they experienced more than 226 failures, eighty of them in flight. The cabin was so noisy that passengers could not easily converse and often had to pass written notes. A total of 16 Tu-144s were produced, two of which crashed, including one highly embarrassing crash at the 1973 Paris Air Show. The last commercial passenger flight was on June 1st, 1978, and the programme was cancelled by the Soviet government in 1983. The existing aircraft were subsequently used as flying laboratories including, in the 1990s, by NASA.
Here is a documentary about the Tu-144.
When the Concorde project was announced in November 1962, pressure grew in the United States to respond in some way. Interestingly, this pressure did not come from the aircraft manufacturers, all of which had done their own in-house studies of the technology and potential markets and concluded the opportunity for a successful product was marginal. U.S. airlines, in particular Pan Am and its vocal CEO Juan Trippe, indicated that they would buy Concorde if no U.S. alternative were available, and the Federal Aviation Administration lobbied for government support of a U.S. SST programme. On June 5th, 1963, U.S. president John Kennedy announced a National Supersonic Transport programme, where cab drivers and hairdressers in the U.S. would be taxed to support the development of a technology which they could not afford to use. This is “progressive” government.
Originally, work focussed on an airliner designed for domestic routes, as it was believed it was too late to catch Concorde in the international market, but before long the project was re-scoped to build a “Concorde killer” which would be bigger (around 250 passengers to Concorde’s 120) and faster (around Mach 3—50% faster than Concorde). This would dramatically complicate the design. Up to around Mach 2 conventional aluminium construction can be used, but the heating at Mach 3 essentially requires titanium skin and external structure, which is much more expensive and difficult to fabricate. Engine and inlet design become more difficult, and trying to provide both the required range and high speed and acceptable takeoff and landing speeds to operate from existing airports was an enormous challenge.
A number of U.S. aircraft manufacturers bellied up to the government money trough, but it quickly came down to a competition between Boeing, who eventually named their entry the Boeing 2707, and Lockheed, who proposed the L-2000. The L-2000 was essentially a super Concorde—bigger, faster, but pretty much the same shape and technology. It was considered the low-risk choice. Boeing’s entry was—something else again. The original 2707 had a “swing wing” like the F-111, which would extend for take-off and landing and fold back to the tail to reconfigure in flight as a delta wing for high speed operation. It was a wide body configuration with seven-abreast seating in economy. And it would fly at Mach 2.7, thanks to its titanium main structure.
On the first day of 1967, Boeing’s design was chosen. Hey, it was the Sixties—go for it! There was only one slight problem with the design: it was absolutely impossible to build. That swing-wing, fabricated out of recalcitrant-to-machine titanium, which had to work at temperatures between −60 and 300° C, was hideously complicated and heavy, weighing more than two tonnes for the pivot assembly alone. In the end, they couldn’t make it work, and in October, 1968 the swing-wing was abandoned in favour of a design reminiscent of Lockheed’s entry in the original competition. All of this led to a multi-year slip in schedule and cost overruns and, in March, 1971, the U.S. congress, over the opposition of the Nixon administration, pulled the plug on the project which, with the loss of taxpayer subsidies, was immediately terminated.
Here is a documentary about the Boeing 2707 débâcle:
By the time Concorde was ready to enter commercial service in the mid-1970s, the economic and political environment had dramatically changed from the time of the first flight. The Soaring Sixties had given way to the Souring Seventies, and the dramatic increase in oil prices in the aftermath of the 1973 OPEC oil embargo (crude oil prices quadrupled between October 1973 and March 1974) made airlines acutely aware of fuel costs and loath to add a plane as thirsty as Concorde to their fleet. Further, the advent of the 747 jumbo jet had created a mass market for air travel with low ticket prices, and the market was increasingly driven by price, not speed. The non-binding order book for Concorde just evaporated, mostly before the end of 1973, leaving only the British and French flag carriers as customers. Only twenty Concordes were built, with just 14 entering service: seven each for BOAC/British Airways and Air France. These carriers would continue to operate Concorde as a super-premium service (when I flew a British Airways Concorde from Washington to London in January 1991 the ticket price was 40% higher than first class on a 747) until 2003 when both airlines retired the type. Concorde suffered only one crash in its operational history, Air France Flight 4590 in July 2000, but the aging aircraft were becoming increasingly difficult and expensive to maintain, and Airbus, who had taken over maintenance for the fleet, announced the end of maintenance support. Here is a cockpit view of the takeoff and landing of a Concorde flight from London to New York.
This is a view from the passenger cabin of one of the last Concorde flights in 2003.
Here is a BBC documentary about Concorde.
As we’re now only ten months from the start of the Roaring Twenties, we might ask whether civil supersonic flight was a fantasy from a lost age of optimism – when we all assumed we’d be going to the Moon on holiday – or simply ahead of its time. It has long been assumed that fuel cost, the inability to fly supersonic over land due to prohibition of sonic booms, airport noise restrictions, and environmental considerations (supersonic airliners fly higher than subsonic jets, where their emissions can contribute to ozone depletion) made a successor to Concorde infeasible. However, slow progress has been made on all of these fronts and there are interesting things going on which may bear fruit in the coming decade.
One of the major problems with supersonic flight is sonic boom. An object (whether an aircraft, rifle bullet, or super-hero) travelling faster than sound creates shock waves which, if they reach the ground, produce a loud double boom like thunder but sharper and higher pitched which, in extreme cases, can rattle items on shelves or break window glass. Here in Switzerland, the boys and their F-18 toys sometimes go supersonic and we experience the phenomenon. You’ll notice it when it happens. This is rare, but just imagine a hundred supersonic airliners overflying your location every day—no. A NASA project, being built by Lockheed Martin, called the X-59A QueSST (Quiet Supersonic Transport), will explore “low boom” technology. By carefully shaping the airframe, the idea is that the usual boom can be shaped into a “thump”, reducing the noise from the bone-shaking 109 dB of Concorde to a mild 75 dB when operating at Mach 1.42 at an altitude of 16.8 km. If the design works as intended, the plan is to conduct overflight tests of communities in the U.S. to measure popular perception of the noise level. First flight is currently planned for late 2021 or early 2022. Because of the extreme pointed nose on the plane (needed to shape the boom), the pilot has no direct view ahead. There will be a virtual reality system to provide a synthetic view from cameras mounted on the forward fuselage.
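Because decibels are logarithmic, the reduction from 109 dB to 75 dB is larger than the raw numbers suggest. A back-of-the-envelope sketch (the “+10 dB sounds roughly twice as loud” rule is a rough psychoacoustic convention, not an exact law):

```python
# Comparing Concorde's ~109 dB boom with the X-59's target ~75 dB "thump".
delta_db = 109 - 75

power_ratio = 10 ** (delta_db / 10)    # ratio of acoustic power (10 dB = 10x)
loudness_ratio = 2 ** (delta_db / 10)  # rough rule: +10 dB ≈ "twice as loud"

print(f"{power_ratio:.0f}x less acoustic power")   # 2512x
print(f"{loudness_ratio:.0f}x quieter perceived")  # 11x
```

In other words, the target “thump” carries on the order of a thousandth of the boom’s acoustic power, and would be perceived as roughly a tenth as loud.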
This is a short film about the X-59A.
Independent of low-boom design, there is the phenomenon of “Mach cutoff”. As I mentioned above, the speed of sound depends upon air pressure and temperature and, in many cases, this creates a situation in which there is an altitude above which the sonic boom created by an aircraft is refracted back upward before reaching the ground. This is very similar to the way submarines exploit thermal gradients in the ocean to hide from sonar detection by adversaries. We’ve now gotten good enough at monitoring atmospheric conditions, based both upon on-board instrumentation and uplinked data from meteorological instruments, that an aircraft can predict the Mach cutoff ahead of it and, when it’s sufficiently high and strong, fly supersonic over land. In essence, the trick is that the plane is flying faster than sound at its altitude but slower than the speed of sound in the atmosphere near the surface, so the boom never gets there. It’s estimated that many cross-country flights in the U.S. could operate at Mach 1.2 above the Mach cutoff and reduce travel time by 50% with no sonic boom on the ground. This would require regulatory approval but, since it would create no additional noise, the only reason to withhold it is inertia and Green Luddite instincts.
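The arithmetic behind Mach cutoff can be sketched crudely. Ignoring winds and the details of ray refraction, the boom bends away from the ground roughly when the aircraft’s true airspeed is below the speed of sound in the surface air, i.e. when the flight Mach number is below the ratio of the surface and flight-level sound speeds. A simplified Python illustration (the temperatures chosen here are assumptions for the example: a standard isothermal stratosphere over a hot 35 °C surface layer):

```python
import math

GAMMA, R_AIR = 1.4, 287.05  # dry-air constants

def speed_of_sound(temp_k):
    """Speed of sound in m/s at temperature temp_k (kelvin)."""
    return math.sqrt(GAMMA * R_AIR * temp_k)

def cutoff_mach(surface_temp_k, flight_temp_k):
    """Simplest no-wind Mach-cutoff estimate: the boom is refracted away
    from the ground when flight Mach < a(surface) / a(flight altitude)."""
    return speed_of_sound(surface_temp_k) / speed_of_sound(flight_temp_k)

# Hot surface layer (35 °C = 308.15 K) under a standard stratosphere (216.65 K):
m_cut = cutoff_mach(308.15, 216.65)
print(f"cutoff Mach ≈ {m_cut:.2f}")  # ≈ 1.19
```

Which is why figures around Mach 1.2 keep coming up for boomless overland cruise: the warmer the surface air relative to cruise altitude, the higher the cutoff Mach. Real predictions must also account for winds aloft, which shift the cutoff substantially.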
Founded in 2014, Boom Technology is developing a supersonic airliner called the Overture with a goal of flight at Mach 2.2 for 55 passengers with a range of 8300 km, scheduled for introduction in the middle of the 2020s. Current work is focussed on the “XB-1 Baby Boom”, a one-third scale flying demonstrator intended to prove the technologies. It is expected to enter flight test later this year. The company has raised US$ 151 million in venture capital so far, including US$ 10 million from Japan Airlines. Unlike some other supersonic ventures, they have not compromised on speed: the joke at the company is that their Wi-Fi password is “mach2.2ordie”. They do not depend on low-boom or Mach cutoff—they claim the “business case closes” purely for supersonic over-water flight.
Here is a short video about Boom Technology.
Aerion Technologies has been working on the concept of a supersonic business jet for years. The current concept, the Aerion AS2, is a 12-passenger business jet which will cruise at Mach 1.4 over water and exploit the Mach cutoff, or cruise at Mach 0.95, over land to cut an hour off a typical coast-to-coast flight in the U.S. It is designed to meet all airport noise standards for new aircraft. The current development schedule aims at first customer deliveries in 2026. With three engines, it will be able to make long over-water flights without the costly and fussy ETOPS certification of airliners, which many business jet operators are not willing to obtain. In early February 2019, Boeing announced it had made a “significant investment” in Aerion and concluded an agreement to “provide engineering, manufacturing and flight test resources, as well as strategic vertical content, to bring Aerion’s AS2 supersonic business jet to market.”
This is a stylish but not very informative video from Aerion about their plans for the product. I’m not so sure about some of the claims for end-to-end time reductions given the need to refuel due to limited range.
Spike Aerospace is developing the Spike S-512, an 18 passenger business jet designed with low boom technology and intended to operate, pending regulatory approval, at Mach 1.6 over land. It is intended for first flight in 2025.
Their promotion seems to be all hype and style, with few details. Here you go.
Back to the Future?
Will airline passengers fly faster than sound in the future, as I did in 1991? Dunno. We’ve been getting dumber, and hence less able to create and maintain advanced technologies.
Still, I am hopeful. We’ve gotten a lot better at computer modelling of transonic and supersonic fluid flows, which means we can design craft with less costly wind tunnel and flight testing. We’re richer than we were in the 1960s, and understand that there’s a trade-off between time and money. Maybe before the end of the Roaring Twenties they’ll be saying, “Mach 2—that’s for grandpa. Let’s go for Mach 3!” Or, perhaps, they’ll be digging for grubs with blunt sticks among the wreckage of wind turbines and solar farms. I’d put either at about equal probability.
Peking duck (北京烤鸭) is a classic mainstay of Chinese cuisine. It is often a special treat on the menu of Chinese restaurants, requiring diners to order in advance, with servings for multiple people. There’s a reason for this: it’s a major production to prepare and serve. The classic recipe takes three days: the first to remove the neck bones and knot the neck, paint the skin with honey and soy sauce, and hang the bird to dry; the second to blow up the skin like a balloon to separate it from the meat, then blanch in boiling water; and the third to roast the whole duck in a wood-fired oven. As I recall, I’ve had properly prepared Peking duck only once in my life, when a bunch of programmers at the place I worked in the 1970s arranged a Chinese banquet at a restaurant in Berkeley, California, but long before and after that I’ve made this recipe or variants, which I find excellent, if not authentic, and a tiny fraction of the work. You can look at this as a special treat, but making it couldn’t be easier.
For those of you not living in the beautiful Old Dominion, Virginia, the Mother of Presidents, the whirlwind of activity might be a little confusing. For those of us from here, it’s been a hoot watching the Democrat party go full-on ouroboros.
First, some Virginia jargon: we don’t refer to our “state”, but rather to our “Commonwealth”. We know it makes us sound highfalutin, but you wouldn’t have a nation without Washington, Jefferson, Madison, Monroe, Patrick Henry, etc., so we’re entitled. Our legislature is called the General Assembly, and it’s bicameral, comprising the House of Delegates and the Senate of Virginia. The order of succession is Governor, Lt Governor, Attorney General, Speaker of the House of Delegates, then whomever the House of Delegates appoints.
Let’s go through a timeline of the last week, but first a quick detour. A little over a year ago, I wrote a few posts here about a hotly contested House of Delegates election. That one race determined who was in charge in the House, and it was decided for the Republican by drawing a name out of a bowl. Because of that, Kirk Cox (R) is Speaker of the House of Delegates. Keep this tucked away in your head because we’ll come back to it.
Last week, Kathy Tran put forward a bill in committee that would have allowed abortion up until the moment of birth. In the following days, Governor Ralph Northam defended it, going so far as to describe scenarios in which the child would actually be delivered, and then it would be decided whether or not to kill the baby.
Sickened by Northam’s pro-infanticide comments, one of his former classmates notified news outlets of Northam’s medical school yearbook, whose personal page for Northam included a photo of a man in blackface standing with a man in KKK garb. Considering all of the other photos on the page are of Northam, it is presumably him as one of the two in the photo. His VMI yearbook surfaced that evening, listing his nickname as “Coonman”. It was then recalled that he had twice refused to shake EW Jackson’s hand (his black GOP opponent) during a televised debate six years ago when Northam ran for Lt Gov. He also excluded Justin Fairfax (his black Lt Gov running mate) from a campaign flier, while including Mark Herring (his white Atty Gen running mate), at the request of a union sponsoring the flier.
Friday evening, Northam apologized for having been in the photo. Saturday he decided that he wasn’t in the photo. Inexplicably, his defense was not that he had never been in blackface. Rather, his defense was when he had been in blackface, it was at a different party. He claims he had dressed up as Michael Jackson (which if he had waited a few years wouldn’t have required blackface, ironically). He was asked by a reporter if he could moonwalk and nearly did so until his wife stepped in and prevented him from becoming the meme of the century.
As people begin to talk resignation, Lt Gov Justin Fairfax’ name begins to be discussed. Seeing him get national attention, a woman accuses him of sexual assault. It turns out this isn’t a new allegation, but it was buried before because he’s a black Democrat and only white Republicans get articles on them in the Washington Post based on accusations. Fairfax at first denies it, then the next day he changes his story and says it was consensual. Then he goes full paranoid and accuses Northam’s camp of leaking the story to prevent him from succeeding Northam. He backs off this claim and instead blames the mayor of Richmond who is seen as a rival to Fairfax for the 2021 Governor’s race. He has not yet blamed Trump or Russians.
With both top spots in jeopardy, Attorney General Mark Herring calls on Northam to resign…and then comes out today and admits he, too, dressed up as a black man while in college. Now, when I hear “blackface” I think specifically of minstrel show caricatures with literal black faces and painted on white/red lips. However, since the Left is calling the shots now, I’ll be generous and say Herring also wore blackface even though technically he dressed as a rapper and wore brown makeup, not the stereotypical blackface garb. Ironically, Shaun King (aka Talcum X) is mortified at all of the guys in blackface because they’ve put more effort into it than he’s put into the last few years pretending to be a black man without bothering with makeup.
Should the Governor, Lt Governor, and Atty General all resign, that would leave Speaker Cox as Governor. Personally, I think Northam may be stubborn enough to not resign, and there appears to be little that could be done to make him since he’s done nothing illegal or impeachable in office. If anything, I could see Herring resigning, Northam appointing a Democrat replacement, and then resigning to ensure a Democrat remains Governor. I don’t know how that would fly with the GOP controlling both Houses of the General Assembly. We’d probably be looking at even more chaos and politicking. But it’s conceivable that an election decided by drawing a name from a bowl, coupled with Democrat hubris in pushing for unrestricted abortion and infanticide, could result in the Democrat leadership being wiped out and a Republican being installed as Governor. God moves in mysterious ways.
During the Great Depression, the Empire State Building was built, from the beginning of foundation excavation to official opening, in 410 days (less than 14 months). After the destruction of the World Trade Center in New York on September 11, 2001, design and construction of its replacement, the new One World Trade Center was completed on November 3, 2014, 4801 days (160 months) later.
In the 1960s, from U.S. president Kennedy’s proposal of a manned lunar mission to the landing of Apollo 11 on the Moon, 2978 days (almost 100 months) elapsed. In January 2004, U.S. president Bush announced the “Vision for Space Exploration”, aimed at a human return to the lunar surface by 2020. After a comical series of studies, revisions, cancellations, de-scopings, redesigns, schedule slips, and cost overruns, its successor now plans to launch a lunar flyby mission (not even a lunar orbit like Apollo 8) in June 2022, 224 months later. A lunar landing is planned for no sooner than 2028, almost 300 months after the “vision”, and almost nobody believes that date (the landing craft design has not yet begun, and there is no funding for it in the budget).
Wherever you look, it often seems like the world is getting dumber: junk science; universities corrupted by bogus “studies” departments; politicians peddling discredited nostrums a moment’s critical thinking reveals to be folly; an economy built upon an ever-increasing tower of debt that nobody really believes will ever be paid off; and a dearth of major, genuine innovations (as opposed to the incremental refinement of existing technologies that has driven the computing, communications, and information technology industries) in every field: science, technology, public policy, and the arts. What if it really is?
That is the thesis explored by this insightful book, which is packed with enough “hate facts” to detonate the head of any bien pensant academic or politician. I define a “hate fact” as something which is indisputably true and well-documented by evidence in the literature, which has not been contradicted, but the citation of which is considered “hateful”, can unleash outrage mobs upon anyone so foolish as to utter it in public, and can be a career-limiting move for those employed in Social Justice Warrior-converged organisations. (An example of a hate fact, unrelated to the topic of this book, is the FBI violent crime statistics broken down by the race of the criminal and victim. Nobody disputes the accuracy of this information or the methodology by which it is collected, but woe betide anyone so foolish as to cite the data or draw the obvious conclusions from it.)
In April 2004 I made my own foray into the question of declining intelligence in “Global IQ: 1950–2050” in which I combined estimates of the mean IQ of countries with census data and forecasts of population growth to estimate global mean IQ for a century starting at 1950. Assuming the mean IQ of countries remains constant (which is optimistic, since part of the population growth in high IQ countries with low fertility rates is due to migration from countries with lower IQ), I found that global mean IQ, which was 91.64 for a population of 2.55 billion in 1950, declined to 89.20 for the 6.07 billion alive in 2000, and was expected to fall to 86.32 for the 9.06 billion population forecast for 2050. This is mostly due to the explosive population growth forecast for Sub-Saharan Africa, where many of the populations with low IQ reside.
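The global mean in the study described above is simply a population-weighted average of national means. A minimal sketch of the computation, using made-up country figures rather than the actual dataset from “Global IQ: 1950–2050”:

```python
# Population-weighted mean IQ: sum(pop_i * iq_i) / sum(pop_i).
# The (population, mean IQ) pairs below are hypothetical placeholders,
# not the estimates actually used in "Global IQ: 1950-2050".
countries = [
    (1300, 105),  # population in millions, estimated national mean IQ
    (330, 98),
    (1100, 82),
    (210, 87),
]

total_population = sum(pop for pop, _ in countries)
global_mean_iq = sum(pop * iq for pop, iq in countries) / total_population
print(f"Global mean IQ: {global_mean_iq:.2f}")
```

As population weight shifts toward the lower-IQ rows, the weighted mean falls even though every national mean is held constant, which is precisely the mechanism behind the projected decline from 91.64 to 86.32.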
This is a particularly dismaying prospect, because there is no evidence for sustained consensual self-government in nations with a mean IQ less than 90.
But while I was examining global trends assuming national IQ remains constant, in the present book the authors explore the provocative question of whether the population of today’s developed nations is becoming dumber due to the inexorable action of natural selection on whatever genes determine intelligence. The argument is relatively simple, but based upon a number of pillars, each of which is a “hate fact”, although non-controversial among those who study these matters in detail.
1. There is a factor, “general intelligence” or g, which measures the ability to solve a wide variety of mental problems, and this factor, measured by IQ tests, is largely stable across an individual’s life.
2. Intelligence, as measured by IQ tests, is, like height, substantially heritable. The heritability of IQ is estimated at around 80%, which means that about 80% of the variation in IQ among individuals in a population is attributable to genetic differences, with the remaining 20% due to environment and other factors.
3. IQ correlates positively with factors contributing to success in society. The correlation with performance in education is 0.7, with highest educational level completed 0.5, and with salary 0.3.
4. In Europe, between 1400 and around 1850, the wealthier half of the population had more children who survived to adulthood than the poorer half.
5. Because IQ correlates with social success, that portion of the population which was more intelligent produced more offspring.
6. Just as selective breeding of animals by choosing those with a desired trait for mating changes the breed, this differential fertility resulted in a population whose average IQ increased (slowly) from generation to generation over this half-millennium.
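The selection mechanism described above is what quantitative genetics calls the breeder’s equation: the response per generation R equals heritability h² times the selection differential S (how far the mean of those who reproduce lies above the population mean). A sketch with illustrative numbers; the selection differential and generation length here are assumptions for demonstration, not estimates from the book:

```python
# Breeder's equation: response per generation R = h^2 * S.
h2 = 0.8   # heritability of IQ (the figure cited in the argument above)
s = 0.5    # ASSUMED selection differential: reproducing parents average
           # 0.5 IQ points above the population mean

gain_per_generation = h2 * s

years = 1850 - 1400
generation_length = 30                 # assumed years per generation
generations = years / generation_length

total_gain = gain_per_generation * generations
print(f"{gain_per_generation:.2f} IQ points per generation, "
      f"about {total_gain:.1f} points over {years} years")
```

Even a modest differential compounds: under these assumed numbers, 0.4 points per generation accumulates to roughly 6 points over the 450-year period, enough (per the genius-fraction arithmetic below the list) to change the supply of top-end intellects dramatically.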
The gradually rising IQ of the population resulted in a growing standard of living as knowledge and inventions accumulated due to the efforts of those with greater intelligence over time. In particular, even a relatively small increase in the mean IQ of a population makes an enormous difference in the tiny fraction of people with “genius level” IQ who are responsible for many of the significant breakthroughs in all forms of human intellectual endeavour. If we consider an IQ of 145 as genius level, in a population of a million with a mean IQ of 100, one in 741 people will have an IQ of 145 or above, so there will be around 1350 people with such an IQ. But if the population’s mean IQ is 95, just five points lower, only one in 2331 people will have a genius level IQ, and there will be just 429 potential geniuses in the population of a million. In a population of a million with a mean IQ of 90, there will be just 123 potential geniuses.
(Some technical details are in order. A high IQ [generally 125 or above] appears to be a necessary condition for genius-level achievement, but it is insufficient by itself. Those who produce feats of genius usually combine high intelligence with persistence, ambition, often a single-minded focus on a task, and usually require an environment which allows them to acquire the knowledge and intellectual tools required to apply their talent. But since a high IQ is a requirement, the mean IQ determines what fraction of the population are potential geniuses; other factors such as the society’s educational institutions, resources such as libraries, and wealth which allows some people to concentrate on intellectual endeavours instead of manual labour, contribute to how many actual works of genius will be produced. The mean IQ of most Western industrial nations is around 100, and the standard deviation of IQ is normalised to be 15. Using this information you can perform calculations such as those in the previous paragraph using Fourmilab’s z Score Calculator, as explained in my Introduction to Probability and Statistics.)
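The per-million figures above follow directly from the normal distribution’s upper tail: with IQ distributed as N(mean, 15), the fraction at or above 145 is P(Z ≥ (145 − mean)/15). A sketch using only the Python standard library (math.erfc gives the Gaussian tail without an external statistics package):

```python
import math

def geniuses_per_million(mean_iq, threshold=145.0, sd=15.0):
    """Expected number of people per million at or above `threshold` IQ."""
    z = (threshold - mean_iq) / sd
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z >= z) for a standard normal
    return 1e6 * tail

for mean in (100, 95, 90):
    print(f"mean IQ {mean}: about {geniuses_per_million(mean):.0f} per million")
```

This reproduces the figures quoted above: about 1350, 429, and 123 potential geniuses per million for mean IQs of 100, 95, and 90.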
Of the pillars of the argument listed above, items 1 through 3 are noncontroversial except by those who deny the existence of general intelligence entirely or the ability of IQ tests to measure it. The authors present the large body of highly persuasive evidence in favour of those items in a form accessible to the non-specialist. If you reject that evidence, then you needn’t consider the rest of the argument.
Item 4, the assertion that wealthier families had more children survive to adulthood, is substantiated by a variety of research, much of it done in England, where recorded wills and church records of baptisms and deaths provide centuries of demographic data. One study, for example, examining wills filed between 1585 and 1638 in Suffolk and Essex found that the richer half of estates (determined by the bequests in the wills) had almost twice as many children named in wills compared to the poorer half. An investigation of records in Norfolk covering the years 1500 to 1630 found an average of four children for middle class families as opposed to two for the lower class. Another, covering Saxony in Germany between 1547 and 1671, found the middle class had an average of 3.4 children who survived to become married, while the working class had just 1.6. This differential fertility seems, in conjunction with item 5, the known correlation between intelligence and social success, to make plausible that a process of selection for intelligence was going on, and probably had been for centuries. (Records are sparse before the 17th century, so detailed research for that period is difficult.)
Another form of selection got underway as the Middle Ages gave way to the early modern period around the year 1500 in Europe. While in medieval times criminals were rarely executed due to opposition by the Church, by the early modern era almost all felonies received the death penalty. This had the effect of “culling the herd” of its most violent members who, being predominantly young, male, and of low intelligence, would often be removed from the breeding population before fathering any children. To the extent that the propensity to violent crime is heritable (which seems plausible, as almost all human characteristics are heritable to one degree or another), this would have “domesticated” the European human population and contributed to the well-documented dramatic drop in the murder rate in this period. It would have also selected out those of low intelligence, who are disproportionately prone to violent crime. Further, in England, there was a provision called “Benefit of Clergy” under which those who could demonstrate literacy could escape the hangman. This was another selection for intelligence.
If intelligence was gradually increasing in Europe from the Middle Ages through the time of the Industrial Revolution, can we find evidence of this in history? Obviously, we don’t have IQ tests from that period, but there are other suggestive indications. Intelligent people have lower time preference: they are willing to defer immediate gratification for a reward in the future. The rate of interest on borrowed money is a measure of a society’s overall time preference. Data covering the period from 1150 through 1950 show that interest rates declined over the entire span, from over 10% in the year 1200 to around 5% in the 1800s. This is consistent with an increase in intelligence.
Literacy correlates with intelligence, and records from marriage registers and court documents show continually growing literacy from 1580 through 1920. In the latter part of this period, the introduction of government schools contributed to much of the increase, but in early years it may reflect growing intelligence.
A population with growing intelligence should produce more geniuses who make contributions which are recorded in history. In a 2005 study, American physicist Jonathan Huebner compiled a list of 8,583 significant events in the history of science and technology from the Stone Age through 2004. He found that, after adjusting for the total population of the time, the rate of innovation per capita had quadrupled between 1450 and 1870. Independently, Charles Murray’s 2003 book Human Accomplishment found that the rate of significant innovations, and the appearance of the figures who created them, increased from the Middle Ages through the 1870s.
The authors contend that a growing population with increasing mean intelligence eventually reached a critical mass which led to the industrial revolution, due to a sufficiently large number of genius intellects alive at the same time and an intelligent workforce who could perform the jobs needed to build and operate the new machines. This created unprecedented prosperity and dramatically increased the standard of living throughout the society.
And then an interesting thing happened. It’s called the “demographic transition”, and it’s been observed in country after country as it develops from a rural, agrarian economy to an urban, industrial society. Pre-industrial societies are characterised by a high birth rate, a high rate of infant and childhood mortality, and a stable or very slowly growing population. Families have many children in the hope of having a few survive to adulthood to care for them in old age and pass on their parents’ genes. It is in this phase that the intense selection pressure obtains: the better-off and presumably more intelligent parents will have more children survive to adulthood.
Once industrialisation begins, it is usually accompanied by public health measures, better sanitation, improved access to medical care, and the introduction of innovations such as vaccination, antiseptics, and surgery with anæsthesia. This results in a dramatic fall in the mortality rate for the young, larger families, and an immediate bulge in the population. As social welfare benefits are extended to reach the poor through benefits from employers, charity, or government services, this occurs more broadly across social classes, reducing the disparity in family sizes among the rich and poor.
Eventually, parents begin to see the advantage of smaller families now that they can be confident their offspring have a high probability of surviving to adulthood. This is particularly the case for the better-off, as they realise their progeny will gain an advantage by splitting their inheritance fewer ways and in receiving the better education a family can afford for fewer children. This results in a decline in the birth rate, which eventually reaches the replacement rate (or below), where it comes into line with the death rate.
But what does this do to the selection for intelligence from which humans have been benefitting for centuries? It ends it, and eventually puts it into reverse. In country after country, the better educated and well-off (both correlates of intelligence) have fewer children than the less intelligent. This is easy to understand: in the prime child-bearing years they tend to be occupied with their education and starting a career. They marry later, have children (if at all) at an older age, and due to the female biological clock, have fewer kids even if they desire more. They also use contraception to plan their families and tend to defer having children until the “right time”, which sometimes never comes.
Meanwhile, the less intelligent have more children. In the modern welfare state they are often clients on the public dole; they have less impulse control and higher time preference, and when they use contraception they often do so improperly, resulting in unplanned pregnancies. They start earlier, don’t bother with getting married (as the stigma of single motherhood has largely been eliminated), and rely upon the state to feed, house, educate, and eventually imprison their progeny. This sad reality was hilariously mocked in the introduction to the 2006 film Idiocracy.
While this makes for a funny movie, if the population is really getting dumber, it will have profound implications for the future. There will not just be a falling general level of intelligence but far fewer of the genius-level intellects who drive innovation in science, the arts, and the economy. Further, societies which reach the point where this decline sets in well before others that have industrialised more recently will find themselves at a competitive disadvantage across the board. (U.S. and Europe, I’m talking about China, Korea, and [to a lesser extent] Japan.)
If you’ve followed the intelligence issue, about now you probably have steam coming out your ears waiting to ask, “But what about the Flynn effect?” IQ tests are usually “normed” to preserve the same mean and standard deviation (100 and 15 in the U.S. and Britain) over the years. James Flynn discovered that, in fact, measured by standardised tests which were not re-normed, measured IQ had rapidly increased in the 20th century in many countries around the world. The increases were sometimes breathtaking: on the standardised Raven’s Progressive Matrices test (a nonverbal test considered to have little cultural bias), the scores of British schoolchildren increased by 14 IQ points—almost a full standard deviation—between 1942 and 2008. In the U.S., IQ scores seemed to be rising by around three points per decade, which would imply that people a hundred years ago were two standard deviations more stupid than those today, at the threshold of retardation. The slightest grasp of history (which, sadly, many people today lack) will show how absurd such a supposition is.
What’s going on, then? The authors join James Flynn in concluding that what we’re seeing is an increase in the population’s proficiency in taking IQ tests, not an actual increase in general intelligence (g). Over time, children are exposed to more and more standardised tests and tasks which require the skills tested by IQ tests and, if practice doesn’t make perfect, it makes better, and with more exposure to media of all kinds, skills of memorisation, manipulation of symbols, and spatial perception will increase. These are correlates of g which IQ tests measure, but what we’re seeing may be specific skills which do not correlate with g itself. If this be the case, then eventually we should see the overall decline in general intelligence overtake the Flynn effect and result in a downturn in IQ scores. And this is precisely what appears to be happening.
Norway, Sweden, Denmark, and Finland have almost universal male military service and give conscripts a standardised IQ test when they report for training. This provides a large database, starting in 1950, of men in these countries, updated yearly. What is seen is an increase in IQ as expected from the Flynn effect from the start of the records in 1950 through 1997, when the scores topped out and began to decline. In Norway, the decline since 1997 was 0.38 points per decade, while in Denmark it was 2.7 points per decade. Similar declines have been seen in Britain, France, the Netherlands, and Australia. (Note that this decline may be due to causes other than decreasing intelligence of the original population. Immigration from lower-IQ countries will also contribute to decreases in the mean score of the cohorts tested. But the consequences for countries with falling IQ may be the same regardless of the cause.)
There are other correlates of general intelligence which have little of the cultural bias of which some accuse IQ tests. They are largely based upon the assumption that g is something akin to the CPU clock speed of a computer: the ability of the brain to perform basic tasks. These include simple reaction time (how quickly can you push a button, for example, when a light comes on), the ability to discriminate among similar colours, the use of uncommon words, and the ability to repeat a sequence of digits in reverse order. All of these measures (albeit often from very sparse data sets) are consistent with increasing general intelligence in Europe up to some time in the 19th century and a decline ever since.
If this is true, what does it mean for our civilisation? The authors contend that there is an inevitable cycle in the rise and fall of civilisations which has been seen many times in history. A society starts out with a low standard of living, high birth and death rates, and strong selection for intelligence. This increases the mean general intelligence of the population and, much faster, the fraction of genius level intellects. These contribute to a growth in the standard of living in the society, better conditions for the poor, and eventually a degree of prosperity which reduces the infant and childhood death rate. Eventually, the birth rate falls, starting with the more intelligent and better off portion of the population. The birth rate falls to or below replacement, with a higher fraction of births now from less intelligent parents. Mean IQ and the fraction of geniuses falls, the society falls into stagnation and decline, and usually ends up being conquered or supplanted by a younger civilisation still on the rising part of the intelligence curve. They argue that this pattern can be seen in the histories of Rome, Islamic civilisation, and classical China.
And for the West—are we doomed to idiocracy? Well, there may be some possible escapes or technological fixes. We may discover the collection of genes responsible for the hereditary transmission of intelligence and develop interventions to select for them in the population. (Think this crosses the “ick factor”? What parent would look askance at a pill which gave their child an IQ boost of 15 points? What government wouldn’t make these pills available to all their citizens purely on the basis of international competitiveness?) We may send some tiny fraction of our population to Mars, space habitats, or other challenging environments where they will be re-subjected to intense selection for intelligence and breed a successor society (doubtless very different from our own) which will start again at the beginning of the eternal cycle. We may have a religious revival (they happen when you least expect them), which puts an end to the cult of pessimism, decline, and death and restores belief in large families and, with it, the selection for intelligence. (Some may look at Joseph Smith as a prototype of this, but so far the impact of his religion has been on the margins outside areas where believers congregate.) Perhaps some of our increasingly sparse population of geniuses will figure out artificial general intelligence and our mind children will slip the surly bonds of biology and its tedious eternal return to stupidity. We might embrace the decline but vow to preserve everything we’ve learned as a bequest to our successors: stored in multiple locations in ways the next Enlightenment centuries hence can build upon, just as scholars in the Renaissance rediscovered the works of the ancient Greeks and Romans.
Or, maybe we won’t. In which case, “Winter has come and it’s only going to get colder. Wrap up warm.”
Dutton, Edward and Michael A. Woodley of Menie. At Our Wits’ End. Exeter, UK: Imprint Academic, 2018. ISBN 978-1-84540-985-2.
Here is a James Delingpole interview of the authors and discussion of the book.