Do Something!

This has become a familiar problem in politics. The moment any news item comes along, media hacks want to report on the reactions and consequences that will follow from it. Politicians become immediately anxious to influence any outcomes in a direction favorable to them. Pundits have to have something clever or ponderous to say about everything. And, if the item is international, all eyes are on the President to see how he will respond.

No, I am not about to talk about President Trump. I am thinking about President George Herbert Walker Bush. The memorial chatter in observance of his death has me irked. He is rightfully being remembered as the great statesman who presided over the collapse of the Soviet Union. But he was called “gracious” and “statesman” and “reserved” in ways deliberately contrived to contrast with President Trump, in hopes of making President Trump look bad by comparison. That was a very different time with very different circumstances, and the current motives for lauding President G.H.W. Bush are transparent.

Now there is a spate of “he was actually horrible” reaction pieces. Here is an example of the c**p I mean:

Especially compared with current occupant of the Oval Office, George H. W. Bush was a dignified figure who served his country steadfastly in war and peace. He represented a center-right, internationalist strain of Republicanism that barely exists today. But it doesn’t make sense to canonize him.

Steadfast

I remember the G.H.W. Bush Administration days. I recall all the histrionics over the open discontent from behind the Iron Curtain, which was building because Mikhail Gorbachev was holding steady on his course of “Glasnost,” translated as “Openness.” I also recall mass media giving voice to lots of chatterers urging President Bush to “do something!” They were counterposed with chatterers expressing high anxiety about things going badly wrong if he did the wrong something. A huge debate raged over just what America should do to take advantage of the situation.

President G.H.W.B. was the right man for this circumstance. He was a cold warrior, well-acquainted with all the players, including China. He was well known by most world leaders. Nobody thought he would act rashly, and he was circumspect. By “circumspect” I do not mean that he was risk-averse, but rather that he exhibited a pattern of careful and well-informed decision-making: “a careful consideration of all circumstances and a desire to avoid mistakes and bad consequences.”

There was a great storm of confusion and loud voices urging all sorts of action, and all sorts of fearmongering about what America might do to exploit the situation. President G.H.W.B. started calling heads of state, beginning with Gorbachev and proceeding all the way down the roster. This was something he had been doing all through 1989, since the unrest in the Eastern Bloc presaged the unrest in Russia. I recall some Important People predicting that, just as Louis XVI’s reforms let the pressure off just enough for the French cauldron to boil over in 1789, so Russia would explode in a massive bloodletting, and that the unrest would be a great opportunity for America to exploit.

Bush was calling to reassure everyone that America would not act rashly or aggressively, that, if assistance was wanted, it would help the Russian people back away from generations of Communist rule, and that he looked forward to embracing his Russian friends as free partners on the world stage. The central message was that President G.H.W. Bush intended to do nothing, and to allow the Russians and their client Soviet partners to readjust their internal affairs without American meddling. This had been his consistent message to Gorbachev all through 1989.

You are probably familiar with several aphorisms to the effect of, ‘when things are going in a good direction, don’t get in the way.’ But that is really hard to do: to refrain from acting when there is a daily clamor for you to act.

President Bush was faulted for inaction, called a “dumb lucky bystander,” and trashed daily in the press. He was even called “a wimp,” which is a stunning description of a man who earned the Distinguished Flying Cross while piloting 58 torpedo bomber missions from the deck of an aircraft carrier.

Media shenanigans

You have to remember that this was back in the days of Leftist mass media hegemony. There were only the three alphabet networks, Public Broadcasting, and a brand-new, little-known phenomenon: a cable channel dedicated to full-time news broadcasting. CNN was new, just one of 100 cable channels competing for attention in the relatively new world of cable. The only conservative publications were Commentary and National Review, both with minuscule circulation then as now. The editorial page of the Wall Street Journal was the only widespread source of conservative thought in America. The New York Times and the Washington Post dominated the national conversation, much more then than now.

There was little in the way of talk radio. Rush Limbaugh had started in 1988, the year after the repeal of the Fairness Doctrine, with 56 stations, and was carried on barely more than 100 stations at the time. (Otherwise, talk radio was mostly local, interviewing local commissioners and municipal department heads, or discussing health issues with a local doctor, or national shows that talked about music and Hollywood celebrities.) The repeal of the Fairness Doctrine had allowed the major media organizations to quit maintaining a balance of “liberal” (Leftist) and conservative commentary.

So media was a Leftist project, but most Americans did not recognize just how far left it had become. This allowed President Bush to be slandered daily with little in the way of countervailing defense. There were still a hundred or so conservative daily papers in those days, but they were overwhelmed by the flood of Leftist ink and Leftist broadcasting.

President G.H.W. Bush had his defenders, including the most stalwart Bob Dole. But on the national scene, Bush held steady, reassuring the world most evenings by telephone, encouraging everyone to simply let the Soviet system collapse without meddling and not to worry about all the fearmongering from the press. When the Berlin Wall fell, there was a new round of fearmongering about American meddling, which kept G.H.W.B. busy soothing political anxieties around the globe in very-early-morning and very-late-night phone calls.

By the time of the 1992 campaign, the Soviet Union had collapsed, with total casualties fewer than a hundred, not millions. Boris Yeltsin had been leading the new Russian Federation for a year, and the whole subject was considered “old news” as far as American mass media was concerned.

Steadfast

Saddam Hussein miscalculated badly. He mistook American inaction during a clear moment of opportunity for an indication of American weakness and of President Bush’s personal weakness. He invaded Kuwait, which he had coveted for a very, very long time. His minions treated Kuwaitis badly. News of atrocities, and refugees, slipped out of Kuwait.

The ruling family of Kuwait had an important personal friend in George H.W. Bush; they had been warm acquaintances for many years. He told Saddam Hussein to leave Kuwait or else. Then, to back up his threat, he requested that the Pentagon get to work in earnest on war plans.

But the situation was complicated by the fact that there was no longer a Soviet counterbalance to American power, and the Europeans were going nuts about American cowboys swaggering around the world and breaking things. There was all sorts of Congressional carping about how G.H.W. Bush would lead us into a disaster. So President Bush decided to act as the leader of a coalition, and patiently pulled one together. Several (seemingly) important international members decided to play coy, and so President Bush agreed not to invade Iraq, but instead to go only so far as was needed to liberate Kuwait.

He kept his promise. Even though it was clear to everyone that the best course would be to move on into Baghdad, President Bush kept his promise.

Steadfast

There were some really interesting economic changes in the 1980s. The one we best remember is the Reagan tax cut. But there was a stock market crash in 1987, and a slow-motion disaster among savings and loans that began with a high-profile bankruptcy in 1985, then progressed through a number of failures until Charles Keating’s Lincoln Savings went bankrupt in 1989. The deregulation of savings and loans under Carter ended with new regulations in 1990. That was accompanied by a budget deal in which the Democrats forced President Bush to accept modest tax increases, famously breaking his “no new taxes” pledge from the 1988 campaign. The American economy stalled into a mild recession in 1990.

President G.H.W. Bush huddled with his economic team, and decided that the fundamentals of the American economy were sound and that things were sorting out smoothly. He decided that the best approach was to do nothing and let the power of American enterprise work things out.

Of course, mass media was full of chattering about how awful the Bush economy was and how out of touch Bush was because he was spending all his time palling around with his international friends.

The campaign began in earnest in the fall of 1991, with America still technically in recession, but with signs of recovery all around. Democratic candidates all agreed that America needed a huge jobs bill to “put America back to work.” The most robust counterpoint to that was from Ross Perot, who was spending his own millions to put the budget deficit and the national debt into the national conversation.

The campaign of 1992 was really ugly if you were paying attention. Pat Buchanan ran a strong primary challenge in which he decried the national debt, trying to leverage some of Ross Perot’s work.

Bill Clinton soon emerged as the favorite Democrat. He had southern charm, a boyish grin, and spoke about being a “New Democrat.” His wife was a career lawyer lady popular among the Planned Parenthood wing. He could carry all those Southern conservative Democrats along with all the Leftist coastal Democrats and the rust belt union states. The pundit class agreed that he had what it would take to unseat an incumbent.

What nobody except Rush Limbaugh was talking about was that mass media was working as an extension of the Democrat campaign.

Media talking heads started saying that Bush was so focused on international events that he did not care about domestic affairs. Their spin was that his energetic and careful restraint on the international front caused him to neglect domestic issues. The recession was blamed on Bush, and the actual causes were ignored. Democrats raised the hue and cry, and mass media amplified it.

They also reinforced it through dishonest reporting on the economy. They reported every bit of economic news, maintaining a careful accounting. But that is not how Americans learn news. Bad economic news was reported, and good economic news was reported. Then the bad news was repeated, while the good news was shelved. Bad news got talked about, and good news did not get talked about. Reporters asked questions at news conferences about bad news, but not about good news. Chattering shows dwelt on bad news and ignored good news. Editorials focused on bad news and not good news. If much of the American economy is dependent on “consumer confidence,” then the whole economy resisted recovery because consumer confidence was killed by constant media focus on bad economic news.

James Carville famously observed that Clinton’s main message was “it’s the economy, stupid.” This sound bite leveraged the mass media narrative in a way that was condescending and arrogant, which was what made Carville such a good hatchet man.

At every opportunity, at the Convention and all through the fall campaigning, G.H.W. Bush kept saying that all the indicators showed the economy had bottomed out in the early spring of 1992, that the American economy was robust and building, and that the best thing to do about it was to do nothing.

He was ridiculed. He was mocked and scoffed at. He was called “out of touch.” He was called an elitist who never did his own grocery shopping. In an effort to address that, he went grocery shopping, which turned out disastrously when it became clear that he had never seen a checkout scanner in use. He was widely mocked for that, although grocery scanners had only come into widespread use in the previous five years. The optics were bad.

And he was too genteel to call out the reporters who rode Air Force One for their poor and unfair journalism. They continued to carry bad economic news to boost Bill Clinton.

And in the third ring of this circus, H. Ross Perot stole enough votes away to throw the election.

Clinton Economy?

George H. W. Bush lost in 1992 and Bill Clinton became president. He had his massive jobs program introduced and passed in the House. It was spiked by Bob Dole in the Senate. Dole killed it so dead that it was never mentioned again.

Mostly it was forgotten because it was not needed. Other economy-boosting measures introduced by Democrats also died. What happened was that the Fed kept interest rates low, and that was all that was needed for the American economy to recover. It was more than a recovery. It was a booming economy.

So, what Bill Clinton actually did for the economy was to do nothing, because Bob Dole prevented him from doing the stupid stuff he had promised while campaigning. He even won reelection in 1996 on the basis of his wonderful economic performance.

Bill Clinton and the Democrat-Media Complex are still taking bows for the wonderful economy of the 1990s. Nobody ever observes that it was G.H.W. Bush’s (and Bob Dole’s and Ronald Reagan’s) economy and economic policy that initiated it and provided room for American ingenuity to flourish.

Economists’ Assessment

Among the July reports on the first half of 1993, one came out saying that the economy was great, all indicators were up, and things looked really rosy.

What went unreported was a little paragraph noting that the bottom of the recession had been reached in March of the previous year.

What G.H.W.B. had been saying about the economy was exactly true. But Americans were never told that.



Christmas on December 25th

Hey, gang, we are going to celebrate the Festival of the Birth of Jesus on December 25th this year. We are going to join with all Western Christians and all the saints who have gone before us for the past 1900 years and more. Sometime this Advent season, probably on a Facebook page near you, you will see someone telling you how the Christians selected the date of December 25th by appropriating the date of a Pagan festival. That is a crock, and an anti-Christian slander, and this article explains why.

Most of you plain don’t care whether Christians appropriated a Pagan date. That is the typical reaction from Christians. We don’t really think there is anything special about the date; it is just the traditional time for an annual celebration of the Nativity miracle. And, since we believe that mankind is corrupted by sin, and because we are all aware that church leaders have let us down on many occasions, the tale sounds believable and does not particularly trouble us. So Christians generally accept it without question.

Unfortunately, this is the sort of deference on the part of Christians that allows anti-Christian falsehoods to proliferate. Many Christians, such as G.K. Chesterton, accepted this tale as true. Even the Catholic Encyclopedia entry for Christmas (which was written in 1908) mentions this theory with the remark that it is “plausible.” Lots and lots of Christians have simply accepted this anti-Christian falsehood, mostly because it is considered an unimportant detail.

There is much to say regarding this anti-Christian slander, so I will provide some long-winded information and some links for anyone who is interested, or who is cornered by someone who finds this particular assault on the traditional Christmas story to be troubling.

Appropriation theory

Anti-Christians have said that the date of December 25th was deliberately picked to coincide with a Roman Pagan celebration. There are several versions, but here are the two most popular ones: one says that it co-opted a solstice celebration, just getting the date off by a couple of days, and the other says it was to co-opt a festival for Sol Invictus (the Unconquered Sun god). Both versions are falsehoods that keep going around on the internet.

First, neither the Greeks nor the Romans had a solstice festival before Sol Invictus. I have sometimes seen anti-Christians on the internet point out that other Pagan cultures definitely did, but that does not hold up. There is no evidence that early Christians were in the business of co-opting dates or practices from the surrounding Greek Pagan culture (they opposed it in many ways), and, even if they were, they certainly would not have gone about picking dates from some far-away Pagan culture.

The second version also fails, on the basis that the Sol Invictus festival was initiated long after the Christians had agreed that December 25th is the most likely date for the Nativity. The Christians arrived at the December 25th date by completely independent reasoning that had nothing to do with any December events.

Mea culpa

The Sol Invictus theory began as a speculation by a 12th-century writer, and it was accepted by Christians and non-Christians alike as possible and plausible; in those days it was extremely difficult to access the sort of historical records that would have shed light on it. The theory was later reported as fact by a Protestant who used it as a smear against the Roman Catholic Church. It was spread by anti-Catholic Protestants. It has been picked up and used since the Enlightenment by anti-Christians of all sorts, and it gets spread today on the worldwide web by many who seek to undermine the teachings and traditions of orthodox Christians, both Catholic and Protestant.

Early Christian thinking

The Christians of the second century discussed the likely dates for several events in the life of Jesus, in the absence of precise dating in the Gospels. The matter that got the most discussion was the time of the Crucifixion, which was important for dating the Easter festival that commemorates the Resurrection. They were looking to establish the most appropriate date for this important feast, and were employing a Jewish tradition that held that prophets died on the same date that they were either born or conceived.

The short version of the reasoning is this: before John the Baptist was born, when his father Zechariah received his vision, he was serving in the Temple. From Luke chapter 1:

Now while [Zechariah] was serving as priest before God when his division was on duty, according to the custom of the priesthood, he was chosen by lot to enter the temple of the Lord and burn incense. 10 And the whole multitude of the people were praying outside at the hour of incense. 11 And there appeared to him an angel of the Lord standing on the right side of the altar of incense. 12 And Zechariah was troubled when he saw him, and fear fell upon him. 13 But the angel said to him, “Do not be afraid, Zechariah, for your prayer has been heard, and your wife Elizabeth will bear you a son, and you shall call his name John. 14 And you will have joy and gladness, and many will rejoice at his birth, 15 for he will be great before the Lord. …

18 And Zechariah said to the angel, “How shall I know this? For I am an old man, and my wife is advanced in years.” 19 And the angel answered him, “I am Gabriel. I stand in the presence of God, and I was sent to speak to you and to bring you this good news. 20 And behold, you will be silent and unable to speak until the day that these things take place, because you did not believe my words, which will be fulfilled in their time.” 21  And the people were waiting for Zechariah, and they were wondering at his delay in the temple. 22  And when he came out, he was unable to speak to them, and they realized that he had seen a vision in the temple. And he kept making signs to them and remained mute. 23 And when his time of service was ended, he went to his home.

The early Christians reasoned that if Zechariah could not be looked in on, then he must have been in the Most Holy Place, behind the veil, and so the event must have occurred during the annual festival of the Day of Atonement, which takes place in September. This was corroborated by a separate line of reasoning, based on the rotation of the priestly divisions and informed by a comment found in Josephus, which allowed backtracking to determine that Zechariah’s division of priests was serving in September.

If Elizabeth conceived John in September, then it would have been March when Mary conceived Jesus:

26 In the sixth month [of Elizabeth’s pregnancy] the angel Gabriel was sent from God to a city of Galilee named Nazareth, 27 to a virgin betrothed to a man whose name was Joseph, of the house of David. And the virgin’s name was Mary. 28 And he came to her and said, “Greetings, O favored one, the Lord is with you!”

They set the Feast of the Annunciation as March 25. Nine months later is December 25. This was established long before the first Feast of Sol Invictus. Clement of Alexandria wrote about it near the year 200 AD, as did Hippolytus of Rome. It appears from their writings that the date had been established prior to their day. Sol Invictus was first decreed by Emperor Aurelian in 274 AD.
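
For anyone who wants the chain of reasoning laid out as bare arithmetic, here is a minimal sketch in Python. The month numbers are mine, standing in for the traditional anchors (Zechariah's vision at the Day of Atonement in September, and the intervals given in Luke 1); it simply confirms that the traditional dates are self-consistent.

    # The traditional reckoning, reduced to month arithmetic.
    # Assumed anchors: Zechariah's vision in September (Day of Atonement)
    # and the intervals given in Luke 1.
    from calendar import month_name

    def add_months(month, n):
        """Advance a 1..12 month number by n, wrapping around the year."""
        return (month - 1 + n) % 12 + 1

    john_conceived = 9                            # September
    annunciation = add_months(john_conceived, 6)  # "in the sixth month"
    nativity = add_months(annunciation, 9)        # nine months of pregnancy

    print(month_name[annunciation], month_name[nativity])  # March December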

Summary

Here is an excerpt from an article by William Tighe:

Thus, December 25th as the date of the Christ’s birth appears to owe nothing whatsoever to pagan influences upon the practice of the Church during or after Constantine’s time. It is wholly unlikely to have been the actual date of Christ’s birth, but it arose entirely from the efforts of early Latin Christians to determine the historical date of Christ’s death.

And the pagan feast which the Emperor Aurelian instituted on that date in the year 274 was not only an effort to use the winter solstice to make a political statement, but also almost certainly an attempt to give a pagan significance to a date already of importance to Roman Christians.



Saturday Night Science: Apollo 8 Fifty Years Ago

[Image: Apollo 8 “Earthrise” photograph]

As the tumultuous year 1968 drew to a close, NASA faced a serious problem with the Apollo project. The Apollo missions had been carefully planned to test the Saturn V booster rocket and spacecraft (Command/Service Module [CSM] and Lunar Module [LM]) in a series of increasingly ambitious missions: first in low Earth orbit (where an immediate return to Earth was possible in case of problems), then in an elliptical Earth orbit which would exercise the on-board guidance and navigation systems, followed by lunar orbit, and finally proceeding to the first manned lunar landing. The Saturn V had been tested in two unmanned “A” missions: Apollo 4 in November 1967 and Apollo 6 in April 1968. Apollo 5 was a “B” mission, launched on a smaller Saturn 1B booster in January 1968, to test an unmanned early model of the Lunar Module in low Earth orbit, primarily to verify the operation of its engines and separation of the descent and ascent stages. Apollo 7, launched in October 1968 on a Saturn 1B, was the first manned flight of the Command and Service modules and tested them in low Earth orbit for almost 11 days in a “C” mission.

Apollo 8 was planned to be the “D” mission, in which the Saturn V, in its first manned flight, would launch the Command/Service and Lunar modules into low Earth orbit, where the crew, commanded by Gemini veteran James McDivitt, would simulate the maneuvers of a lunar landing mission closer to home. McDivitt’s crew was trained and ready to go in December 1968. Unfortunately, the lunar module wasn’t. The lunar module scheduled for Apollo 8, LM-3, had been delivered to the Kennedy Space Center in June of 1968, but was, to put things mildly, a mess. Testing at the Cape discovered more than a hundred serious defects, and by August it was clear that there was no way LM-3 would be ready for a flight in 1968. In fact, it would probably slip to February or March 1969. This, in turn, would push the planned “E” mission, aimed at testing the Command/Service and Lunar modules in an elliptical Earth orbit venturing as far as 7400 km from the planet, from its original date of March 1969 to June, delaying all subsequent planned missions and placing the goal of landing before the end of 1969 at risk. The crew then training for that “E” mission was commander Frank Borman, command module pilot James Lovell, and lunar module pilot William Anders.

But NASA were not just racing the clock—they were also racing the Soviet Union. Unlike Apollo, the Soviet space program was highly secretive and NASA had to go on whatever scraps of information they could glean from Soviet publications, the intelligence community, and independent tracking of Soviet launches and spacecraft in flight. There were, in fact, two Soviet manned lunar programmes running in parallel. The first, internally called the Soyuz 7K-L1 but dubbed “Zond” for public consumption, used a modified version of the Soyuz spacecraft launched on a Proton booster and was intended to carry two cosmonauts on a fly-by mission around the Moon. The craft would fly out to the Moon, use its gravity to swing around the far side, and return to Earth. The Zond lacked the propulsion capability to enter lunar orbit. Still, success would allow the Soviets to claim the milestone of first manned mission to the Moon. In September 1968 Zond 5 successfully followed this mission profile and safely returned a crew cabin containing tortoises, mealworms, flies, and plants to Earth after their loop around the Moon. A U.S. Navy destroyer observed recovery of the re-entry capsule in the Indian Ocean. Clearly, this was preparation for a manned mission which might occur on any lunar launch window.

(The Soviet manned lunar landing project was actually far behind Apollo, and would not launch its N1 booster on that first, disastrous, test flight until February 1969. But NASA did not know this in 1968.) Every slip in the Apollo program increased the probability of its being scooped so close to the finish line by a successful Zond flyby mission.

These were the circumstances in August 1968 when what amounted to a cabal of senior NASA managers including George Low, Chris Kraft, Bob Gilruth, and later joined by Wernher von Braun and chief astronaut Deke Slayton, began working on an alternative. They plotted in secret, beneath the radar and unbeknownst to NASA administrator Jim Webb and his deputy for manned space flight, George Mueller, who were both out of the country, attending an international conference in Vienna. What they were proposing was breathtaking in its ambition and risk. They envisioned taking Frank Borman’s crew, originally scheduled for Apollo 9, and putting them into an accelerated training program to launch on the Saturn V and Apollo spacecraft currently scheduled for Apollo 8. They would launch without a Lunar Module, and hence be unable to land on the Moon or test that spacecraft. The original idea was to perform a Zond-like flyby, but this was quickly revised to include going into orbit around the Moon, just as a landing mission would do. This would allow retiring the risk of many aspects of the full landing mission much earlier in the program than originally scheduled, and would also allow collection of precision data on the lunar gravitational field and high resolution photography of candidate landing sites to aid in planning subsequent missions. The lunar orbital mission would accomplish all the goals of the originally planned “E” mission and more, allowing that mission to be cancelled and therefore not requiring an additional booster and spacecraft.

But could it be done? There were a multitude of requirements, all daunting. Borman’s crew, training toward a launch in early 1969 on an Earth orbit mission, would have to complete training for the first lunar mission in just sixteen weeks. The Saturn V booster, which suffered multiple near-catastrophic engine failures in its second flight on Apollo 6, would have to be cleared for its first manned flight. Software for the on-board guidance computer and for Mission Control would have to be written, tested, debugged, and certified for a lunar mission many months earlier than previously scheduled. A flight plan for the lunar orbital mission would have to be written from scratch and then tested and trained in simulations with Mission Control and the astronauts in the loop. The decision to fly Borman’s crew instead of McDivitt’s was to avoid wasting the extensive training the latter crew had undergone in LM systems and operations by assigning them to a mission without an LM. McDivitt concurred with this choice: while it might be nice to be among the first humans to see the far side of the Moon with his own eyes, for a test pilot the highest responsibility and honour is to command the first flight of a new vehicle (the LM), and he would rather skip the Moon mission and fly later than lose that opportunity. If the plan were approved, Apollo 8 would become the lunar orbit mission and the Earth orbit test of the LM would be re-designated Apollo 9 and fly whenever the LM was ready.

While a successful lunar orbital mission on Apollo 8 would demonstrate many aspects of a full lunar landing mission, it would also involve formidable risks. The Saturn V, making only its third flight, was coming off a very bad outing in Apollo 6, whose failures might have injured a crew had one been aboard, damaged the spacecraft hardware, and precluded a successful mission to the Moon. Fixes for each of these problems had been implemented, but they had never been tested in flight, and there was always the possibility of new problems not previously seen.

The Apollo Command and Service modules, which would take them to the Moon, had not yet flown a manned mission and would not until Apollo 7, scheduled for October 1968. Even if Apollo 7 were a complete success (which was considered a prerequisite for proceeding), Apollo 8 would be only the second manned flight of the Apollo spacecraft, and the crew would have to rely upon the functioning of its power generation, propulsion, and life support systems for a mission lasting six days. Unlike an Earth orbit mission, if something goes wrong en route to or returning from the Moon, you can’t just come home immediately. The Service Propulsion System on the Service Module would have to work perfectly when leaving lunar orbit or the crew would be marooned forever or crash on the Moon. It would only have been tested previously in one manned mission and there was no backup (although the single engine did incorporate substantial redundancy in its design).

The spacecraft guidance, navigation, and control system and its Apollo Guidance Computer hardware and software, upon which the crew would have to rely to navigate to and from the Moon, including the critical engine burns to enter and leave lunar orbit while behind the Moon and out of touch with Mission Control, had never been tested beyond Earth orbit.

The mission would go to the Moon without a Lunar Module. If a problem developed en route to the Moon which disabled the Service Module (as would happen to Apollo 13 in April 1970), there would be no LM to serve as a lifeboat and the crew would be doomed.

When the high-ranking conspirators presented their audacious plan to their bosses, the reaction was immediate. Manned spaceflight chief Mueller said, “Can’t do that! That’s craziness!” His boss, administrator James Webb, said, “You try to change the entire direction of the program while I’m out of the country?” Mutiny is a strong word, but this seemed to verge upon it. Still, Webb and Mueller agreed to meet with the lunar cabal in Houston on August 22. After a contentious meeting, Webb agreed to proceed with the plan and to present it to President Johnson, who was almost certain to approve it, having great confidence in Webb’s management of NASA. The mission was on.

It was only then that Borman and his crewmembers Lovell and Anders learned of their reassignment. While Anders was disappointed at the prospect of being the Lunar Module Pilot on a mission with no Lunar Module, the prospect of being on the first flight to the Moon and entrusted with observation and photography of lunar landing sites more than made up for it. They plunged into an accelerated training program to get ready for the mission.

NASA went at the mission with its usual “can-do” attitude and public confidence, but everybody involved was acutely aware of the risks that were being taken. Susan Borman, Frank’s wife, privately asked Chris Kraft, the director of Flight Operations, part of the group who had advocated sending Apollo 8 to the Moon, and a man with a reputation as a plain-talking straight shooter, “I really want to know what you think their chances are of coming home.” Kraft responded, “You really mean that, don’t you?” “Yes,” she replied, “and you know I do.” Kraft answered, “Okay. How’s fifty-fifty?” Those within the circle, including the crew, knew what they were biting off.

The launch was scheduled for December 21, 1968. Everybody would be working through Christmas, including the twelve ships and thousands of sailors in the recovery fleet, but lunar launch windows are set by the constraints of celestial mechanics, not human holidays. In November, the Soviets had flown Zond 6, and it had demonstrated the “double dip” re-entry trajectory required for human lunar missions. There were two system failures which killed the animal test subjects on board, but these were covered up and the mission heralded as a great success. From what NASA knew, it was entirely possible the next launch would be with cosmonauts bound for the Moon.

Space launches were exceptional public events in the 1960s, and this one was the first flight of men to the Moon, coming just about a hundred years after Jules Verne envisioned three men setting out for the Moon from central Florida in a “cylindro-conical projectile” in De la terre à la lune (From the Earth to the Moon). The launch of Apollo 8 similarly engaged the world: it attracted around a quarter of a million people to watch the spectacle in person, and hundreds of millions watched on television, both in North America and around the globe, thanks to the newfangled technology of communication satellites. Let’s tune in to CBS television and relive this singular event with Walter Cronkite. (For one of those incomprehensible reasons in the Internet of Trash, this video, for which YouTube will happily generate an embed code, fails to embed in WordPress. You’ll have to click the link below to view it.)

CBS coverage of the Apollo 8 launch

Now we step inside Mission Control and listen in on the Flight Director’s audio loop during the launch, illustrated with imagery and simulations.

The Saturn V performed almost flawlessly. During the second stage burn, mild pogo oscillations began, but rather than progressing to the point where they almost tore the rocket apart, as had happened on the previous Saturn V launch, von Braun’s team’s fixes kicked in, and seconds later Borman reported, “Pogo’s damping out.” A few minutes later Apollo 8 was in Earth orbit.

Jim Lovell had sixteen days of spaceflight experience across two Gemini missions, one of them Gemini 7 where he endured almost two weeks in orbit with Frank Borman. Bill Anders was a rookie, on his first space flight. Now weightless, all three were experiencing a spacecraft nothing like the cramped Mercury and Gemini capsules which you put on as much as boarded. The Apollo command module had an interior volume of six cubic metres (218 cubic feet, in the quaint way NASA reckons things) which may not seem like much for a crew of three, but in weightlessness, with every bit of space accessible and usable, felt quite roomy. There were five real windows, not the tiny portholes of Gemini, and plenty of space to move from one to another.

With all this roominess and mobility came potential hazards, some verging on slapstick, but, in space, serious nonetheless. NASA safety personnel had required the astronauts to wear life vests over their space suits during the launch just in case the Saturn V malfunctioned and they ended up in the ocean. While moving around the cabin to get to the navigation station after reaching orbit, Lovell, who like the others hadn’t yet removed his life vest, snagged its activation tab on a strut within the cabin and it instantly inflated. Lovell looked ridiculous and the situation comical, but it was no laughing matter. The life vests were inflated with carbon dioxide which, if released in the cabin, would pollute their breathing air and removal would use up part of a CO₂ scrubber cartridge, of which they had a limited supply on board. Lovell finally figured out what to do. After being helped out of the vest, he took it down to the urine dump station in the lower equipment bay and vented it into a reservoir which could be dumped out into space. One problem solved, but in space you never know what the next surprise might be.

The astronauts wouldn’t have much time to admire the Earth through those big windows. Over Australia, just short of three hours after launch, they would re-light the engine on the third stage of the Saturn V for the “trans-lunar injection” (TLI) burn of 318 seconds, which would accelerate the spacecraft to just slightly less than escape velocity, raising its apogee so it would be captured by the Moon’s gravity. After housekeeping (presumably including the rest of the crew taking off those pesky life jackets, since there weren’t any wet oceans where they were going) and reconfiguring the spacecraft and its computer for the maneuver, they got the call from Houston, “You are go for TLI.” They were bound for the Moon.
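
How slight is “slightly less than escape velocity”? Here is a back-of-the-envelope check with the vis-viva equation, using my own illustrative numbers (a 185 km parking orbit and an apogee near the Moon’s mean distance), not NASA’s actual trajectory data:

    # Rough check of the TLI numbers via the vis-viva equation:
    #   v = sqrt(mu * (2/r - 1/a))
    # Illustrative values only, not the actual Apollo 8 targeting.
    from math import sqrt

    MU_EARTH = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
    r_park = 6378.137e3 + 185e3   # parking orbit radius, m
    r_apo = 385000e3              # transfer ellipse apogee, m

    v_circ = sqrt(MU_EARTH / r_park)               # ~7793 m/s in parking orbit
    v_esc = sqrt(2 * MU_EARTH / r_park)            # ~11021 m/s to escape
    a = (r_park + r_apo) / 2                       # transfer semi-major axis
    v_tli = sqrt(MU_EARTH * (2 / r_park - 1 / a))  # ~10929 m/s after the burn

    print(f"TLI {v_tli:.0f} m/s vs escape {v_esc:.0f} m/s")
    print(f"burn added about {v_tli - v_circ:.0f} m/s")

With these numbers the burn adds roughly 3100 m/s and leaves the spacecraft within about 100 m/s of escape velocity: still bound to Earth, but so barely that the Moon’s gravity can capture the trajectory.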

The third stage, which had failed to re-light on its last outing, worked as advertised this time, with a flawless burn. Its job was done; from here on the astronauts and spacecraft were on their own. The booster had placed them on a free-return trajectory. If they did nothing (apart from minor “trajectory correction maneuvers” easily accomplished by the spacecraft’s thrusters) they would fly out to the Moon, swing around its far side, and use its gravity to slingshot back to the Earth (as Lovell would do two years later when he commanded Apollo 13, although there the crew had to use the engine of the LM to get back onto a free-return trajectory after the accident).

Apollo 8 rapidly climbed out of the Earth’s gravity well, trading speed for altitude, and before long the astronauts beheld a spectacle no human eyes had glimpsed before: an entire hemisphere of Earth at once, floating in the inky black void. On board, there were other concerns: Frank Borman was puking his guts out and having difficulties with the other end of the tubing as well. Borman had logged more than six thousand flight hours in his career as a fighter and test pilot, most of it in high-performance jet aircraft, and fourteen days in space on Gemini 7 without any motion sickness. Many people feel queasy when they experience weightlessness the first time, but this was something entirely different and new in the American space program. And it was very worrisome. The astronauts discussed the problem on private tapes they could downlink to Mission Control without broadcasting to the public, and when NASA got around to playing the tapes, the chief flight surgeon, Dr. Charles Berry, became alarmed.

As he saw it, there were three possibilities: motion sickness, a virus of some kind, or radiation sickness. On its way to the Moon, Apollo 8 passed directly through the Van Allen radiation belts, spending two hours in this high radiation environment, the first humans to do so. The total radiation dose was estimated as roughly the same as one would receive from a chest X-ray, but the composition of the radiation was different and the exposure was over an extended time, so nobody could be sure it was safe. The fact that Lovell and Anders had experienced no symptoms argued against the radiation explanation. Berry concluded that a virus was the most probable cause and, based upon the mission rules, said, “I’m recommending that we consider canceling the mission.” The risk of proceeding with the commander unable to keep food down and possibly carrying a virus which the other astronauts might contract was too great in his opinion. This recommendation was passed up to the crew. Borman, usually calm and collected even by astronaut standards, exclaimed, “What? That is pure, unadulterated horseshit.” The mission would proceed, and within a day his stomach had settled.

This was the first case of space adaptation syndrome to afflict an American astronaut. (Apparently some Soviet cosmonauts had been affected, but this was covered up to preserve their image as invincible exemplars of the New Soviet Man.) It is now known to affect around a third of people experiencing weightlessness in environments large enough to move around, and spontaneously clears up in two to four (miserable) days.

The two most dramatic and critical events in Apollo 8’s voyage would occur on the far side of the Moon, with 3500 km of rock between the spacecraft and the Earth totally cutting off all communications. The crew would be on their own, aided by the computer and guidance system and calculations performed on the Earth and sent up before passing behind the Moon. The first would be lunar orbit insertion (LOI), scheduled for 69 hours and 8 minutes after launch. The big Service Propulsion System (SPS) engine (it was so big—twice as large as required for Apollo missions as flown—because it was designed to be able to launch the entire Apollo spacecraft from the Moon if a “direct ascent” mission mode had been selected) would burn for exactly four minutes and seven seconds to bend the spacecraft’s trajectory around the Moon into a closed orbit around that world.

If the SPS failed to fire for the LOI burn, it would be a huge disappointment but survivable. Apollo 8 would simply continue on its free-return trajectory, swing around the Moon, and fall back to Earth where it would perform a normal re-entry and splashdown. But if the engine fired and cut off too soon, the spacecraft would be placed into an orbit which would not return them to Earth, marooning the crew in space to die when their supplies ran out. If it burned just a little too long, the spacecraft’s trajectory would intersect the surface of the Moon—lithobraking is no way to land on the Moon.
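
The same sort of vis-viva arithmetic, again with my own illustrative numbers rather than the actual Apollo 8 targeting, shows how narrow the corridor between those two failures was:

    # Sensitivity of lunar orbit insertion to the delta-v achieved.
    # Illustrative numbers: arrival ~111 km above the Moon at 2470 m/s.
    from math import sqrt

    MU_MOON = 4.9048695e12           # Moon's gravitational parameter, m^3/s^2
    R_MOON = 1737.4e3                # mean lunar radius, m
    r = R_MOON + 111e3               # radius of the burn point, m
    v_arrive = 2470.0                # approach speed at the burn point, m/s
    v_esc = sqrt(2 * MU_MOON / r)    # local escape speed, ~2304 m/s

    for dv in (0, 100, 800, 900, 1000):
        v = v_arrive - dv            # speed after a retrograde burn of dv
        if v >= v_esc:
            print(f"dv {dv:4} m/s: still hyperbolic, coasts past the Moon")
            continue
        a = 1 / (2 / r - v * v / MU_MOON)  # vis-viva solved for semi-major axis
        r_other = 2 * a - r                # radius of the opposite apsis
        if r_other < R_MOON:
            print(f"dv {dv:4} m/s: opposite apsis below the surface: impact")
        else:
            print(f"dv {dv:4} m/s: captured, opposite apsis at "
                  f"{(r_other - R_MOON) / 1e3:.0f} km altitude")

In this toy model a burn of about 800 m/s yields a safe ellipse, while 900 m/s drops the far point of the orbit below the lunar surface: a difference of well under a minute of SPS burn time.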

When the SPS engine shut down precisely on time and the computer confirmed the velocity change of the burn and orbital parameters, the three astronauts were elated, but they were the only people in the solar system aware of the success. Apollo 8 was still behind the Moon, cut off from communications. The first clue Mission Control would have of the success or failure of the burn would be when Apollo 8’s telemetry signal was reacquired as it swung around the limb of the Moon. If too early, it meant the burn had failed and the spacecraft was coming back to Earth; that moment passed with no signal. Now tension mounted as the clock ticked off the seconds to the time expected for a successful burn. If that time came and went with no word from Apollo 8, it would be a really bad day. Just on time, the telemetry signal locked up and Jim Lovell reported, “Go ahead, Houston, this is Apollo 8. Burn complete. Our orbit 169.1 by 60.5.” (Lovell was using NASA’s preferred measure of nautical miles; in proper units that is 313 by 112 km. The orbit would subsequently be circularised by another SPS burn to 112.7 by 114.7 km.) The Mission Control room erupted into an un-NASA-like pandemonium of cheering.

Apollo 8 would orbit the Moon ten times, spending twenty hours in a retrograde orbit with an inclination of 12 degrees to the lunar equator, which would allow it to perform high-resolution photography of candidate sites for early landing missions under lighting conditions similar to those expected at the time of landing. In addition, precision tracking of the spacecraft’s trajectory in lunar orbit would allow mapping of the Moon’s gravitational field, including the “mascons” which perturb the orbits of objects in low lunar orbits and would be important for longer duration Apollo orbital missions in the future.

During the mission, the crew were treated to amazing sights and, in particular, the dramatic difference between the near side, with its many flat “seas”, and the rugged highlands of the far side. Coming around the Moon they saw the spectacle of earthrise for the first time and, hastily grabbing a magazine of colour film and setting aside the planned photography schedule, Bill Anders snapped the photo of the Earth rising above the lunar horizon which became one of the most iconic photographs of the twentieth century. Here is a reconstruction of the moment that photo was taken.

On the ninth and next-to-last orbit, the crew conducted a second television transmission which was broadcast worldwide. It was Christmas Eve on much of the Earth, and, coming at the end of the chaotic, turbulent, and often tragic year of 1968, it was a magical event, remembered fondly by almost everybody who witnessed it and felt pride for what the human species had just accomplished.

You have probably heard this broadcast from the Moon, often with the audio overlaid on imagery of the Moon from later missions, with much higher resolution than was actually seen in that broadcast. Here, in three parts, is what people, including this scrivener, actually saw on their televisions that enchanted night. The famous reading from Genesis is in the third part. This description is eerily similar to that in Jules Verne’s 1870 Autour de la lune.

After the end of the broadcast, it was time to prepare for the next and absolutely crucial maneuver, also performed on the far side of the Moon: trans-Earth injection, or TEI. This would boost the spacecraft out of lunar orbit and send it back on a trajectory to Earth. This time the SPS engine had to work, and perfectly. If it failed to fire, the crew would be trapped in orbit around the Moon with no hope of rescue. If it cut off too soon or burned too long, or the spacecraft was pointed in the wrong direction when it fired, Apollo 8 would miss the Earth and orbit forever far from its home planet or come in too steep and burn up when it hit the atmosphere. Once again the tension rose to a high pitch in Mission Control as the clock counted down to the two fateful times: this time they’d hear from the spacecraft earlier if it was on its way home and later or not at all if things had gone tragically awry. Exactly when expected, the telemetry screens came to life and a second later Jim Lovell called, “Houston, Apollo 8. Please be informed there is a Santa Claus.”

Now it was just a matter of falling the 375,000 kilometres from the Moon, hitting the precise re-entry corridor in the Earth’s atmosphere, executing the intricate “double dip” re-entry trajectory, and splashing down near the aircraft carrier which would retrieve the Command Module and crew. Earlier unmanned tests gave confidence it would all work, but this was the first time men would be trying it.

There was some unexpected and embarrassing excitement on the way home. Mission Control had called up a new set of co-ordinates for the “barbecue roll” which the spacecraft executed to even out temperature. Lovell was asked to enter “verb 3723, noun 501” into the computer. But, weary and short on sleep, he fat-fingered the commands and entered “verb 37, noun 01”. This told the computer the spacecraft was back on the launch pad, pointing straight up, and it immediately slewed to what it thought was that orientation. Lovell quickly figured out what he’d done, “It was my goof”, but by this time he’d “lost the platform”: the stable reference the guidance system used to determine in which direction the spacecraft was pointing in space. He had to perform a manual alignment, taking sightings on a number of stars, to recover the correct orientation of the stable platform. This was completely unplanned but, as it happens, in doing so Lovell acquired experience that would prove valuable when he had to perform the same operation in much more dire circumstances on Apollo 13 after an explosion disabled the computer and guidance system in the Command Module. Here is the author of the book, Jeffrey Kluger, discussing Jim Lovell’s goof.

The re-entry went completely as planned, flown entirely under computer control, with the spacecraft splashing into the Pacific Ocean just 6 km from the aircraft carrier Yorktown. But because the splashdown occurred before dawn, it was decided to wait until the sky brightened to recover the crew and spacecraft. Forty-three minutes after splashdown, divers from the Yorktown arrived at the scene, and forty-five minutes after that the crew was back on the ship. Apollo 8 was over, a total success. This milestone in the space race had been won definitively by the U.S., and shortly thereafter the Soviets abandoned their Zond circumlunar project, judging it an anticlimax and admission of defeat to fly by the Moon after the Americans had already successfully orbited it.

This is the official NASA contemporary documentary about Apollo 8.

Here is an evening with the Apollo 8 astronauts recorded at the National Air and Space Museum on 2008-11-13 to commemorate the fortieth anniversary of the flight.

This is a reunion of the Apollo 8 astronauts on 2009-04-23.

As of this writing, all of the crew of Apollo 8 are alive, and, in a business where divorce was common, remain married to the women they wed as young military officers.

Kluger, Jeffrey. Apollo 8. New York: Picador, 2017. ISBN 978-1-250-18251-7.



Defense Distributed Declaration

In the ongoing litigation between Defense Distributed and state attorneys general over the distribution of three-dimensional models of firearms and components thereof over the Internet (which has been approved by all federal regulatory agencies), I was asked to submit an affidavit in support of the Defense Distributed case.  I have previously described this case here in my post “Code is Speech”.

Here is what I drafted which, after consultation with others whose efforts are much appreciated but who will remain unnamed, will be submitted into the public record.  This is exactly what was submitted, less my signature: why make it easy for identity thieves?  It was submitted, as is customary, in gnarly monospaced text with no mark-up.  If it shows up in your browser with awkward line breaks, try making the browser window wider and it should get better.  If you’re on a tablet or mobile phone, try it when you get back to the desktop.

The opening and closing paragraphs are as prescribed in 28 U.S.C. § 1746 for an “Unsworn declaration under penalty of perjury” by a non-U.S. person.  This is also called a “self-authenticating affidavit”.

This may seem lukewarm to those accustomed to my usual firebrand rhetoric.  In this declaration, I only wanted to state things which I knew or believed based upon my own personal experience.  Consequently, I eschewed discussing the state of the art in additive manufacturing (I have never actually seen nor used an additive manufacturing machine) or the limitations of present-day machines (all of that may, and probably will, change in a few years).

Attorneys for Defense Distributed expect to lose in the original district court litigation and in the Ninth Circuit, but this declaration is intended for use in higher court appeals, where cases receive less ideological and more fact-based scrutiny.

Although I really had better things to do this week, I was glad to take the time to support the Defense Distributed case.  Even if you don’t care about guns, the attorneys general’s position in this case argues that computer-mediated speech (the transmission of files from computer to computer) is not speech protected by the First Amendment.  This is arguably the greatest assault on free speech since the adoption of that amendment.

I am privileged to have the opportunity to oppose it.

(This declaration is a public document which will become part of the record of the trial and eventual appeals.  I am disclosing nothing here which will not be available to those following the litigation.)

                DECLARATION OF JOHN WALKER

I, John Walker, pursuant to 28 U.S.C. § 1746 hereby declare and
say as follows:

    1.  I was a co-founder of Autodesk, Inc. (ADSK:NASDAQ),
        developer of the AutoCAD® computer-aided design
        software.  I was president, chairman, and chief
        executive officer from the incorporation of the company
        in April 1982 until November 1986, more than a year
        after its initial public stock offering in June 1985. I
        continued to serve as chairman of the board of directors
        until April 1988, after which I concentrated on software
        development.

    2.  Autodesk is the developer of the AutoCAD® software, one
        of the most widely-used computer-aided design and
        drafting software packages in the world.  AutoCAD allows
        creation of two- and three-dimensional models of designs
        and, with third-party products, their analysis and
        fabrication.

    3.  During the start-up phase of Autodesk, I was one of the
        three principal software developers of AutoCAD and wrote
        around one third of the source code of the initial
        release of the program.

    4.  Subsequently, I contributed to the development of
        three-dimensional extensions of the original AutoCAD
        drafting system, was lead developer on AutoShade[tm],
        which produced realistic renderings of three-dimensional
        models, and developed the prototype of integration of
        constructive solid geometry into AutoCAD, which was
        subsequently marketed as the AutoCAD Advanced Modeling
        Extension (AME).

    5.  I retired from Autodesk in 1994 and since have had no
        connection with the company other than as a shareholder
        with less than 5% ownership of the company's common
        stock.

    Design Versus Fabrication

    6.  From my experience at Autodesk, I became aware of the
        distinction between the design of an object and the
        fabrication of that object from the design.  For
        example, the patent drawings and written description in
        firearms patents provide sufficient information "as to
        enable any person skilled in the art to which it
        pertains, or with which it is most nearly connected, to
        make and use the same, and shall set forth the best mode
        contemplated by the inventor or joint inventor of
        carrying out the invention" [35 U.S.C. § 112 (a)].  But
        this is in no way a mechanical process.  One must
        interpret the design, choose materials suitable for each
        component, and then decide which manufacturing process
        (milling, stamping, turning, casting, etc.) is best to
        produce it, including steps such as heat-treating and
        the application of coatings.  This process is called
        "production planning", and it is a human skill that is
        required to turn a design, published in a patent
        description or elsewhere, into a physical realisation of
        the object described by that design.

    7.  A three-dimensional model of an object specifies its
        geometry but does not specify the materials from which
        it is fabricated, how the fabrication is done, or any
        special steps required (for example, annealing or other
        heat treating, coatings, etc.) before the component is
        assembled into the design.

    8.  Three-dimensional models of physical objects have many
        applications other than computer-aided manufacturing.
        Three-dimensional models are built to permit analysis of
        designs including structural strength and heat flow via
        the finite element method.  Models permit rendering of
        realistic graphic images for product visualisation,
        illustration, and the production of training and service
        documentation.  Models can be used in simulations to
        study the properties and operation of designs prior to
        physically manufacturing them. Models for finite element
        analysis have been built since the 1960s, decades before
        the first additive manufacturing machines were
        demonstrated in the 1980s.

    9.  Some three-dimensional models contain information which
        goes well beyond a geometric description of an object
        for manufacturing.  For example, it is common to produce
        "parametric" models which describe a family of objects
        which can be generated by varying a set of inputs
        ("parameters").  For example, a three-dimensional model
        of a shoe could be parameterised to generate left and
        right shoes of various sizes and widths, with
        information within the model automatically adjusting the
        dimensions of the components of the shoe accordingly.
        The model is thus not the rote expression of a
        particular manufactured object but rather a description
        of a potentially unlimited number of objects where the
        intent of the human designer, in setting the parameters,
        determines the precise geometry of an object built from
        the model.
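
        To make the idea concrete, the following minimal
        sketch, in the Python programming language, generates
        the corner points of a shoe's bounding box from three
        parameters.  (The sizing rule and function name are
        invented purely for this illustration and correspond
        to no actual product.)

            # A toy parametric "model": one description, many objects.
            def shoe_box(size, width, side):
                """Corner vertices of a shoe's bounding box.

                size  -- notional shoe size, turned into a length
                width -- width, in the same units
                side  -- "left" or "right"; left is mirrored in X
                """
                length = 2.0 + 0.85 * size   # illustrative sizing rule
                mirror = -1.0 if side == "left" else 1.0
                return [(mirror * x, y, z)
                        for x in (0.0, width)
                        for y in (0.0, length)
                        for z in (0.0, 1.0)]

            # The same description yields distinct geometries as
            # the parameters vary:
            right_size_9 = shoe_box(9, 3.5, "right")
            left_size_6 = shoe_box(6, 3.0, "left")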

   10.  A three-dimensional model often expresses relationships
        among components of the model which facilitate analysis
        and parametric design.  Such a model can be thought of
        like a spreadsheet, in which the values of cells are
        determined by their mathematical relationships to other
        cells, as opposed to a static table of numbers printed
        on paper.

    Additive Manufacturing ("3D Printing")

   11.  Additive manufacturing (often called, confusingly, "3D
        [for three-dimensional] printing") is a technology by
        which objects are built to the specifications of a
        three-dimensional computer model by a device which
        fabricates the object by adding material according to
        the design.  Most existing additive manufacturing
        devices can only use a single material in a production
        run, which limits the complexity of objects they can
        fabricate.

   12.  Additive manufacturing, thus, builds up a part by adding
        material, while subtractive manufacturing (for example,
        milling, turning, and drilling) starts with a block of
        solid material and cuts away until the desired part is
        left.  Many machine shops have tools of both kinds, and
        these tools may be computer controlled.

   13.  Additive manufacturing is an alternative to traditional
        kinds of manufacturing such as milling, turning, and
        cutting.  With few exceptions, any object which can be
        produced by additive manufacturing can be produced, from
        paper drawings or their electronic equivalent, with
        machine tools that date from the 19th century.  Additive
        manufacturing is simply another machine tool, and the
        choice of whether to use it or other tools is a matter
        of economics and the properties of the part being
        manufactured.

   14.  Over time, machine tools have become easier to use.  The
        introduction of computer numerical control (CNC) machine
        tools has dramatically reduced the manual labour
        required to manufacture parts from a design.  The
        computer-aided design industry, of which Autodesk is a
        part, has, over the last half-century, reduced the cost
        of going from concept to manufactured part, increasing
        the productivity and competitiveness of firms which
        adopt it and decreasing the cost of products they make.
        Additive manufacturing is one of a variety of CNC
        machine tools in use today.

   15.  It is in no sense true that additive manufacturing
        allows the production of functional objects such as
        firearms from design files without human intervention.
        Just as with a human trying to fabricate a firearm from its
        description in a patent filing (available in electronic
        form, like the additive manufacturing model), one must
        choose the proper material, its treatment, and how it is
        assembled into the completed product.  Thus, an additive
        manufacturing file describing the geometry of a
        component of a firearm is no more an actual firearm than
        a patent drawing of a firearm (published worldwide in
        electronic form by the U.S. Patent and Trademark Office)
        is a firearm.

    Computer Code and Speech

   16.  Computer programs and data files are indistinguishable
        from speech.  A computer file, including a
        three-dimensional model for additive manufacturing, can
        be expressed as text which one can print in a newspaper
        or pamphlet, declaim from a soapbox, or distribute via
        other media.  It may be boring to those unacquainted
        with its idioms, but it is speech nonetheless.  There is
        no basis on which to claim that computer code is not
        subject to the same protections as verbal speech or
        printed material.

   17.  For example, the following is the definition of a
        simple solid (a single triangular facet at the corner
        of a unit cube) in the STL language used to express
        models for many additive manufacturing devices.

            solid cube_corner
              facet normal 0.0 -1.0 0.0
                outer loop
                  vertex 0.0 0.0 0.0
                  vertex 1.0 0.0 0.0
                  vertex 0.0 0.0 1.0
                endloop
              endfacet
            endsolid

        This text can be written, read, and understood by a
        human familiar with the technology as well as by a
        computer.  It is entirely equivalent to a description of
        a unit cube written in English or another human
        language.  When read by a computer, it can be used for
        structural analysis, image rendering, simulation, and
        other applications as well as additive manufacturing.
        The fact that the STL language can be read by a computer
        in no way changes the fact that it is text, and thus,
        speech.

   18.  As an additional example, the following is an AutoCAD
        DXF[tm] file describing a two-dimensional line between
        the points (0, 0) and (1, 1), placed on layer 0 of a
        model.

              0
            SECTION
              2
            ENTITIES
              0
            LINE
              8
            0
              10
            0.0
              20
            0.0
              11
            1.0
              21
            1.0
              0
            ENDSEC
              0
            EOF

        Again, while perhaps not as easy to read as the STL
        file until one has learned its structure, this is
        clearly text, and thus speech.
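
        To illustrate how the same text communicates to a
        computer as well as to a human, the following minimal
        sketch, in the Python programming language, reads the
        group-code/value pairs above and recovers the line's
        endpoints.  (The file name "line.dxf" is hypothetical;
        in a LINE entity, group codes 10 and 20 hold the start
        point's X and Y, and 11 and 21 the end point's.)

            # Read the DXF text above (assumed saved as "line.dxf")
            # as alternating lines: a group code, then its value.
            def read_pairs(path):
                with open(path) as f:
                    lines = [ln.strip() for ln in f]
                return list(zip(lines[0::2], lines[1::2]))

            # Collect the coordinate group codes of the LINE entity.
            coords = {}
            for code, value in read_pairs("line.dxf"):
                if code in ("10", "20", "11", "21"):
                    coords[code] = float(value)

            start = (coords["10"], coords["20"])
            end = (coords["11"], coords["21"])
            print("Line from", start, "to", end)
            # Prints: Line from (0.0, 0.0) to (1.0, 1.0)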

   19.  It is common in computer programming and computer-aided
        design to consider computer code and data files written
        in textual form as simultaneously communicating to
        humans and computers.  Donald E. Knuth, professor
        emeritus of computer science at Stanford University and
        author of "The Art of Computer Programming", advised
        programmers:
            "Instead of imagining that our main task is to
            instruct a computer what to do, let us concentrate
            rather on explaining to human beings what we want a
            computer to do."[Knuth 1992]
        A design file, such as those illustrated above in
        paragraphs 17 and 18, is, similarly, a description of a
        design to a human as well as to a computer.  If it is a
        description of a physical object, a human machinist
        could use it to manufacture the object just as the
        object could be fabricated from the verbal description
        and drawings in a patent.

   20.  Computer code has long been considered text
        indistinguishable from any other form of speech in
        written form.  Many books, consisting in substantial
        part of computer code, have been published and are
        treated for the purpose of copyright and other
        intellectual property law like any other literary work.
        For example, the "Numerical Recipes"[Press] series of
        books presents computer code in a variety of programming
        languages which implements fundamental algorithms for
        numerical computation.

    Conclusions

   21.  There is a clear distinction between the design of an
        artefact, whether expressed in paper drawings, a written
        description, or a digital geometric model, and an object
        manufactured from that design.

   22.  Manufacturing an artefact from a design, however
        expressed, is a process involving human judgement in
        selecting materials and the tools used to fabricate
        parts from it.

   23.  Additive manufacturing ("3D printing") is one of a
        variety of tools which can be used to fabricate parts.
        It is in no way qualitatively different from alternative
        tools such as milling machines, lathes, drills, saws,
        etc., all of which can be computer controlled.

   24.  A digital geometric model of an object is one form of
        description which can guide its fabrication.  As such,
        it is entirely equivalent to, for example, a dimensioned
        drawing (blueprint) from which a machinist works.

   25.  Digital geometric models of objects can be expressed
        as text which can be printed on paper or read aloud
        as well as stored and transmitted electronically.
        Thus they are speech.

    References
        [Knuth 1992]   Knuth, Donald E.  Literate Programming.
                       Stanford, CA: Center for the Study of
                       Language and Information, 1992.
                       ISBN: 978-0-937073-80-3.

        [Press]        Press, William H. et al.  Numerical Recipes.
                       Cambridge (UK): Cambridge University Press,
                       (various dates).
                       Programming language editions:
                           C++     978-0-521-88068-8
                           C       978-0-521-43108-8
                           Fortran 978-0-521-43064-7
                           Pascal  978-0-521-37516-0

I declare under penalty of perjury under the laws of the United
States of America that the foregoing is true and correct.

Executed on November 22, 2018

                                            (Signature)
                                 _______________________________
                                           John Walker

Why I am grateful for the Reformation and Martin Luther

We have just had a big thread about the Catholic Church and it has brought something into focus for me. I do not believe the modern world would have happened without the Reformation. The Catholic Church supported Monarchy, centralized control, and a few men accumulating power.

As conservatives we talk about the Scottish and French Enlightenments. Well, one was in a Catholic country and one was not. We know how they turned out. Northern Europe, with its Protestant Work Ethic, has long been less corrupt in practice than Southern, Catholic Europe. Capitalism as we understand it was born in Northern Europe, with the Dutch, and later spread to England. Would a Catholic England have taken it up?

Further, it was primarily Protestants who settled America at first. The idea of religious tolerance was born from the friction of different Protestant faiths. Granted, it was also born of the fact that there was a Church of England, but the marriage of Church and State is something that Church inherited from the Catholic Church, and America continued the Protestant move away from it. I do not believe that any such thing would have happened in a Catholic-dominated America. Indeed, since the Catholic Church supported the Divine Right of Kings, it is hard to imagine that America would have ditched Kings at all. Maybe it would not even have rebelled, since no matter how bad the King, rebelling would be a violation of God’s law. Then again, the Catholic French did rebel against their King, so maybe those Americans would have too. One hopes with better results. In England, of course, there was a civil war over that Right. Would that have happened if the nation was still Catholic?

The Reformation also put pressure on the Catholic Church to reform. It no longer lets people buy their way into Heaven (I know, indulgences bought one’s way out of Purgatory, but since one exits Purgatory into Heaven, it is still buying one’s way into Heaven). Luther posted his Theses in 1517; such indulgences were banned in 1567. Certainly, in matters of temporal corruption around Monarchy the Church is much better now.

It is clear to me that the world in which we live, the one with America as the Shining City on the Hill, would not exist without the Reformation. I do not think that capitalism would have flourished, and with it, all the innovations. We would not have gone to the Moon, or have instant communications around the Earth. Progress would have been slowed, weighed down by an organization more concerned with maintaining its temporal power than with saving souls, as the sale of indulgences indicated. Thesis 82 asks:

Why does not the pope liberate everyone from purgatory for the sake of love (a most holy thing) and because of the supreme necessity of their souls? This would be morally the best of all reasons. Meanwhile he redeems innumerable souls for money, a most perishable thing, with which to build St. Peter’s church, a very minor purpose.

I praise God, and I mean that honestly, that 500 years ago God inspired Martin Luther to take a stand against the corruption of the Catholic Church, and that this allowed the great flourishing of Christianity in the world. Without the Protestant Reformation, there would have been no Adam Smith, no capitalism, no Scottish Enlightenment, no British Empire, and no United States of America.

Thank God for Martin Luther and his great and grand courage to stand up for what was right against a corrupt regime intent on its own glorification, rather than the glorification of God.


Book Review: People’s Republic

“People’s Republic” by Kurt Schlichter

As the third decade of the twenty-first century progressed, the Cold Civil War which had been escalating in the United States since before the turn of the century turned hot when a Democrat administration decided to impose their full agenda—gun confiscation, amnesty for all illegal aliens, restrictions on fossil fuels—all at once by executive order. The heartland defied the power grab, and militias of the left and right began to clash openly. Although the senior officer corps were largely converged to the leftist agenda, the military rank and file, which hailed largely from the heartland, defied them and could not be trusted to act against their fellow citizens. Much the same was the case with police in the big cities: they began to ignore the orders of their political bosses and migrate to jobs in more congenial jurisdictions.

With a low-level shooting war breaking out, the opposing sides decided that the only way to avert general conflict was, if not the “amicable divorce” advocated by Jesse Kelly, then a more bitter and contentious end to a union which was not working. The Treaty of Saint Louis split the country in two, with the east and west coasts and upper midwest calling itself the “People’s Republic of North America” (PRNA) and the remaining territory (including portions of some states like Washington, Oregon, and Indiana with a strong regional divide) continuing to call itself the United States, but with some changes: the capital was now Dallas, and the constitution had been amended to require any person not resident on its territory at the time of the Split (including children born thereafter) who wished full citizenship and voting rights to serve two years in the military with no “alternative service” for the privileged or connected.

The PRNA quickly implemented the complete progressive agenda wherever its rainbow flag (frequently revised as different victim groups clawed their way to the top of the grievance pyramid) flew. As police forces collapsed with good cops quitting and moving out, they were replaced by a national police force initially called the “People’s Internal Security Squads” (later the “People’s Security Force” when the acronym for the original name was deemed infelicitous), staffed with thugs and diversity hires attracted by the shakedown potential of carrying weapons among a disarmed population.

Life in the PRNA was pretty good for the coastal élites in their walled communities, but as with collectivism whenever and wherever it is tried, for most of the population life was a grey existence of collapsing services, food shortages, ration cards, abuse by the powerful, and constant fear of being denounced for violating the latest intellectual fad or using an incorrect pronoun. And, inevitably, it wasn’t long before the PRNA slammed the door shut to keep the remaining competent people from fleeing to where they were free to use their skills and keep what they’d earned. Mexico built a “big, beautiful wall” to keep hordes of PRNA subjects from fleeing to freedom and opportunity south of the border.

Several years after the Split, Kelly Turnbull, retired military and a veteran of the border conflicts around the Split, paid the upkeep of his 500 acre non-working ranch by spiriting people out of the PRNA to liberty in the middle of the continent. After completing a harrowing mission which almost ended in disaster, he is approached by a wealthy and politically-connected Dallas businessman who offers him enough money to retire if he’ll rescue his daughter who, indoctrinated by the leftist infestation still remaining at the university in Austin, defected to the PRNA and is being used in propaganda campaigns there at the behest of the regional boss of the secret police. In addition, a spymaster tasks him with bringing out evidence which will allow rolling up the PRNA’s informer and spy networks. Against his self-preservation instinct, which counsels lying low until the dust settles from the last mission, he opts for the money and the prospect of early retirement and undertakes the mission.

As Turnbull covertly enters the People’s Republic, makes his way to Los Angeles, and seeks his target, there is a superbly-sketched view of an America in which the progressive agenda has come to fruition, one which Americans may well be living in at the end of the next two Democrat-dominated administrations. It is often funny, as the author skewers the hypocrisy of the slavers mouthing platitudes they don’t believe for a femtosecond. (If you think it improper to make fun of human misery, recall the mordant humour in the Soviet Union as workers mocked the reality of the “workers’ paradise”.) There’s plenty of tension and action, and sometimes following Turnbull on his mission seems like looking over the shoulder of a player in a first-person shooter. He’s big on countdowns and tends to view “blues” obstructing him as NPCs to be dealt with quickly and permanently: “I don’t much like blues. You kill them or they kill you.”

This is a satisfying thriller which is probably a more realistic view of the situation in a former United States than an amicable divorce with both sides going their separate ways. The blue model is doomed to collapse, as it already has begun to in the big cities and states where it is in power, and with that inevitable collapse will come chaos and desperation which will spread beyond its borders. With Democrat politicians such as Occasional-Cortex who, a few years ago, hid behind such soothing labels as “liberal” or “progressive” now openly calling themselves “democratic socialists”, this is not just a page-turning adventure but a cautionary tale of the future should they win (or steal) power.

A prequel, Indian Country, which chronicles insurgency on the border immediately after the Split as guerrilla bands of the sane rise to resist the slavers, is now available.

Schlichter, Kurt. People’s Republic. Seattle: CreateSpace, 2016. ISBN 978-1-5390-1895-7.


TOTD 2018-11-7: Borderline Shooting

The shooter has been identified as Ian David Long, a 28-year-old former Marine. The motives are unclear.

It is sad to see young people who were out having a good time never make it home. The hearts of many families are broken. What took a few seconds will take a lifetime to deal with. The pain is more intense because of the senselessness.

How does one stop something like this happening? Do we just shrug our shoulders and say life happens?



Saturday Night Science: The Forgotten Genius of Oliver Heaviside

“The Forgotten Genius of Oliver Heaviside” by Basil Mahon

In 1861, when Oliver Heaviside was eleven, his family, supported by his father’s irregular income as an engraver of woodblock illustrations for publications (an art beginning to be threatened by the advent of photography) and by a day school for girls operated by his mother in the family’s house, received a small legacy which allowed them to move to a better part of London and enroll Oliver in the prestigious Camden House School, where he ranked among the top of his class, taking thirteen subjects including Latin, English, mathematics, French, physics, and chemistry. His independent nature and iconoclastic views had already begun to manifest themselves: despite being an excellent student he dismissed the teaching of Euclid’s geometry in mathematics and English rules of grammar as worthless. He believed that both mathematics and language were best learned, as he wrote decades later, “observationally, descriptively, and experimentally.” These principles would guide his career throughout his life.

At age fifteen he took the College of Preceptors examination, the equivalent of today’s A Levels. He was the youngest of the 538 candidates to take the examination and scored fifth overall and first in the natural sciences. This would easily have qualified him for admission to university, but family finances ruled that out. He decided to study on his own at home for two years and then seek a job, perhaps in the burgeoning telegraph industry. He would receive no further formal education after the age of fifteen.

His mother’s elder sister had married Charles Wheatstone, a successful and wealthy scientist, inventor, and entrepreneur whose inventions include the concertina, the stereoscope, and the Playfair encryption cipher, and who made major contributions to the development of telegraphy. Wheatstone took an interest in his bright nephew, and guided his self-studies after leaving school, encouraging him to master the Morse code and the German and Danish languages. Oliver’s favourite destination was the library, which he later described as “a journey into strange lands to go a book-tasting”. He read the original works of Newton, Laplace, and other “stupendous names” and discovered that with sufficient diligence he could figure them out on his own.

At age eighteen, he took a job as an assistant to his older brother Arthur, well-established as a telegraph engineer in Newcastle. Shortly thereafter, probably on the recommendation of Wheatstone, he was hired by the just-formed Danish-Norwegian-English Telegraph Company as a telegraph operator at a salary of £150 per year (around £12000 in today’s money). The company was about to inaugurate a cable under the North Sea between England and Denmark, and Oliver set off to Jutland to take up his new post. Long distance telegraphy via undersea cables was the technological frontier at the time—the first successful transatlantic cable had only gone into service two years earlier, and connecting the continents into a world-wide web of rapid information transfer was the booming high-technology industry of the age. While the job of telegraph operator might seem a routine clerical task, the élite who operated the undersea cables worked in an environment akin to an electrical research laboratory, trying to wring the best performance (words per minute) from the finicky and unreliable technology.

Heaviside prospered in the new job, and after a merger was promoted to chief operator at a salary of £175 per year and transferred back to England, at Newcastle. At the time, undersea cables were unreliable. It was not uncommon for the signal on a cable to fade and then die completely, most often due to a short circuit caused by failure of the gutta-percha insulation between the copper conductor and the iron sheath surrounding it. When a cable failed, there was no alternative but to send out a ship which would find the cable with a grappling hook, haul it up to the surface, cut it, and test whether the short was to the east or west of the ship’s position (the cable would work in the good direction but fail in the direction containing the short). Then the cable would be re-spliced, dropped back to the bottom, and the ship would set off in the direction of the short to repeat the exercise over and over until, by a process similar to binary search, the location of the fault was narrowed down and that section of the cable replaced. This was time-consuming and potentially hazardous given the North Sea’s propensity for storms, and while the cable remained out of service it made no money for the telegraph company.

Heaviside, who continued his self-study and frequented the library when not at work, realised that knowing the resistance and length of the functioning cable, which could be easily measured, it would be possible to estimate the location of the short simply by measuring the resistance of the cable from each end after the short appeared. He was able to cancel out the resistance of the fault, creating a quadratic equation which could be solved for its location. The first time he applied this technique his bosses were sceptical, but when the ship was sent out to the location he predicted, 114 miles from the English coast, they quickly found the short circuit.
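
To see how a quadratic arises, consider a sketch in the spirit of the classic earth test (an illustration of the idea rather than necessarily Heaviside’s exact procedure). Let r be the conductor’s resistance per mile, ℓ the length of the cable, x the unknown distance to the fault, and f the unknown resistance of the fault itself. Measuring the resistance from the near end, first with the far end isolated and then with the far end earthed, gives

    \[ R_1 = r x + f, \qquad R_2 = r x + \frac{f\,r(\ell - x)}{f + r(\ell - x)}. \]

Substituting f = R_1 - r x from the first equation into the second eliminates the unknown fault resistance and leaves a quadratic in x, solvable entirely from measurements made at the ends of the cable.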

At the time, most workers in electricity had little use for mathematics: their trade journal, The Electrician (which would later publish much of Heaviside’s work) wrote in 1861, “In electricity there is seldom any need of mathematical or other abstractions; and although the use of formulæ may in some instances be a convenience, they may for all practical purpose be dispensed with.” Heaviside demurred: while sharing disdain for abstraction for its own sake, he valued mathematics as a powerful tool to understand the behaviour of electricity and attack problems of great practical importance, such as the ability to send multiple messages at once on the same telegraphic line and increase the transmission speed on long undersea cable links (while a skilled telegraph operator could send traffic at thirty words per minute on intercity land lines, the transatlantic cable could run no faster than eight words per minute). He plunged into calculus and differential equations, adding them to his intellectual armamentarium.

He began his own investigations and experiments and began to publish his results, first in English Mechanic, and then, in 1873, the prestigious Philosophical Magazine, where his work drew the attention of two of the most eminent workers in electricity: William Thomson (later Lord Kelvin) and James Clerk Maxwell. Maxwell would go on to cite Heaviside’s paper on the Wheatstone Bridge in the second edition of his Treatise on Electricity and Magnetism, the foundation of the classical theory of electromagnetism, considered by many the greatest work of science since Newton’s Principia, and still in print today. Heady stuff, indeed, for a twenty-two year old telegraph operator who had never set foot inside an institution of higher education.

Heaviside regarded Maxwell’s Treatise as the path to understanding the mysteries of electricity he encountered in his practical work and vowed to master it. It would take him nine years and change his life. He would become one of the first and foremost of the “Maxwellians”, a small group including Heaviside, George FitzGerald, Heinrich Hertz, and Oliver Lodge, who fully grasped Maxwell’s abstract and highly mathematical theory (which, like many subsequent milestones in theoretical physics, predicted the results of experiments without providing a mechanism to explain them, such as earlier concepts like an “electric fluid” or William Thomson’s intricate mechanical models of the “luminiferous ether”) and built upon its foundations to discover and explain phenomena unknown to Maxwell (who would die in 1879 at the age of just 48).

While pursuing his theoretical explorations and publishing papers, Heaviside tackled some of the main practical problems in telegraphy. Foremost among these was “duplex telegraphy”: sending messages in each direction simultaneously on a single telegraph wire. He invented a new technique and was even able to send two messages at the same time in both directions as fast as the operators could send them. This had the potential to boost the revenue from a single installed line by a factor of four. Oliver published his invention, and in doing so made an enemy of William Preece, a senior engineer at the Post Office telegraph department, who had invented and previously published his own duplex system (which would not work), that was not acknowledged in Heaviside’s paper. This would start a feud between Heaviside and Preece which would last the rest of their lives and, on several occasions, thwart Heaviside’s ambition to have his work accepted by mainstream researchers. When he applied to join the Society of Telegraph Engineers, he was rejected on the grounds that membership was not open to “clerks”. He saw the hand of Preece and his cronies at the Post Office behind this and eventually turned to William Thomson to back his membership, which was finally granted.

By 1874, telegraphy had become a big business and the work was increasingly routine. In 1870, the Post Office had taken over all domestic telegraph service in Britain and, as government is wont to do, largely stifled innovation and experimentation. Even at privately-owned international carriers like Oliver’s employer, operators were no longer concerned with the technical aspects of the work but rather tending automated sending and receiving equipment. There was little interest in the kind of work Oliver wanted to do: exploring the new horizons opened up by Maxwell’s work. He decided it was time to move on. So, he quit his job, moved back in with his parents in London, and opted for a life as an independent, unaffiliated researcher, supporting himself purely by payments for his publications.

With the duplex problem solved, the largest problem that remained for telegraphy was the slow transmission speed on long lines, especially submarine cables. The advent of the telephone in the 1870s would increase the need to address this problem. While telegraphic transmission on a long line slowed down the speed at which a message could be sent, with the telephone voice became increasingly distorted the longer the line, to the point where, after around 100 miles, it was incomprehensible. Until this was understood and a solution found, telephone service would be restricted to local areas.

Many of the early workers in electricity thought of it as something like a fluid, where current flowed through a wire like water through a pipe. This approximation is more or less correct when current flow is constant, as in a direct current generator powering electric lights, but when current is varying a much more complex set of phenomena become manifest which require Maxwell’s theory to fully describe. Pioneers of telegraphy thought of their wires as sending direct current which was simply switched off and on by the sender’s key, but of course the transmission as a whole was a varying current, jumping back and forth between zero and full current at each make or break of the key contacts. When these transitions are modelled in Maxwell’s theory, one finds that, depending upon the physical properties of the transmission line (its resistance, inductance, capacitance, and leakage between the conductors) different frequencies propagate along the line at different speeds. The sharp on/off transitions in telegraphy can be thought of, by Fourier transform, as the sum of a wide band of frequencies, with the result that, when each propagates at a different speed, a short, sharp pulse sent by the key will, at the other end of the long line, be “smeared out” into an extended bump with a slow rise to a peak and then decay back to zero. Above a certain speed, adjacent dots and dashes will run into one another and the message will be undecipherable at the receiving end. This is why operators on the transatlantic cables had to send at the painfully slow speed of eight words per minute.

In telephony, it’s much worse because human speech is composed of a broad band of frequencies, and the frequencies involved (typically up to around 3400 cycles per second) are much higher than the off/on speeds in telegraphy. The smearing out or dispersion as frequencies are transmitted at different speeds results in distortion which renders the voice signal incomprehensible beyond a certain distance.

In the mid-1850s, during development of the first transatlantic cable, William Thomson had developed a theory called the “KR law” which predicted the transmission speed along a cable based upon its resistance and capacitance. Thomson was aware that other effects existed, but without Maxwell’s theory (which would not be published in its final form until 1873), he lacked the mathematical tools to analyse them. The KR theory, which produced results that predicted the behaviour of the transatlantic cable reasonably well, held out little hope for improvement: decreasing the resistance and capacitance of the cable would dramatically increase its cost per unit length.

Heaviside undertook to analyse what is now called the transmission line problem using the full Maxwell theory and, in 1878, published the general theory of propagation of alternating current through transmission lines, what are now called the telegrapher’s equations. Because he took resistance, capacitance, inductance, and leakage all into account and thus modelled both the electric and magnetic field created around the wire by the changing current, he showed that by balancing these four properties it was possible to design a transmission line which would transmit all frequencies at the same speed. In other words, this balanced transmission line would behave for alternating current (including the range of frequencies in a voice signal) just like a simple wire did for direct current: the signal would be attenuated (reduced in amplitude) with distance but not distorted.
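
In modern notation, with R, L, G, and C the line’s series resistance, series inductance, shunt leakage conductance, and shunt capacitance per unit length, the telegrapher’s equations for the voltage v(x, t) and current i(x, t) on the line read

    \[ \frac{\partial v}{\partial x} = -R\,i - L\,\frac{\partial i}{\partial t}, \qquad \frac{\partial i}{\partial x} = -G\,v - C\,\frac{\partial v}{\partial t}, \]

and Heaviside’s distortionless condition is R/L = G/C: when it holds, every frequency travels at the same speed 1/\sqrt{LC} and is attenuated by the same factor, so a signal arrives weakened but with its shape intact.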

In an 1887 paper, he further showed that existing telegraph and telephone lines could be made nearly distortionless by adding loading coils to increase the inductance at points along the line (as long as the distance between adjacent coils is small compared to the wavelength of the highest frequency carried by the line). This got him into another battle with William Preece, whose incorrect theory attributed distortion to inductance and advocated minimising self-inductance in long lines. Preece moved to block publication of Heaviside’s work, with the result that the paper on distortionless telephony, published in The Electrician, was largely ignored. It was not until 1897 that AT&T in the United States commissioned a study of Heaviside’s work, leading to patents eventually worth millions. The credit, and financial reward, went to Professor Michael Pupin of Columbia University, who became another of Heaviside’s life-long enemies.

You might wonder why what seems such a simple result (which can be written in modern notation as the equation L/R = C/G), with such immediate technological utility, eluded so many people for so long (recall that the problem with slow transmission on the transatlantic cable had been observed since the 1850s). The reason is the complexity of Maxwell’s theory and the formidably difficult notation in which it was expressed. Oliver Heaviside spent nine years fully internalising the theory and its implications, and he was one of only a handful of people who had done so and, perhaps, the only one grounded in practical applications such as telegraphy and telephony. Concurrent with his work on transmission line theory, he invented the mathematical field of vector calculus and, in 1884, reformulated Maxwell’s original theory, condensing its original form:

[Figure: Maxwell’s original 20 equations]

into the four famous vector equations we today think of as Maxwell’s.

[Figure: Maxwell’s equations in modern vector calculus notation]
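
In SI form, with E and B the electric and magnetic fields, D and H the corresponding flux quantities, ρ the charge density, and J the current density, they read

    \[ \nabla \cdot \mathbf{D} = \rho, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}. \]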

These are not only simpler, condensing twenty equations to just four, but provide (once you learn the notation and meanings of the variables) an intuitive sense for what is going on. This made, for the first time, Maxwell’s theory accessible to working physicists and engineers interested in getting the answer out rather than spending years studying an arcane theory. (Vector calculus was independently invented at the same time by the American J. Willard Gibbs. Heaviside and Gibbs both acknowledged the work of the other and there was no priority dispute. The notation we use today is that of Gibbs, but the mathematical content of the two formulations is essentially identical.)

And, during the same decade of the 1880s, Heaviside invented the operational calculus, a method of calculation which reduces the solution of complicated problems involving differential equations to simple algebra. Heaviside was able to solve so many problems which others couldn’t because he was using powerful computational tools they had not yet adopted. The situation was similar to that of Isaac Newton, who was effortlessly solving problems such as the brachistochrone using the calculus he’d invented while his contemporaries struggled with more cumbersome methods. Some of the things Heaviside did in the operational calculus, such as cancelling derivative signs in equations and taking the square root of a derivative sign, made rigorous mathematicians shudder but, hey, it worked and that was good enough for Heaviside and the many engineers and applied mathematicians who adopted his methods. (In the 1920s, pure mathematicians used the theory of Laplace transforms to reformulate the operational calculus in a rigorous manner, but this was decades after Heaviside’s work and long after engineers were routinely using it in their calculations.)
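
For a taste of the method, write p for d/dt and treat it as an algebraic quantity (a standard textbook illustration rather than one of Heaviside’s own worked problems). To solve dy/dt + ay = 1 for t > 0 with y(0) = 0, one writes

    \[ (p + a)\,y = 1 \quad\Longrightarrow\quad y = \frac{1}{p + a}\,1 = \frac{1}{a}\left(1 - e^{-at}\right), \]

the same answer classical integration yields, obtained by shuffling the operator around as if it were a number. The Laplace transform later supplied the rigorous justification for such manipulations.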

Heaviside’s intuitive grasp of electromagnetism and powerful computational techniques placed him in the forefront of exploration of the field. He calculated the electric field of a moving charged particle and found it contracted in the direction of motion, foreshadowing the Lorentz-FitzGerald contraction which would figure in Einstein’s special relativity. In 1889 he computed the force on a point charge moving in an electromagnetic field, which is now called the Lorentz force after Hendrik Lorentz, who independently discovered it six years later. He predicted that a charge moving faster than the speed of light in a medium (for example, glass or water) would emit a shock wave of electromagnetic radiation; in 1934 Pavel Cherenkov experimentally discovered the phenomenon, now called Cherenkov radiation, for which he won the Nobel Prize in 1958. In 1902, Heaviside applied his theory of transmission lines to the Earth as a whole and explained the propagation of radio waves over intercontinental distances as due to a transmission line formed by conductive seawater and a hypothetical conductive layer in the upper atmosphere dubbed the Heaviside layer. In 1924 Edward V. Appleton confirmed the existence of such a layer, the ionosphere, and won the Nobel Prize in 1947 for the discovery.

Oliver Heaviside never won a Nobel Prize, although he was nominated for the physics prize in 1912. He shouldn’t have felt too bad, though, as other nominees passed over for the prize that year included Hendrik Lorentz, Ernst Mach, Max Planck, and Albert Einstein. (The winner that year was Gustaf Dalén, “for his invention of automatic regulators for use in conjunction with gas accumulators for illuminating lighthouses and buoys”—oh well.) He did receive Britain’s highest recognition for scientific achievement, being named a Fellow of the Royal Society in 1891. In 1921 he was the first recipient of the Faraday Medal from the Institution of Electrical Engineers.

Having never held a job between 1874 and his death in 1925, Heaviside lived on his irregular income from writing, the generosity of his family, and, from 1896 onward, a pension of £120 per year (less than his starting salary as a telegraph operator in 1868) from the Royal Society. He was a proud man and refused several other offers of money which he perceived as charity. He turned down an offer of compensation for his invention of loading coils from AT&T when they refused to acknowledge his sole responsibility for the invention. He never married, and in his later years became somewhat of a recluse and, although he welcomed visits from other scientists, hardly ever left his home in Torquay in Devon.

His impact on the physics of electromagnetism and the craft of electrical engineering can be seen in the list of terms he coined which are in everyday use: “admittance”, “conductance”, “electret”, “impedance”, “inductance”, “permeability”, “permittance”, “reluctance”, and “susceptance”. His work has never been out of print, and sparkles with his intuition, mathematical prowess, and wicked wit directed at those he considered pompous or lost in needless abstraction and rigor. He never sought the limelight and among those upon whose work much of our present-day technology is founded, he is among the least known. But as long as electronic technology persists, it is a monument to the life and work of Oliver Heaviside.

Mahon, Basil. The Forgotten Genius of Oliver Heaviside. Amherst, NY: Prometheus Books, 2017. ISBN 978-1-63388-331-4.


Electromagnetic Discovery May Demystify Quantum Mechanics

Here’s a press release from Q-Track on my discovery and publication… Hans

Physicists have long been troubled by the paradoxes and contradictions of quantum mechanics. Yesterday, a possible step forward appeared in the Philosophical Transactions of the Royal Society A. In the paper, “Energy velocity and reactive fields” [paywall; free preprint], physicist Hans G. Schantz presents a novel way of looking at electromagnetics that shows the deep tie between electromagnetics and the pilot wave interpretation of quantum mechanics.

Schantz offers a solution to wave-particle duality by arguing that electromagnetic fields and energy are distinct phenomena instead of treating them as two aspects of the same thing. “Fields guide energy” in Schantz’s view. “As waves interfere, they guide energy along paths that may be substantially different from the trajectories of the waves themselves.” Schantz’s entirely classical perspective appears remarkably similar to the “pilot-wave” theory of quantum mechanics.

Schantz’s approach to electromagnetic theory focuses on the balance between electric and magnetic energy. When there are equal amounts of electric and magnetic energy, energy moves at the speed of light. As the ratio shifts away from an equal balance, energy slows down, coming to rest in the limit of electrostatic or magnetostatic fields. From this observation, Schantz derives a way to quantify the state of the electromagnetic field on a continuum between static and radiation fields, and ties this directly to the energy velocity.
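
In conventional electromagnetic notation (the symbols here are the standard ones, not notation taken from the paper), the energy velocity in question is the ratio of the power flux to the energy density,

    \[ v_E = \frac{\lvert \mathbf{E} \times \mathbf{H} \rvert}{\tfrac{1}{2}\left(\epsilon E^{2} + \mu H^{2}\right)}, \]

which equals the speed of light exactly when the electric and magnetic energy densities are in balance, and falls toward zero as the field becomes purely electrostatic or magnetostatic.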

“The fascinating result is that fields guide energy in a way exactly analogous to the way in which pilot waves guide particles in the Bohm-deBroglie theory,” Schantz explains. “Rather than an ad hoc approach to explain away the contradictions of quantum mechanics, pilot wave theory appears to be the natural application of classical electromagnetic ideas in the quantum realm.”

His solution to the “two slit” experiment that has perplexed generations of physicists?

“Fields behave like waves. When they interact with the two slits, they generate an interference pattern. The interference pattern guides a photon along a path to one spot on the screen. It’s not the photon interfering with itself. It’s the interfering waves guiding the photon.”

So which slit did the photon pass through?

“If the photon ends up on the left hand side of the screen, it went through the left slit. If it ends up on the right hand side of the screen, it went through the right slit. It really is that simple.”

Schantz applied these electromagnetic ideas to understand and explain how antennas work in his textbook, The Art and Science of Ultrawideband Antennas (Artech House, 2015). He’s also co-founder and CTO of Q-Track Corporation, a company that applies near-field wireless to the challenging problem of indoor location. “There are things you can do with low-frequency long-wavelength signals that simply aren’t possible with conventional wireless systems,” Schantz explains. “Understanding how static or reactive energy transforms into radiation has direct applications to antenna design as well as to near-field wireless systems.”

Schantz chose an unconventional way of popularizing his ideas. “I was amazed that my electromagnetic perspective was not discovered and adopted over a hundred years ago. It was as if someone had deliberately suppressed the discovery, so I undertook to write a science fiction series based on that premise.” Schantz’s Hidden Truth series debuted in 2016, and he released the third volume in the series, The Brave and the Bold, in October.

Schantz’s next project is a popular treatment of his physics ideas. Edited by L. Jagi Lamplighter Wright, Schantz’s book Fields: The Once and Future Theory of Everything will appear in 2019.


Book Review: Savrola

“Savrola” by Winston S. Churchill

In 1897, the young (23-year-old) Winston Churchill, on an ocean voyage from Britain to India to rejoin the army in the Malakand campaign, turned his pen to fiction and began this, his first and only novel. He set the work aside to write The Story of the Malakand Field Force, an account of the fighting and his first published work of non-fiction, then returned to the novel, completing it in 1898. It was serialised in Macmillan’s Magazine that year. (Churchill’s working title, Affairs of State, was changed by the magazine’s editors to Savrola, the name of a major character in the story.) The novel was subsequently published as a book under that title in 1900.

The story takes place in the fictional Mediterranean country of Laurania, where five years before the events chronicled here, a destructive civil war had ended with General Antonio Molara taking power as President and ruling as a dictator with the support of the military forces he commanded in the war. Prior to the conflict, Laurania had a long history as a self-governing republic, and unrest was growing as more and more of the population demanded a return to parliamentary rule. Molara announced that elections would be held for a restored parliament under the original constitution.

Then, on the day the writ ordering the election was to be issued, it was revealed that the names of more than half of the citizens on the electoral rolls had been struck by Molara’s order. A crowd gathered in the public square and, on hearing this news, became an agitated mob that threatened to storm the President’s carriage. The officer commanding the garrison ordered his troops to fire on the crowd.

All was now over. The spirit of the mob was broken and the wide expanse of Constitution Square was soon nearly empty. Forty bodies and some expended cartridges lay on the ground. Both had played their part in the history of human development and passed out of the considerations of living men. Nevertheless, the soldiers picked up the empty cases, and presently some police came with carts and took the other things away, and all was quiet again in Laurania.

The massacre, as it was called even by the popular newspaper The Diurnal Gusher which nominally supported the Government, not to mention the opposition press, only compounded the troubles Molara saw in every direction he looked. While the countryside was with him, sentiment in the capital was strongly with the pro-democracy opposition. Among the army, only the élite Republican Guard could be counted on as reliably loyal, and their numbers were small. A diplomatic crisis was brewing with the British over Laurania’s colony in Africa which might require sending the Fleet, also loyal, away to defend it. A rebel force, camped right across the border, threatens invasion at any sign of Molara’s grip on the nation weakening. And then there is Savrola.

Savrola (we never learn his first name) is the young (32-year-old), charismatic, intellectual, and persuasive voice of the opposition. While never stepping across the line sufficiently to justify retaliation, he manages to keep the motley groups of anti-Government forces in a loose coalition and is a constant thorn in the side of the authorities. He is not immune from introspection.

Was it worth it? The struggle, the labour, the constant rush of affairs, the sacrifice of so many things that make life easy, or pleasant—for what? A people’s good! That, he could not disguise from himself, was rather the direction than the cause of his efforts. Ambition was the motive force, and he was powerless to resist it.

This is a character one imagines the young Churchill having little difficulty writing. With the seemingly incorruptible Savrola gaining influence and almost certain to obtain a political platform in the coming elections, Molara’s secretary, the amoral but effective Miguel, suggests a stratagem: introduce Savrola to the President’s stunningly beautiful wife Lucile and use the relationship to compromise him.

“You are a scoundrel—an infernal scoundrel” said the President quietly.

Miguel smiled, as one who receives a compliment. “The matter,” he said, “is too serious for the ordinary rules of decency and honour. Special cases demand special remedies.”

The President wants to hear no more of the matter, but does not forbid Miguel from proceeding. An introduction is arranged, and Lucile rapidly moves from fascination with Savrola to infatuation. Then events rapidly spin out of anybody’s control. The rebel forces cross the border; Molara’s army is proved unreliable and disloyal; the Fleet, en route to defend the colony, is absent; Savrola raises a popular rebellion in the capital; and open fighting erupts.

This is a story of intrigue, adventure, and conflict in the “Ruritanian” genre popularised by the 1894 novel The Prisoner of Zenda. Churchill, building on his experience of war reportage, excels in and was praised for the realism of the battle scenes. The depiction of politicians, functionaries, and soldiers seems to veer back and forth between cynicism and admiration for their efforts in trying to make the best of a bad situation. The characters are cardboard figures and the love interest is clumsily described.

Still, this is an entertaining read and provides a window on how the young Churchill viewed the antics of colourful foreigners and their unstable countries, even if Laurania seems to have a strong veneer of Victorian Britain about it. The ultimate message is that history is often driven not by the plans of leaders, whether corrupt or noble, but by events over which they have little control. Churchill never again attempted a novel and thought little of this effort. In his 1930 autobiography covering the years 1874 through 1902 he writes of Savrola, “I have consistently urged my friends to abstain from reading it.” But then, Churchill was not always right—don’t let his advice deter you; I enjoyed it.

This work is available for free as a Project Gutenberg electronic book in a variety of formats. There are a number of print and Kindle editions of this public domain text; I have cited the least expensive print edition available at the time I wrote this review. I read this Kindle edition, which has a few typographical errors due to having been prepared by optical character recognition (for example, “stem” where “stern” was intended), but is otherwise fine.

One factlet I learned while researching this review is that “Winston S. Churchill” is actually a nom de plume. Churchill’s full name is Winston Leonard Spencer-Churchill, and he signed his early writings as “Winston Churchill”. Then, he discovered there was a well-known American novelist with the same name. The British Churchill wrote to the American Churchill and suggested using the name “Winston Spencer Churchill” (no hyphen) to distinguish his work. The American agreed, noting that he would also be willing to use a middle name, except that he didn’t have one. The British Churchill’s publishers abbreviated his name to “Winston S. Churchill”, which he continued to use for the rest of his writing career.

Churchill, Winston S. Savrola. Seattle: CreateSpace, [1898, 1900] 2018. ISBN 978-1-7271-2358-6.


White Women

I have noticed several stories over the past few weeks about white women voters. Also, I have seen fewer articles that appeared to be attacks on Evangelicals. In the heat of the 2018 campaigns, Big Media seems to have given up on wedging Evangelicals away from President Trump. They are now aiming at white women. They want to cry and bully and shame and scare enough white women to boost some sorry Democrats into office.

This is not so much a matter of articles. It has been a spate of editorials. The trick is that many of these editorials are hiding out in news pages instead of on opinion pages. When you are looking online, it is frequently hard to tell that you are clicking on an editorial until you are several paragraphs into the work. Big Media has quit separating news from opinion, and they have the idea that theirs are the only acceptable opinions.

White women, here are a few examples. In media eyes, you are the oppressed and you don’t even know it, or else you are determined to retain the privilege that comes from belonging to your class. You are either hateful racists or else clueless drones. Here is an example from a Leftist site:

https://www.thenation.com/article/the-reasons-why-white-women-vote-republican-and-what-to-do-about-it/

To understand the “white woman story,” we must first acknowledge that white supremacy remains the prevailing force in electoral politics….

There was a 20-point gap in support for Hillary Clinton between college-educated (56 percent) and non-college-educated (36 percent) white women in 2016. But there was also significant within-group variation, with support for Clinton 10+ points higher among unmarried women than married women and roughly 30 points higher among non-evangelicals than evangelical Christians across all educational levels. Such associations are significant because they reveal how systemic influences like marriage and evangelical Christianity interact with white supremacy to influence white women’s political behavior, through the explicit ideologies they propagate and the more insidious ways they reflect and perpetuate other structural inequalities. Some white women face voting pressure from their more-conservative husbands, a dynamic Hillary Clinton acknowledged in her analysis of her 2016 election loss.

I read parts of that article to Snooks.

“Gee, I’m a married Evangelical Christian white woman.”

“Well, Honey, you are just a slave of the Patriarchy.”

“No; I’m Queen. It’s good to be Queen.”

__  __  __  __  __  __  __  __  __  __  __  __

Julie Kelly had a post last week (Oct. 8) at American Greatness taking down some of this nonsense.

https://amgreatness.com/2018/10/08/dems-midterm-message-white-women-are-rape-apologists/

White Republican women’s support for Kavanaugh unleashed a collective “primal scream” (again) from Democratic women over the weekend once it was clear he would be confirmed.

Ms. Kelly cited some of the more high-profile examples of accusations of white supremacy, and then the way Leftist women turned on Susan Collins as a “rape apologist.” She gave particulars from atrocious editorials by Alexis Grenell in the New York Times and by Lucia Graves in The Guardian. Ms. Kelly mocked them with ridicule that would make the Happy Warriors proud. Then she began wrapping up with this:

It’s hard to see the value of the Democratic Party picking a fight with the largest voting demographic four weeks before a crucial election. But the tactic is obvious: Democrats cannot sway white women based on their ideas for the economy or national security or tax policy, so they’re left with coercion and intimidation. They want to shame white women voters into electing more Democrats by implying if we vote for Republicans, we are enabling and empowering rapists.

__  __  __  __  __

The Hill ran an article that dove into polling. https://thehill.com/opinion/campaign/409796-understanding-the-white-women-thing

Trump got a smaller percentage of white women’s votes (52 percent) than did Mitt Romney in 2012 (56 percent), John McCain in 2008 (53 percent), or George W. Bush in 2004 (55 percent). The truth is, most white women vote for Republicans — and they have for a long time.

That article, by Jamal Simmons, focuses on one particular progressive project aimed at white women:

…GALvanize USA, an organization focused on persuading more white women to make common cause with the progressive base on Election Day.

The group has been conducting research and testing strategies in four key states: Maine, Michigan, Washington and Iowa. Their work won’t have much impact in the 2018 election cycle but they hope to learn enough lessons this year and next to move some of these voters in the 2020 presidential cycle.

GALvanize is working from internal data that identify white women as the largest bloc of persuadable voters. Their husbands, brothers and sons are often far more conservative and, in many of the big swing states, white voters make up a larger proportion of the electorate.

__  __  __  __  __  __  __  __

The Atlanta Journal-Constitution ran a poll in Georgia shortly after the Kavanaugh Confirmation. They were horrified at the results. https://politics.myajc.com/blog/politics/white-women-voters-are-sticking-not-just-with-kemp-but-trump-too/tG6ypbHIcNNUJW2tL4MGYL/

In the race for governor, Kemp [a white man] leads Abrams [a black woman] 48 to 46 percent, a statistically insignificant difference given the poll’s margin of error of 2.8 percentage points. Raising the first eyebrow is the fact that Abrams has the support of only a bare majority of female voters – 50.4 percent.

Separating out white women voters, the margin of error jumps quite a bit – given that the subgroup is smaller, but the general trend is clear. Kemp gets 69 percent of the white female vote, compared to 27 for Abrams.
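
That jump in the margin of error is just the arithmetic of subsamples: the error in a sample proportion scales as the inverse square root of the group size. A quick Python sketch (the AJC did not publish the subgroup size, so the 350 below is an assumed figure for illustration):

    import math

    def margin_of_error(p, n, z=1.96):
        # 95 percent margin of error for a sample proportion p from n respondents
        return z * math.sqrt(p * (1 - p) / n)

    # A 2.8-point margin of error on the full sample implies roughly 1,200 respondents:
    print(round(100 * margin_of_error(0.50, 1225), 1))  # 2.8 points

    # For a hypothetical white-female subgroup of 350 respondents at 69 percent:
    print(round(100 * margin_of_error(0.69, 350), 1))   # 4.8 points

Even at nearly five points of error, a 69 to 27 split is far outside the noise, which is why the general trend is clear.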

__  __  __  __  __  __  __

The State Press, Arizona State’s student paper, ran an editorial that told white women to shut up and follow the lead of women of color:

http://www.statepress.com/article/2018/10/spopinion-white-feminism-allyship

__  __  __  __  __  __  __

I saw Emmett Till and the Scottsboro Boys dredged up in anti-white editorials. The hatred poured out on white women who don’t toe the Progressive line was impressive in its sheer emotional intensity.

I also kept seeing the statement that 52 percent of white women had “voted against their own self-interests” by voting for President Trump. I saw that at Skirt and at USA Today and a couple of other places. Because, I suppose, all women want unlimited abortion, all women want expanded entitlement programs, all women want to fully restore Obamacare, all women want open borders, and no women care about jobs or government overreach or individual liberties.

As far as Leftists are concerned, there is only one set of acceptable positions for women to take, and women who voted otherwise did so because they are oppressed by their menfolk.

__  __  __  __  __  __  __

There was a round of excitement in the Progressive ranks when Taylor Swift endorsed Democrat Phil Bredesen for U.S. Senate from Tennessee, over Congresswoman Marsha Blackburn. For example, Yahoo News and Google News both featured this column from GQ, by Mari Uyehara:

https://www.yahoo.com/lifestyle/taylor-swift-political-awakening-america-180558458.html

Swift has mostly shied away from politics, so much so that she’s been lambasted for it in the past. But her call to action this week seemed to portend something greater rumbling below ground: the political awakening of America’s once politically neutral white women. 

Politically neutral? No, just politically divided, as noted above in that article from The Hill. The fact that women are divided between the parties does not mean that women are neutral. Maybe the writer is thinking that women on opposite sides cancel out each other’s votes? On first reading it sort of sounded like something, but the more I considered it, the more meaningless the comment became.

But what she is really getting at is that women who voted for Trump just were not thinking right.

__  __  __  __  __  __  __

Relatively high in a Google search is an editorial that Time ran on October 4, by the emotionally troubled author of The Vagina Monologues. There is no need to hear more from her, but here is the link:

http://time.com/5415254/white-women-brett-kavanaugh-donald-trump/

__  __  __  __  __  __  __

The Atlantic ran a column by Neil J. Young, “Here’s Why White Women Are Abandoning the GOP.” His evidence is a Sept. 26 LA Times poll. He attributes the historic slight majority of white women voting for the GOP to class consciousness and security issues, and says Republicans are now hurting with white women because the Democrats have embraced the #MeToo movement and drawn a bright line against sexual misconduct, citing the cases of Al Franken and John Conyers. I thought the whole thing sounded mostly like wishful thinking:

https://www.theatlantic.com/ideas/archive/2018/10/will-white-suburban-women-vote-republican-november/571720/

National Review noticed this phenomenon and ran this article by Kyle Smith:

https://www.nationalreview.com/2018/10/white-women-becomes-a-disparaging-term/

One of several items cited was this:

A writer for The Root castigated Taylor Swift because “like some white women, she uses her privilege to not be involved until she’s directly affected.”

That writer, Michael Harriot, took umbrage at Smith’s piece and replied with an even more bile-filled racist screed. It is not worth discussing, but it could serve as an example of race hatred:

https://www.theroot.com/a-requiem-for-white-women-which-the-national-review-sa-1829747262

 

Marriage Gap

I think the Leftists are lazy and comfortable in their groupthink. They see a racial divide where there are other factors at work.

Marriage is key. Marital status explains the divide at least as well as race, and I think better. White women who vote Republican tend to be married women.

Blaming white women for gross racism is lazy and feeds the Left’s stupid narrative. They blame the difference on racism when there is clearly a more basic difference in perspective at work.

This is not a secret. It is widely known. They choose to avoid it because it does not serve their narrative. Even when they acknowledge it, it is a quick wink and then on to other things. Buried in a mountain of Leftist content, NPR ran this very brief piece:

https://www.npr.org/2018/09/26/651710680/women-voters-marriage-gap-and-the-midterms

NPR really liked what they heard from a Democrat pollster who says she sees signs that married women are leaving the Republican ranks, so they did a follow-up:

https://www.npr.org/2018/10/02/651439266/married-women-may-be-moving-away-from-the-gop

That article is focused on Michigan. I noticed that the same woman is featured as a woman-on-the-street in both articles, which makes me suspect the whole thing may be wishful thinking. It is mostly based on disapproval ratings for President Trump, which may or may not make a difference on election day.

The marriage gap is likewise at work when the broader electorate is considered. Blacks are married at a much lower rate than whites, and I think that has a lot to do with both their economic conditions and their voting habits.

Republicans have become the party of married people. This is because of Democrat hostility to traditional marriage. The GOP should run against the Democrats by branding them the party that breaks up families.

Western civilization is at stake.


Book Review: SJWs Always Double Down

In SJWs Always Lie, Vox Day introduced a wide audience to the contemporary phenomenon of Social Justice Warriors (SJWs): collectivists and radical conformists burning with the fierce ardour of ignorance who, flowing out of the academic jackal dens where they are manufactured, are infiltrating the culture (science fiction and fantasy, comic books, video games) and industry (technology companies, open source software development, and more established and conventional firms whose managements have often already largely bought into the social justice agenda).

The present volume updates the status of the Cold Civil War a couple of years on, recounts some key battles, surveys changes in the landscape, and provides concrete and practical advice to those who wish to avoid SJW penetration of their organisations or excise an infiltration already under way.

Two major things have changed since 2015. The first, and most obvious, is the election of Donald Trump as President of the United States in November, 2016. It is impossible to overstate the significance of this. Up until the evening of Election Day, the social justice warriors were absolutely confident they had won on every front and that all that remained was to patrol the battlefield and bayonet the wounded. They were ascendant across the culture, in virtually total control of academia and the media, and with the coronation of Hillary Clinton, positioned to tilt the Supreme Court to discover the remainder of their agenda emanating from penumbras in the living Constitution. And then—disaster! The deplorables who inhabit the heartland of the country, those knuckle-walking, Bible-thumping, gun-waving bitter clingers who produce just about every tangible thing still made in the United States up and elected somebody who said he’d put them—not the coastal élites, ivory tower professors and think tankers, “refugees” and the racket that imports them, “undocumented migrants” and the businesses that exploit their cheap labour, and all the rest of the parasitic ball and chain a once-great and productive nation has been dragging behind it for decades—first.

The shock of this event seems to have jolted a large fraction of the social justice warriors loose from their (already tenuous) moorings to reality. “What could have happened?”, they shrieked, “It must have been the Russians!” Overnight, there was the “resistance”, the rampage of masked violent street mobs, while at the same time SJW leaders in the public eye increasingly dropped the masks behind which they’d concealed their actual agenda. Now we have candidates for national office from the Democrat party, such as bug-eyed SJW Alexandria Occasional-Cortex, openly calling themselves socialists, while others chant “no borders” and advocate abolishing the federal immigration and customs enforcement agency. What’s the response to deranged leftists trying to gun down Republican legislators at a baseball practice and assaulting a U.S. Senator as he mowed the lawn of his home? The Democrat candidate who lost to Trump in 2016 said, “You cannot be civil with a political party that wants to destroy what you stand for, what you care about.” And the attorney general of the administration which preceded Trump’s, its chief law enforcement officer, said, “When they go low, we kick them. That’s what this new Democratic party is about.”

In parallel with this, the SJW convergence of the major technology and communication companies which increasingly dominate the flow of news and information and the public discourse: Google (and its YouTube), Facebook, Twitter, Amazon, and the rest, previously covert, has now become explicit. They no longer feign neutrality to content, or position themselves as common carriers. Now, they overtly put their thumb on the scale of public discourse, pushing down conservative and nationalist voices in search rankings, de-monetising or banning videos that oppose the slaver agenda, “shadow banning” dissenting voices or terminating their accounts entirely. Payment platforms and crowd-funding sites enforce an ideological agenda and cut off access to those they consider insufficiently on board with the collectivist, globalist party line. The high tech industry, purporting to cherish “diversity”, has become openly hostile to anybody who dares dissent: firing them and blacklisting them from employment at other similarly converged firms.

It would seem a dark time for champions of liberty, believers in reward for individual merit rather than grievance group membership, and other forms of sanity which are now considered unthinkable among the unthinking. This book provides a breath of fresh air, a sense of hope, and practical information to navigate a landscape populated by all too many non-playable characters who imbibe, repeat, and enforce the Narrative without questioning or investigating how it is created, disseminated in a co-ordinated manner across all media, and adjusted (including Stalinist party-line overnight turns on a dime) to advance the slaver agenda.

Vox Day walks through the eight stages of SJW convergence of an organisation from infiltration through evading the blame for the inevitable failure of the organisation once fully converged, illustrating the process with real-world examples and quotes from SJWs and companies infested with them. But the progression of the disease is not irreversible, and even if it is not arrested, there is still hope for the industry and society as a whole (not to minimise the injury and suffering inflicted on innocent and productive individuals in the affected organisations).

An organisation, whether a company, government agency, or open source software project, only comes onto the radar of the SJWs once it grows to a certain size and achieves a degree of success carrying out the mission for which it was created. It is at this point that SJWs will seek to penetrate the organisation, often through the human resources department, and then reinforce their ranks by hiring more of their kind. SJWs flock to positions in which there is no objective measure of their performance, but instead evaluations performed, as their ranks grow, more and more by one another. They are not only uninterested in the organisation’s mission (developing a product, providing a service, etc.), but unqualified and incapable of carrying it out. In the words of Jerry Pournelle’s Iron Law of Bureaucracy, they are not “those who are devoted to the goals of the organization” (founders, productive mission-oriented members), but “those dedicated to the organization itself”. “The Iron Law states that in every case the second group will gain and keep control of the organization. It will write the rules, and control promotions within the organization.”

Now, Dr Pournelle was describing a natural process of evolution in all bureaucratic organisations. SJW infection simply accelerates the process and intensifies the damage, because SJWs are not just focused on the organisation as opposed to its mission, but have their own independent agenda and may not care about damage to the institution as long as they can advance the Narrative.

But this is a good thing. It means that, in a competitive market, SJW afflicted organisations will be at a disadvantage compared to those which have resisted the corruption or thrown it off. It makes inflexible, slow-moving players with a heavy load of SJW parasites vulnerable to insurgent competitors, often with their founders still in charge, mission-focused and customer-oriented, who hire, promote, and reward contributors solely based on merit and not “diversity”, “inclusion”, or any of the other SJW shibboleths mouthed by the management of converged organisations. (I remember, when asked about my hiring policy in the 1980s, saying “I don’t care if they hang upside down from trees and drink blood. If they’re great programmers, I’ll hire them.”)

A detailed history of GamerGate provides a worked example of how apparent SJW hegemony within a community can be attacked by “weaponised autism” (as Milo Yiannopoulos said, “it’s really not wise to take on a collection of individuals whose idea of entertainment is to spend hundreds of hours at a highly repetitive task, especially when their core philosophy is founded on the principle that if you are running into enemies and taking fire, you must be going the right way”). Further examples show how these techniques have been applied within the world of science fiction and fantasy fandom, comic books, and software development. The key take-away is that any SJW converged organisation or community is vulnerable to concerted attack because SJWs are a parasite that ultimately kills its host. Create an alternative and relentlessly attack the converged competition, and victory is possible. And remember, “Victory is not positive PR. Victory is when your opponent quits.”

This is a valuable guide, building upon SJWs Always Lie (which you should read first), and is essential for managers, project leaders, and people responsible for volunteer organisations who want to keep them focused on the goals for which they were founded and protected from co-optation by destructive parasites. You will learn how seemingly innocent initiatives such as adoption of an ambiguously-worded Code of Conduct or a Community Committee can be the wedge by which an organisation can be subverted and its most productive members forced out or induced to walk away in disgust. Learning the lessons presented here can make the difference between success and, some dismal day, gazing across the cubicles at a sea of pinkhairs and soybeards and asking yourself, “Where did we go wrong?”

The very fact that SJW behaviour is so predictable makes them vulnerable. Because they always double down, they can be manipulated into marginalising themselves, and it’s often child’s play to set traps into which they’ll walk. Much of their success to date has been due to the absence of the kind of hard-edged opposition, willing to employ their own tactics against them, that you’ll see in action here and learn to use yourself. This is not a game for the “defeat with dignity” crowd who were, and are, appalled by Donald Trump’s plain speaking, or those who fail to realise that proclaiming “I won’t stoop to their level” inevitably ends up with “Bend over”. The battles, and the war, can be won, but to do so, you have to fight. Here is a guide to closing with the enemy and destroying them before they ruin everything we hold sacred.

Day, Vox [Theodore Beale]. SJWs Always Double Down. Kouvola, Finland: Castalia House, 2017. ISBN 978-952-7065-19-8.


Fred Cole the Defeatist

I don’t know this person very well, but from what I have seen, he is no friend of conservatives. He is not someone who wants things to go well for certain branches of the coalition. He openly scoffs at people and gets away with it. (Hollywood values, right?) He works for dissension, and dissension he gets. He has put out so many trollish posts that only someone with extreme myopia and reading comprehension issues could fail to see it. (That must be why they shut down those who do.)

I am writing this thinking of the danger to those who thought of following Fred Cole’s advice. He would have had Hillary as president, with a 6-3 balance on the Supreme Court. Just think of that: Mr. Libertarian getting all that liberalism because he couldn’t see what a lot of others saw. I am not saying he is blind. It is the walls that keep moving to hit him.

I remember he wrote a post asking the Clueless, “What are you going to do when Trump loses?” Well, he didn’t lose. And yet Fred Cole still writes as if he should be listened to.

Ratburger is a great place, a place where a public person like Fred Cole can be dealt with without the thread being shut down. Free expression is not something you buy for $5 a month. It is priceless.



TOTD 2018-10-8: The Next Step

In life it is easy to see a problem. Complainers are a dime a dozen. (Fill in your coin of choice if that offends you.) Few people take the next step and work toward positive change. Sadly, I see people stuck for forty years in a wilderness, murmuring, never taking the next step. These people walk into rooms and bring their own dimmer switches.

I appreciate that many on this site are “Next Step” types. They are people who are able to complain but also able to make a difference. Someone just put up a post about victims. Becoming a victim is terrible, but to remain a victim is even worse. Strangely, victims take comfort in their problems: “See how good I am, because someone is bad to me.” It is another way of not taking responsibility. “It’s not my fault.”

I like to think of America as the land of the “Next Step”. People in Europe decided to stop being victims and took a boat ride for a chance at something better. These people unleashed pent-up creativity. One of the greatest moments was when they saw that gift from France that lit a harbor.



Saturday Night Science: Life After Google

In his 1990 book Life after Television, George Gilder predicted that the personal computer, then mostly boxes that sat on desktops and worked in isolation from one another, would become more personal, mobile, and be used more to communicate than to compute. In the 1994 revised edition of the book, he wrote, “The most common personal computer of the next decade will be a digital cellular phone with an IP address … connecting to thousands of databases of all kinds.” In contemporary speeches he expanded on the idea, saying, “it will be as portable as your watch and as personal as your wallet; it will recognize speech and navigate streets; it will collect your mail, your news, and your paycheck.” In 2000, he published Telecosm, where he forecast that the building out of a fibre optic communication infrastructure and the development of successive generations of spread spectrum digital mobile communication technologies would effectively cause the cost of communication bandwidth (the quantity of data which can be transmitted in a given time) to asymptotically approach zero, just as the ability to pack more and more transistors on microprocessor and memory chips was doing for computing.

Clearly, when George Gilder forecasts the future of computing, communication, and the industries and social phenomena that spring from them, it’s wise to pay attention. He’s not infallible: in 1990 he predicted that “in the world of networked computers, no one would have to see an advertisement he didn’t want to see”. Oh, well. The very difference between that happy vision and the advertisement-cluttered world we inhabit today, rife with bots, malware, scams, and serial large-scale security breaches which compromise the personal data of millions of people and expose them to identity theft and other forms of fraud is the subject of this book: how we got here, and how technology is opening a path to move on to a better place.

The Internet was born with decentralisation as a central concept. Its U.S. government-funded precursor, ARPANET, was intended to research and demonstrate the technology of packet switching, in which dedicated communication lines from point to point (as in the telephone network) were replaced by switching packets, which can represent all kinds of data—text, voice, video, mail, cat pictures—from source to destination over shared high-speed data links. If the network had multiple paths from source to destination, failure of one data link would simply cause the network to reroute traffic onto a working path, and communication protocols would cause any packets lost in the failure to be automatically re-sent, preventing loss of data. The network might degrade and deliver data more slowly if links or switching hubs went down, but everything would still get through.
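
To make the route-around-damage idea concrete, here is a toy Python sketch, nothing like ARPANET's actual routing protocols, which were far more sophisticated: a small mesh of nodes, a breadth-first search for a path, and the alternate path the network falls back on when a node fails.

    from collections import deque

    # A small mesh: each node lists its directly connected peers.
    mesh = {
        "A": {"B", "C"},
        "B": {"A", "D"},
        "C": {"A", "D"},
        "D": {"B", "C", "E"},
        "E": {"D"},
    }

    def find_path(net, src, dst, down=frozenset()):
        # Breadth-first search for a path from src to dst, skipping failed nodes.
        queue = deque([[src]])
        seen = {src}
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == dst:
                return path
            for peer in net[node] - seen - down:
                seen.add(peer)
                queue.append(path + [peer])
        return None

    print(find_path(mesh, "A", "E"))              # ['A', 'B', 'D', 'E']
    print(find_path(mesh, "A", "E", down={"B"}))  # ['A', 'C', 'D', 'E']

Knock out node B and traffic simply flows through C; only severing every path isolates a destination.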

This was very attractive to military planners in the Cold War, who worried about a nuclear attack decapitating their command and control network by striking one or a few locations through which their communications funnelled. A distributed network, of which ARPANET was the prototype, would be immune to this kind of top-down attack because there was no top: it was made up of peers, spread all over the landscape, all able to switch data among themselves through a mesh of interconnecting links.

As the ARPANET grew into the Internet and expanded from a small community of military, government, university, and large company users into a mass audience in the 1990s, this fundamental architecture was preserved, but in practice the network bifurcated into a two tier structure. The top tier consisted of the original ARPANET-like users, plus “Internet Service Providers” (ISPs), who had top-tier (“backbone”) connectivity, and then resold Internet access to their customers, who mostly initially connected via dial-up modems. Over time, these customers obtained higher bandwidth via cable television connections, satellite dishes, digital subscriber lines (DSL) over the wired telephone network, and, more recently, mobile devices such as cellular telephones and tablets.

The architecture of the Internet remained the same, but this evolution resulted in a weakening of its peer-to-peer structure. The approaching exhaustion of 32 bit Internet addresses (IPv4) and the slow deployment of its successor (IPv6) meant most small-scale Internet users did not have a permanent address where others could contact them. In an attempt to shield users from the flawed security model and implementation of the software they ran, their Internet connections were increasingly placed behind firewalls and subjected to Network Address Translation (NAT), which made it impossible to establish peer to peer connections without a third party intermediary (which, of course, subverts the design goal of decentralisation). While on the ARPANET and the original Internet every site was a peer of every other (subject only to the speed of their network connections and computer power available to handle network traffic), the network population now became increasingly divided into producers or publishers (who made information available), and consumers (who used the network to access the publishers’ sites but did not publish themselves).

While in the mid-1990s it was easy (or as easy as anything was in that era) to set up your own Web server and publish anything you wished, now most small-scale users were forced to employ hosting services operated by the publishers to make their content available. Services such as AOL, Myspace, Blogger, Facebook, and YouTube were widely used by individuals and companies to host their content, while those wishing their own apparently independent Web presence moved to hosting providers who supplied, for a fee, the servers, storage, and Internet access used by the site.

All of this led to a centralisation of data on the Web, which was accelerated by the emergence of the high speed fibre optic links and massive computing power upon which Gilder had based his 1990 and 2000 forecasts. Both of these came with great economies of scale: it cost a company like Google or Amazon much less per unit of computing power or network bandwidth to build a large, industrial-scale data centre located where electrical power and cooling were inexpensive and linked to the Internet backbone by multiple fibre optic channels, than it cost an individual Internet user or small company with their own server on premises and a modest speed link to an ISP. Thus it became practical for these Goliaths of the Internet to suck up everybody’s data and resell their computing power and access at attractive prices.

As an example of the magnitude of the economies of scale we’re talking about, when I migrated the hosting of my Fourmilab.ch site from my own on-site servers and Internet connection to an Amazon Web Services data centre, my monthly bill for hosting the site dropped by a factor of fifty—not fifty percent, one fiftieth the cost, and you can bet Amazon’s making money on the deal.

This tremendous centralisation is the antithesis of the concept of ARPANET. Instead of a worldwide grid of redundant data links and data distributed everywhere, we have a modest number of huge data centres linked by fibre optic cables carrying traffic for millions of individuals and enterprises. A couple of submarines full of Trident D5s would probably suffice to reset the world, computer network-wise, to 1970.

As this concentration was occurring, the same companies who were building the data centres were offering more and more services to users of the Internet: search engines; hosting of blogs, images, audio, and video; E-mail services; social networks of all kinds; storage and collaborative working tools; high-resolution maps and imagery of the world; archives of data and research material; and a host of others. How was all of this to be paid for? Those giant data centres, after all, represent a capital investment of tens of billions of dollars, and their electricity bills are comparable to those of an aluminium smelter. Due to the architecture of the Internet or, more precisely, missing pieces of the puzzle, a fateful choice was made in the early days of the build-out of these services which now pervade our lives, and we’re all paying the price for it. So far, it has allowed the few companies in this data oligopoly to join the ranks of the largest, most profitable, and most highly valued enterprises in human history, but they may be built on a flawed business model and foundation vulnerable to disruption by software and hardware technologies presently emerging.

The basic business model of what we might call the “consumer Internet” (as opposed to businesses who pay to host their Web presence, on-line stores, etc.) has, with few exceptions, evolved to be what the author calls the “Google model” (although it predates Google): give the product away and make money by afflicting its users with advertisements (which are increasingly targeted to them through information collected from the user’s behaviour on the network through intrusive tracking mechanisms). The fundamental flaws of this are apparent to anybody who uses the Internet: the constant clutter of advertisements, with pop-ups, pop-overs, auto-play video and audio, flashing banners, incessant requests to allow tracking “cookies” or irritating notifications, and the consequent arms race between ad blockers and means to circumvent them, with browser developers (at least those not employed by those paid by the advertisers, directly or indirectly) caught in the middle. There are even absurd Web sites which charge a subscription fee for “membership” and then bombard these paying customers with advertisements that insult their intelligence. But there is a fundamental problem with “free”—it destroys the most important channel of communication between the vendor of a product or service and the customer: the price the customer is willing to pay. Deprived of this information, the vendor is in the same position as a factory manager in a centrally planned economy who has no idea how many of each item to make because his orders are handed down by a planning bureau equally clueless about what is needed in the absence of a price signal. In the end, you have freight cars of typewriter ribbons lined up on sidings while customers wait in line for hours in the hope of buying a new pair of shoes. Further, when the user is not the customer (the one who pays), and especially when a “free” service verges on monopoly status like Google search, Gmail, Facebook, and Twitter, there is little incentive for providers to improve the user experience or be responsive to user requests and needs. Users are subjected to the endless torment of buggy “beta” releases, capricious change for the sake of change, and compromises in the user experience on behalf of the real customers—the advertisers. Once again, this mirrors the experience of centrally-planned economies where the market feedback from price is absent: to appreciate this, you need only compare consumer products from the 1970s and 1980s manufactured in the Soviet Union with those from Japan.

The fundamental flaw in Karl Marx’s economics was his belief that the industrial revolution of his time would produce such abundance of goods that the problem would shift from “production amid scarcity” to “redistribution of abundance”. In the author’s view, the neo-Marxists of Silicon Valley see the exponentially growing technologies of computing and communication providing such abundance that they can give away its fruits in return for collecting and monetising information collected about their users (note, not “customers”: customers are those who pay for the information so collected). Once you grasp this, it’s easier to understand the politics of the barons of Silicon Valley.

The centralisation of data and information flow in these vast data silos creates another threat to which a distributed system is immune: censorship or manipulation of information flow, whether by a coercive government or ideologically-motivated management of the companies who provide these “free” services. We may never know who first said “The Internet treats censorship as damage and routes around it” (the quote has been attributed to numerous people, including two personal friends, so I’m not going there), but it’s profound: the original decentralised structure of the ARPANET/Internet is as robust against censorship as it is in the face of nuclear war. If one or more nodes on the network start to censor information or refuse to forward it on communication links it controls, the network routing protocols simply assume that node is down and send data around it through other nodes and paths which do not censor it. On a network with a multitude of nodes and paths among them, owned by a large and diverse population of operators, it is extraordinarily difficult to shut down the flow of information from a given source or viewpoint; there will almost always be an alternative route that gets it there. (Cryptographic protocols and secure and verified identities can similarly avoid the alteration of information in transit or forging information and attributing it to a different originator; I’ll discuss that later.) As with physical damage, top-down censorship does not work because there’s no top.

But with the current centralised Internet, the owners and operators of these data silos have enormous power to put their thumbs on the scale, tilting opinion in their favour and blocking speech they oppose. Google can push down the page rank of information sources of which they disapprove, so few users will find them. YouTube can “demonetise” videos because they dislike their content, cutting off their creators’ revenue stream overnight with no means of appeal, or they can outright ban creators from the platform and remove their existing content. Twitter routinely “shadow-bans” those with whom they disagree, causing their tweets to disappear into the void, and outright banishes those more vocal. Internet payment processors and crowd funding sites enforce explicit ideological litmus tests on their users, and revoke long-standing commercial relationships over legal speech. One might restate the original observation about the Internet as “The centralised Internet treats censorship as an opportunity and says, ‘Isn’t it great!’ ” Today there’s a top, and those on top control the speech of everything that flows through their data silos.

This pernicious centralisation and “free” funding by advertisement (which is fundamentally plundering users’ most precious possessions: their time and attention) were in large part the consequence of the Internet’s lacking three fundamental architectural layers: security, trust, and transactions. Let’s explore them.

Security. Essential to any useful communication system, security simply means that communications between parties on the network cannot be intercepted by third parties, modified en route, or otherwise manipulated (for example, by changing the order in which messages are received). The layered communication protocols of the Internet included no explicit security layer; security was expected to be implemented outside the protocol stack, by the applications above it. On today’s Internet, security has been bolted on, largely through the Transport Layer Security (TLS) protocols (which, due to history, have a number of other commonly used names, and are most often encountered in the “https:” URLs by which users access Web sites). But because TLS was bolted on rather than designed in from the bottom up, and because it “just grew”, it has been the locus of numerous security flaws which put software that employs it at risk. Further, TLS is a tool which must be used by application designers with extreme care in order to deliver security to their users. Even if TLS were completely flawless, it is very easy to misuse it in an application and compromise users’ security.
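
To illustrate how easy it is to misuse even a flawless TLS at the application layer, compare these two connections written with Python's standard ssl module. The first verifies the certificate chain and hostname; the second, a distressingly common copy-paste pattern, will happily talk to any impostor ("example.com" is a placeholder host):

    import socket
    import ssl

    host = "example.com"  # placeholder host

    # Correct: verify the server's certificate chain and its hostname.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("verified subject:", tls.getpeercert()["subject"])

    # Wrong, yet seen everywhere: all verification disabled, so a
    # man-in-the-middle with any self-signed certificate is accepted.
    bad = ssl.create_default_context()
    bad.check_hostname = False
    bad.verify_mode = ssl.CERT_NONE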

Trust. As indispensable as security is knowing to whom you’re talking. For example, when you connect to your bank’s Web site, how do you know you’re actually talking to their server and not some criminal whose computer has spoofed your computer’s domain name system server to intercept your communications and who, the moment you enter your password, will be off and running to empty your bank accounts and make your life a living Hell? Once again, trust has been bolted on to the existing Internet through a rickety system of “certificates” issued mostly by large companies for outrageous fees. And, as with anything centralised, it’s vulnerable: in 2016, one of the top-line certificate vendors was compromised, requiring myriad Web sites (including this one) to re-issue their security certificates.

Transactions. Business is all about transactions; if you aren’t doing transactions, you aren’t in business or, as Gilder puts it, “In business, the ability to conduct transactions is not optional. It is the way all economic learning and growth occur. If your product is ‘free,’ it is not a product, and you are not in business, even if you can extort money from so-called advertisers to fund it.” The present-day Internet has no transaction layer, even bolted on. Instead, we have more silos and bags hanging off the side of the Internet called PayPal, credit card processing companies, and the like, which try to put a Band-Aid over the suppurating wound which is the absence of a way to send money over the Internet in a secure, trusted, quick, efficient, and low-overhead manner. The need for this was perceived long before ARPANET. In Project Xanadu, founded by Ted Nelson in 1960, rule 9 of the “original 17 rules” was, “Every document can contain a royalty mechanism at any desired degree of granularity to ensure payment on any portion accessed, including virtual copies (‘transclusions’) of all or part of the document.” While defined in terms of documents and quoting, this implied the existence of a micropayment system which would allow compensating authors and publishers for copies and quotations of their work with a granularity as small as one character, and could easily be extended to cover payments for products and services. A micropayment system must be able to handle very small payments without crushing overhead, extremely quickly, and transparently (without the Japanese tea ceremony that buying something on-line involves today). As originally envisioned by Ted Nelson, as you read documents, their authors and publishers would be automatically paid for their content, including payments to the originators of material from others embedded within them. As long as the total price for the document was less than what I termed the user’s “threshold of paying”, this would be completely transparent (a user would set the threshold in the browser: if zero, they’d have to approve all payments). There would be no need for advertisements to support publication on a public hypertext network (although publishers would, of course, be free to adopt that model if they wished). If implemented in a decentralised way, like the ARPANET, there would be no central strangle point where censorship could be applied by cutting off the ability to receive payments.
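
The Internet has no such layer, so any code here is necessarily hypothetical, but a toy Python sketch of Nelson-style royalties with a “threshold of paying” might look like the following (every name and number below is invented for illustration):

    # Hypothetical micropayment check: pay per fragment accessed, silently
    # below the user's threshold, with explicit approval above it.

    THRESHOLD = 0.05   # user-configured "threshold of paying", in some currency unit

    def access(fragments, approve):
        # Tally royalties owed to each author for the fragments accessed.
        total = sum(f["price"] for f in fragments)
        if total > THRESHOLD and not approve(total):
            raise PermissionError("payment declined")
        ledger = {}
        for f in fragments:
            ledger[f["author"]] = ledger.get(f["author"], 0) + f["price"]
        return ledger   # remittances, settled by the (hypothetical) payment layer

    doc = [
        {"author": "alice", "price": 0.002},   # original text
        {"author": "bob",   "price": 0.001},   # transcluded quotation
    ]
    print(access(doc, approve=lambda t: False))  # 0.003 < threshold: transparent

Note how the transclusion case falls out naturally: the quoted author is paid alongside the quoting one, with no advertising anywhere in the loop.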

So, is it possible to remake the Internet, building in security, trust, and transactions as the foundation, and replace what the author calls the “Google system of the world” with one in which the data silos are seen as obsolete, control of users’ personal data and work returns to their hands, privacy is respected and the panopticon snooping of today is seen as a dark time we’ve put behind us, and the pervasive and growing censorship by plutocrat ideologues and slaver governments becomes impotent and obsolete? George Gilder responds “yes”, and in this book identifies technologies already existing and being deployed which can bring about this transformation.

At the heart of many of these technologies is the concept of a blockchain, an open, distributed ledger which records transactions or any other form of information in a permanent, public, and verifiable manner. Originally conceived as the transaction ledger for the Bitcoin cryptocurrency, it provided the first means of solving the double-spending problem (how do you keep people from spending a unit of electronic currency twice) without the need for a central server or trusted authority, and hence without a potential choke-point or vulnerability to attack or failure. Since the launch of Bitcoin in 2009, blockchain technology has become a major area of research, with banks and other large financial institutions, companies such as IBM, and major university research groups exploring applications with the goals of drastically reducing transaction costs, improving security, and hardening systems against single-point failure risks.
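
The core mechanism is simpler than the surrounding hype suggests: each block carries the hash of its predecessor, so altering any historical entry breaks every link after it. A minimal Python sketch (real chains add proof-of-work or another consensus rule, peer-to-peer distribution, and much else this toy omits):

    import hashlib
    import json

    def make_block(data, prev_hash):
        # A block binds its data to the hash of the previous block.
        block = {"data": data, "prev": prev_hash}
        block["hash"] = hashlib.sha256(
            json.dumps({"data": data, "prev": prev_hash}, sort_keys=True).encode()
        ).hexdigest()
        return block

    def valid(chain):
        # Recompute every hash; any tampering breaks the links.
        for i, b in enumerate(chain):
            expect = make_block(b["data"], b["prev"])["hash"]
            if b["hash"] != expect or (i > 0 and b["prev"] != chain[i - 1]["hash"]):
                return False
        return True

    chain = [make_block("genesis", "0" * 64)]
    chain.append(make_block("alice pays bob 5", chain[-1]["hash"]))
    chain.append(make_block("bob pays carol 2", chain[-1]["hash"]))
    print(valid(chain))            # True
    chain[1]["data"] = "alice pays bob 5000"
    print(valid(chain))            # False: the ledger is tamper-evident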

Applied to the Internet, blockchain technology can provide security and trust (through the permanent publication of public keys which identify actors on the network), and a transaction layer able to efficiently and quickly execute micropayments without the overhead, clutter, friction, and security risks of existing payment systems. By necessity, present-day blockchain implementations are add-ons to the existing Internet, but as the technology matures and is verified and tested, it can move into the foundations of a successor system, based on the same lower-level protocols (and hence compatible with the installed base), but eventually supplanting the patched-together architecture of the Domain Name System, certificate authorities, and payment processors, all of which represent vulnerabilities of the present-day Internet and points at which censorship and control can be imposed. The book surveys a number of technologies to watch in these areas.

As the bandwidth available to users on the edge of the network increases through the deployment of fibre to the home and enterprise and via 5G mobile technology, the data transfer economy of scale of the great data silos will begin to erode. Early in the Roaring Twenties, the aggregate computing power and communication bandwidth on the edge of the network will equal and eventually dwarf that of the legacy data smelters of Google, Facebook, Twitter, and the rest. There will no longer be any need for users to entrust their data to these overbearing anachronisms and consent to multi-dozen page “terms of service” or endure advertising just to see their own content or share it with others. You will be in possession of your own data, on your own server or on space for which you freely contract with others, with backup and other services contracted with any other provider on the network. If your server has extra capacity, you can turn it into money by joining the market for computing and storage capacity, just as you take advantage of these resources when required. All of this will be built on the new secure foundation, so you will retain complete control over who can see your data, no longer trusting weasel-worded promises made by amorphous entities with whom you have no real contract to guard your privacy and intellectual property rights. If you wish, you can be paid for your content, with remittances made automatically as people access it. More and more, you’ll make tiny payments for content which is no longer obstructed by advertising and chopped up to accommodate more clutter. And when outrage mobs of pink hairs and soybeards (each with their own pronoun) come howling to ban you from the Internet, they’ll find nobody to shriek at and the kill switch rusting away in a derelict data centre: your data will be in your own hands with access through myriad routes. The book points to several technologies moving in this direction.

This book provides a breezy look at the present state of the Internet, how we got here (versus where we thought we were going in the 1990s), and how we might transcend the present-day mess into something better if not blocked by the heavy hand of government regulation (the risk of freezing the present-day architecture in place by unleashing agencies like the U.S. Federal Communications Commission, which stifled innovation in broadcasting for six decades, to do the same to the Internet is discussed in detail). Although it’s way too early to see which of the many contending technologies will win out (and recall that the technically superior technology doesn’t always prevail), a survey of work in progress provides a sense for what they have in common and what the eventual result might look like.

There are many things to quibble about here. Gilder goes on at some length about how he believes artificial intelligence is all nonsense, that computers can never truly think or be conscious, and that creativity (new information in the Shannon sense) can only come from the human mind, with a lot of confused arguments from Gödel incompleteness, the Turing halting problem, and even the uncertainty principle of quantum mechanics. He really seems to believe in vitalism, that there is an élan vital which somehow infuses the biological substrate which no machine can embody. This strikes me as superstitious nonsense: a human brain is a structure composed of quarks and electrons arranged in a certain way which processes information, interacts with its environment, and is able to observe its own operation as well as external phenomena (which is all consciousness is about). Now, it may be that somehow quantum mechanics is involved in all of this, and that our existing computers, which are entirely deterministic and classical in their operation, cannot replicate this functionality, but if that’s so it simply means we’ll have to wait until quantum computing, which is already working in a rudimentary form in the laboratory, and is just a different way of arranging the quarks and electrons in a system, develops further.

He argues that while Bitcoin can be an efficient and secure means of processing transactions, it is unsuitable as a replacement for volatile fiat money because, unlike gold, the quantity of Bitcoin has an absolute limit: once the last coin is mined, the supply is capped forever. I don’t get it. It seems to me that this is a feature, not a bug. The supply of gold increases slowly as new gold is mined, and by pure coincidence the rate of increase in its supply has happened to approximate that of global economic growth. But still, the existing inventory of gold dwarfs new supply, so there isn’t much difference between a very slowly increasing supply and a static one. If you’re on a pure gold standard and economic growth is faster than the increase in the supply of gold, there will be gradual deflation because a given quantity of gold will buy more in the future. But so what? In a deflationary environment, interest rates will be low and it will be easy to fund new investment, since investors will receive money back which will be more valuable. With Bitcoin, once the entire supply is mined, supply will be static (actually, very slowly shrinking, as private keys are eventually lost, which is precisely like gold being consumed by industrial uses from which it is not reclaimed), but Bitcoin can be divided without limit (with minor and upward-compatible changes to the existing protocol). So, it really doesn’t matter if, in the greater solar system economy of the year 8537, a single Bitcoin is sufficient to buy Jupiter: transactions will simply be done in yocto-satoshis or whatever. In fact, Bitcoin is better in this regard than gold, which cannot be subdivided below the unit of one atom.
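
The arithmetic bears this out: the protocol caps supply at 21 million coins, each already divisible into 10^8 satoshis, and moving the decimal point further is, as noted above, an upward-compatible change. A back-of-the-envelope sketch in Python:

    CAP_BTC = 21_000_000
    SATOSHI_PER_BTC = 10 ** 8   # the protocol's current granularity

    print(f"{CAP_BTC * SATOSHI_PER_BTC:.3e}")             # 2.100e+15 satoshis, ever

    # Denominating a (hypothetical) far-future economy in yocto-satoshis
    # is just a bookkeeping change: 10**24 subunits per satoshi.
    print(f"{CAP_BTC * SATOSHI_PER_BTC * 10 ** 24:.3e}")  # 2.100e+39 units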

Gilder further argues, as he did in The Scandal of Money, that the proper dimensional unit for money is time, since that is the measure of what is required to create true wealth (as opposed to funny money created by governments or fantasy money “earned” in zero-sum speculation such as currency trading), and that existing cryptocurrencies do not meet this definition. I’ll take his word on the latter point; it’s his definition, after all, but his time theory of money is way too close to the Marxist labour theory of value to persuade me. That theory is trivially falsified by its prediction that more value is created in labour-intensive production of the same goods than by producing them in a more efficient manner. In fact, value, measured as profit, dramatically increases as the labour input to production is reduced. Over forty centuries of human history, the one thing in common among almost everything used for money (at least until our post-reality era) is scarcity: the supply is limited and it is difficult to increase it. The genius of Bitcoin and its underlying blockchain technology is that it solved the problem of how to make a digital good, which can be copied at zero cost, scarce, without requiring a central authority. That seems to meet the essential requirement to serve as money, regardless of how you define that term.

Gilder’s books have a good record for sketching the future of technology and identifying the trends which are contributing to it. He has been less successful picking winners and losers; I wouldn’t make investment decisions based on his evaluation of products and companies, but rather wait until the market sorts out those which will endure.

Gilder, George. Life after Google. Washington: Regnery Publishing, 2018. ISBN 978-1-62157-576-4.

Here is a talk by the author at the Blockstack Berlin 2018 conference which summarises the essentials of his thesis in just eleven minutes and ends with an exhortation to designers and builders of the new Internet to “tear down these walls” around the data centres which imprison our personal information.

This Uncommon Knowledge interview provides, in 48 minutes, a calmer and more in-depth exploration of why the Google world system must fail and what may replace it.
