SpaceX CRS-16 Landing Failure

Yesterday, 2018-12-05, SpaceX successfully launched a Dragon spacecraft from Cape Canaveral to deliver more than 2500 kg of cargo to the International Space Station (ISS).  The Dragon spacecraft (apart from its disposable “trunk” section) previously flew on the CRS-10 mission to the ISS in February 2017.  The Falcon 9 booster was new, making its first flight.  Here is a video of the launch, from 15 seconds before liftoff through deployment of the Dragon’s solar panels.

The primary mission, delivery of the Dragon to an orbit from which it could rendezvous with the ISS, was entirely successful.  SpaceX intended to recover the first stage booster back at the landing zone at Cape Canaveral for subsequent re-use (it is a “Block 5” model, designed to fly as many as ten times with minimal refurbishment between launches).  Recovery involves, after separation of the second stage, flipping the first stage around, firing three engines in a boost-back burn to cancel its downrange velocity and direct it back toward the Cape, a three-engine re-entry burn to reduce its velocity before it enters the dense atmosphere, and a single-engine landing burn to touch down.

Everything went well through the re-entry burn.  But as the first stage descended into the atmosphere, it began to roll out of control around its long axis.  The “grid fins”, which extend from the first stage to provide aerodynamic control, were not observed to move as they should have to counter the roll moment.  As the roll began to go all Kerbal, the feed from the first stage was cut in the SpaceX launch coverage in the video above.

In the post-launch press conference, Hans Koenigsmann, Vice President of Build and Flight Reliability at SpaceX, showed a video which picks up at the moment the feed was cut and continues through the first stage’s landing off the coast of Cape Canaveral.  He describes how the safety systems deliberately target a water landing and only shift the landing point to the landing pad (or drone ship) once confident everything is working as intended.

Here is a video taken from the shore which shows the final phase of the first stage’s braking and water landing.  Note how the spin was arrested at the last instant before touchdown.

In this video, Everyday Astronaut Tim Dodd explains the first stage recovery sequence and what appears to have gone wrong, based upon tweets from Elon Musk after the landing.

After splashing down, the first stage completed all of its safing procedures, allowing a recovery ship to approach it and tow it back to port.  SpaceX has said it will be inspected and, if judged undamaged by the water landing, may be re-flown on a SpaceX in-house mission (but not for a paying customer).

The most likely cause of the accident is failure of the hydraulic pump that powers the grid fins.  In the present design there is only one pump, so there is no redundancy: a single failure loses the stage.  This may be changed to include a second pump, so that a single pump failure can be tolerated (if each pump independently failed on, say, one flight in a hundred, both would be expected to fail on the same flight only once in ten thousand flights).


Saturday Night Science: Apollo 8 Fifty Years Ago

Apollo 8 Earthrise

As the tumultuous year 1968 drew to a close, NASA faced a serious problem with the Apollo project. The Apollo missions had been carefully planned to test the Saturn V booster rocket and spacecraft (Command/Service Module [CSM] and Lunar Module [LM]) in a series of increasingly ambitious missions, first in low Earth orbit (where an immediate return to Earth was possible in case of problems), then in an elliptical Earth orbit which would exercise the on-board guidance and navigation systems, followed by lunar orbit, and finally proceeding to the first manned lunar landing. The Saturn V had been tested in two unmanned “A” missions: Apollo 4 in November 1967 and Apollo 6 in April 1968. Apollo 5 was a “B” mission, launched on a smaller Saturn 1B booster in January 1968, to test an unmanned early model of the Lunar Module in low Earth orbit, primarily to verify the operation of its engines and separation of the descent and ascent stages. Apollo 7, launched in October 1968 on a Saturn 1B, was the first manned flight of the Command and Service modules and tested them in low Earth orbit for almost 11 days in a “C” mission.

Apollo 8 was planned to be the “D” mission, in which the Saturn V, in its first manned flight, would launch the Command/Service and Lunar modules into low Earth orbit, where the crew, commanded by Gemini veteran James McDivitt, would simulate the maneuvers of a lunar landing mission closer to home. McDivitt’s crew was trained and ready to go in December 1968. Unfortunately, the lunar module wasn’t. The lunar module scheduled for Apollo 8, LM-3, had been delivered to the Kennedy Space Center in June 1968, but was, to put things mildly, a mess. Testing at the Cape discovered more than a hundred serious defects, and by August it was clear that there was no way LM-3 would be ready for a flight in 1968. In fact, it would probably slip to February or March 1969. This, in turn, would push back the planned “E” mission, in which the crew of commander Frank Borman, command module pilot James Lovell, and lunar module pilot William Anders were training to test the Command/Service and Lunar modules in an elliptical Earth orbit venturing as far as 7400 km from the planet, by three months, from March to June 1969, delaying all subsequent planned missions and placing the goal of landing before the end of 1969 at risk.

But NASA was not just racing the clock—it was also racing the Soviet Union. Unlike Apollo, the Soviet space program was highly secretive, and NASA had to go on whatever scraps of information it could glean from Soviet publications, the intelligence community, and independent tracking of Soviet launches and spacecraft in flight. There were, in fact, two Soviet manned lunar programmes running in parallel. The first, internally called the Soyuz 7K-L1 but dubbed “Zond” for public consumption, used a modified version of the Soyuz spacecraft launched on a Proton booster and was intended to carry two cosmonauts on a fly-by mission around the Moon. The craft would fly out to the Moon, use its gravity to swing around the far side, and return to Earth. The Zond lacked the propulsion capability to enter lunar orbit. Still, success would allow the Soviets to claim the milestone of the first manned mission to the Moon. In September 1968, Zond 5 successfully followed this mission profile and safely returned a crew cabin containing tortoises, mealworms, flies, and plants to Earth after its loop around the Moon. A U.S. Navy destroyer observed recovery of the re-entry capsule in the Indian Ocean. Clearly, this was preparation for a manned mission which might occur on any upcoming lunar launch window.

(The Soviet manned lunar landing project was actually far behind Apollo, and would not launch its N1 booster on that first, disastrous, test flight until February 1969. But NASA did not know this in 1968.) Every slip in the Apollo program increased the probability of its being scooped so close to the finish line by a successful Zond flyby mission.

These were the circumstances in August 1968 when what amounted to a cabal of senior NASA managers including George Low, Chris Kraft, Bob Gilruth, and later joined by Wernher von Braun and chief astronaut Deke Slayton, began working on an alternative. They plotted in secret, beneath the radar and unbeknownst to NASA administrator Jim Webb and his deputy for manned space flight, George Mueller, who were both out of the country, attending an international conference in Vienna. What they were proposing was breathtaking in its ambition and risk. They envisioned taking Frank Borman’s crew, originally scheduled for Apollo 9, and putting them into an accelerated training program to launch on the Saturn V and Apollo spacecraft currently scheduled for Apollo 8. They would launch without a Lunar Module, and hence be unable to land on the Moon or test that spacecraft. The original idea was to perform a Zond-like flyby, but this was quickly revised to include going into orbit around the Moon, just as a landing mission would do. This would allow retiring the risk of many aspects of the full landing mission much earlier in the program than originally scheduled, and would also allow collection of precision data on the lunar gravitational field and high resolution photography of candidate landing sites to aid in planning subsequent missions. The lunar orbital mission would accomplish all the goals of the originally planned “E” mission and more, allowing that mission to be cancelled and therefore not requiring an additional booster and spacecraft.

But could it be done? There were a multitude of requirements, all daunting. Borman’s crew, training toward a launch in early 1969 on an Earth orbit mission, would have to complete training for the first lunar mission in just sixteen weeks. The Saturn V booster, which suffered multiple near-catastrophic engine failures in its second flight on Apollo 6, would have to be cleared for its first manned flight. Software for the on-board guidance computer and for Mission Control would have to be written, tested, debugged, and certified for a lunar mission many months earlier than previously scheduled. A flight plan for the lunar orbital mission would have to be written from scratch and then tested and trained in simulations with Mission Control and the astronauts in the loop. The decision to fly Borman’s crew instead of McDivitt’s was to avoid wasting the extensive training the latter crew had undergone in LM systems and operations by assigning them to a mission without an LM. McDivitt concurred with this choice: while it might be nice to be among the first humans to see the far side of the Moon with his own eyes, for a test pilot the highest responsibility and honour is to command the first flight of a new vehicle (the LM), and he would rather skip the Moon mission and fly later than lose that opportunity. If the plan were approved, Apollo 8 would become the lunar orbit mission and the Earth orbit test of the LM would be re-designated Apollo 9 and fly whenever the LM was ready.

While a successful lunar orbital mission on Apollo 8 would demonstrate many aspects of a full lunar landing mission, it would also involve formidable risks. The Saturn V, making only its third flight, was coming off a very bad outing in Apollo 6 whose failures might have injured the crew, damaged the spacecraft hardware, and precluded a successful mission to the Moon. While fixes for each of these problems had been implemented, they had never been tested in flight, and there was always the possibility of new problems not previously seen.

The Apollo Command and Service modules, which would take the crew to the Moon, had not yet flown a manned mission and would not until Apollo 7, scheduled for October 1968. Even if Apollo 7 were a complete success (which was considered a prerequisite for proceeding), Apollo 8 would be only the second manned flight of the Apollo spacecraft, and the crew would have to rely upon the functioning of its power generation, propulsion, and life support systems for a mission lasting six days. Unlike an Earth orbit mission, if something went wrong en route to or returning from the Moon, the crew could not simply come home immediately. The Service Propulsion System on the Service Module would have to work perfectly when leaving lunar orbit or the crew would be marooned forever or crash on the Moon, yet it would have flown on only one prior manned mission and had no backup (although the single engine did incorporate substantial redundancy in its design).

The spacecraft guidance, navigation, and control system, with its Apollo Guidance Computer hardware and software, had never been tested beyond Earth orbit, yet the crew would have to rely upon it to navigate to and from the Moon, including the critical engine burns to enter and leave lunar orbit while behind the Moon and out of touch with Mission Control.

The mission would go to the Moon without a Lunar Module. If a problem developed en route to the Moon which disabled the Service Module (as would happen to Apollo 13 in April 1970), there would be no LM to serve as a lifeboat and the crew would be doomed.

When the high-ranking conspirators presented their audacious plan to their bosses, the reaction was immediate. Manned spaceflight chief Mueller immediately said, “Can’t do that! That’s craziness!” His boss, administrator James Webb, said “You try to change the entire direction of the program while I’m out of the country?” Mutiny is a strong word, but this seemed to verge upon it. Still, Webb and Mueller agreed to meet with the lunar cabal in Houston on August 22. After a contentious meeting, Webb agreed to proceed with the plan and to present it to President Johnson, who was almost certain to approve it, having great confidence in Webb’s management of NASA. The mission was on.

It was only then that Borman and his crewmembers Lovell and Anders learned of their reassignment. While Anders was disappointed at the prospect of being the Lunar Module Pilot on a mission with no Lunar Module, the prospect of being on the first flight to the Moon and entrusted with observation and photography of lunar landing sites more than made up for it. They plunged into an accelerated training program to get ready for the mission.

NASA approached the mission with its usual “can-do” approach and public confidence, but everybody involved was acutely aware of the risks that were being taken. Susan Borman, Frank’s wife, privately asked Chris Kraft, director of Flight Operations and part of the group who advocated sending Apollo 8 to the Moon, with a reputation as a plain-talking straight shooter, “I really want to know what you think their chances are of coming home.” Kraft responded, “You really mean that, don’t you?” “Yes,” she replied, “and you know I do.” Kraft answered, “Okay. How’s fifty-fifty?” Those within the circle, including the crew, knew what they were biting off.

The launch was scheduled for December 21, 1968. Everybody would be working through Christmas, including the twelve ships and thousands of sailors in the recovery fleet, but lunar launch windows are set by the constraints of celestial mechanics, not human holidays. In November, the Soviets had flown Zond 6, and it had demonstrated the “double dip” re-entry trajectory required for human lunar missions. There were two system failures which killed the animal test subjects on board, but these were covered up and the mission heralded as a great success. From what NASA knew, it was entirely possible the next launch would be with cosmonauts bound for the Moon.

Space launches were exceptional public events in the 1960s, and the first flight of men to the Moon, coming just about a hundred years after Jules Verne envisioned three men setting out for the Moon from central Florida in a “cylindro-conical projectile” in De la terre à la lune (From the Earth to the Moon), similarly engaged the world. The launch of Apollo 8 attracted around a quarter of a million people to watch the spectacle in person, and hundreds of millions more watched on television in North America and around the globe, thanks to the newfangled technology of communication satellites.  Let’s tune in to CBS television and relive this singular event with Walter Cronkite.  (For one of those incomprehensible reasons in the Internet of Trash, this video, for which YouTube will happily generate an embed code, fails to embed in WordPress.  You’ll have to click the link below to view it.)

CBS coverage of the Apollo 8 launch

Now we step inside Mission Control and listen in on the Flight Director’s audio loop during the launch, illustrated with imagery and simulations.

The Saturn V performed almost flawlessly. During the second stage burn mild pogo oscillations began but, rather than progressing to the point where they almost tore the rocket apart as had happened on the previous Saturn V launch, von Braun’s team’s fixes kicked in and seconds later Borman reported, “Pogo’s damping out.” A few minutes later Apollo 8 was in Earth orbit.

Jim Lovell had sixteen days of spaceflight experience across two Gemini missions, one of them Gemini 7, where he endured almost two weeks in orbit with Frank Borman. Bill Anders was a rookie, on his first space flight. Now weightless, all three were experiencing a spacecraft nothing like the cramped Mercury and Gemini capsules, which you put on as much as boarded. The Apollo command module had an interior volume of 6.2 cubic metres (218 cubic feet, in the quaint way NASA reckons things), which may not seem like much for a crew of three but, in weightlessness, with every bit of space accessible and usable, felt quite roomy. There were five real windows, not the tiny portholes of Gemini, and plenty of space to move from one to another.

With all this roominess and mobility came potential hazards, some verging on slapstick, but, in space, serious nonetheless. NASA safety personnel had required the astronauts to wear life vests over their space suits during the launch, just in case the Saturn V malfunctioned and they ended up in the ocean. While moving around the cabin to get to the navigation station after reaching orbit, Lovell, who like the others hadn’t yet removed his life vest, snagged its activation tab on a strut within the cabin and the vest instantly inflated. Lovell looked ridiculous and the situation comical, but it was no laughing matter. The life vests were inflated with carbon dioxide which, if released in the cabin, would pollute their breathing air, and scrubbing it out would use up part of a CO₂ scrubber cartridge, of which they had a limited supply on board. Lovell finally figured out what to do. After being helped out of the vest, he took it down to the urine dump station in the lower equipment bay and vented it into a reservoir which could be dumped out into space. One problem solved, but in space you never know what the next surprise might be.

The astronauts wouldn’t have much time to admire the Earth through those big windows. Over Australia, just short of three hours after launch, they would re-light the engine on the third stage of the Saturn V for the “trans-lunar injection” (TLI) burn of 318 seconds, which would accelerate the spacecraft to just slightly less than escape velocity, raising its apogee so it would be captured by the Moon’s gravity. After housekeeping (presumably including the rest of the crew taking off those pesky life jackets, since there weren’t any wet oceans where they were going) and reconfiguring the spacecraft and its computer for the maneuver, they got the call from Houston, “You are go for TLI.” They were bound for the Moon.
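How big a push is that?  As a rough sanity check (my own back-of-the-envelope figures from textbook constants, not mission documentation), the vis-viva relation says escape speed is √2 times circular orbit speed, so the third stage only had to add about 3 km/s to what the parking orbit already provided:

    import math

    MU_EARTH = 3.986e14    # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6      # mean radius of the Earth, m
    r = R_EARTH + 185e3    # assuming a roughly 185 km parking orbit

    v_circ = math.sqrt(MU_EARTH / r)       # circular orbit speed
    v_esc = math.sqrt(2 * MU_EARTH / r)    # escape speed at the same radius

    print(f"circular: {v_circ / 1000:.2f} km/s")   # about 7.79 km/s
    print(f"escape:   {v_esc / 1000:.2f} km/s")    # about 11.02 km/s
    print(f"TLI adds: {(v_esc - v_circ) / 1000:.2f} km/s")  # about 3.2 km/s

Since the burn cut off just below escape speed, the actual increment was a bit less than the 3.2 km/s this sketch suggests.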

The third stage, which had failed to re-light on its last outing, worked as advertised this time, with a flawless burn. Its job was done; from here on the astronauts and spacecraft were on their own. The booster had placed them on a free-return trajectory. If they did nothing (apart from minor “trajectory correction maneuvers” easily accomplished by the spacecraft’s thrusters) they would fly out to the Moon, swing around its far side, and use its gravity to slingshot back to the Earth (as Lovell would do two years later when he commanded Apollo 13, although there the crew had to use the engine of the LM to get back onto a free-return trajectory after the accident).

Apollo 8 rapidly climbed out of the Earth’s gravity well, trading speed for altitude, and before long the astronauts beheld a spectacle no human eyes had glimpsed before: an entire hemisphere of Earth at once, floating in the inky black void. On board, there were other concerns: Frank Borman was puking his guts out and having difficulties with the other end of the tubing as well. Borman had logged more than six thousand flight hours in his career as a fighter and test pilot, most of it in high-performance jet aircraft, and fourteen days in space on Gemini 7 without any motion sickness. Many people feel queasy when they experience weightlessness the first time, but this was something entirely different and new in the American space program. And it was very worrisome. The astronauts discussed the problem on private tapes they could downlink to Mission Control without broadcasting to the public, and when NASA got around to playing the tapes, the chief flight surgeon, Dr. Charles Berry, became alarmed.

As he saw it, there were three possibilities: motion sickness, a virus of some kind, or radiation sickness. On its way to the Moon, Apollo 8 passed directly through the Van Allen radiation belts, spending two hours in this high radiation environment, the first humans to do so. The total radiation dose was estimated as roughly the same as one would receive from a chest X-ray, but the composition of the radiation was different and the exposure was over an extended time, so nobody could be sure it was safe. The fact that Lovell and Anders had experienced no symptoms argued against the radiation explanation. Berry concluded that a virus was the most probable cause and, based upon the mission rules, said, “I’m recommending that we consider canceling the mission.” The risk of proceeding with the commander unable to keep food down and possibly carrying a virus which the other astronauts might contract was too great in his opinion. This recommendation was passed up to the crew. Borman, usually calm and collected even by astronaut standards, exclaimed, “What? That is pure, unadulterated horseshit.” The mission would proceed, and within a day his stomach had settled.

This was the first case of space adaptation syndrome to afflict an American astronaut. (Apparently some Soviet cosmonauts had been affected, but this was covered up to preserve their image as invincible exemplars of the New Soviet Man.) It is now known to affect around a third of people experiencing weightlessness in environments large enough to move around, and spontaneously clears up in two to four (miserable) days.

The two most dramatic and critical events in Apollo 8’s voyage would occur on the far side of the Moon, with 3500 km of rock between the spacecraft and the Earth totally cutting off all communications. The crew would be on their own, aided by the computer and guidance system and calculations performed on the Earth and sent up before passing behind the Moon. The first would be lunar orbit insertion (LOI), scheduled for 69 hours and 8 minutes after launch. The big Service Propulsion System (SPS) engine (it was so big—twice as large as required for Apollo missions as flown—because it was designed to be able to launch the entire Apollo spacecraft from the Moon if a “direct ascent” mission mode had been selected) would burn for exactly four minutes and seven seconds to bend the spacecraft’s trajectory around the Moon into a closed orbit around that world.
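To get a feel for the size of that burn (again my own rough figuring, not a mission number), vis-viva gives a lower bound: compare the speed of a barely-captured, parabolic arrival at the orbit’s low point with the speed on the target ellipse at the same point:

    import math

    MU_MOON = 4.905e12    # lunar gravitational parameter, m^3/s^2
    R_MOON = 1.737e6      # mean radius of the Moon, m

    r_peri = R_MOON + 112e3    # low point of the initial orbit, m
    r_apo = R_MOON + 313e3     # high point of the initial orbit, m
    a = (r_peri + r_apo) / 2   # semi-major axis of the target ellipse

    v_parabolic = math.sqrt(2 * MU_MOON / r_peri)          # barely-captured arrival
    v_ellipse = math.sqrt(MU_MOON * (2 / r_peri - 1 / a))  # speed on the ellipse

    print(f"minimum LOI delta-v: {v_parabolic - v_ellipse:.0f} m/s")  # about 630 m/s

The real burn had to shed more than this, on the order of 0.9 km/s, because Apollo 8 crossed into the Moon’s sphere of influence with speed to spare and arrived faster than lunar escape speed.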

If the SPS failed to fire for the LOI burn, it would be a huge disappointment but survivable. Apollo 8 would simply continue on its free-return trajectory, swing around the Moon, and fall back to Earth where it would perform a normal re-entry and splashdown. But if the engine fired and cut off too soon, the spacecraft would be placed into an orbit which would not return them to Earth, marooning the crew in space to die when their supplies ran out. If it burned just a little too long, the spacecraft’s trajectory would intersect the surface of the Moon—lithobraking is no way to land on the Moon.

When the SPS engine shut down precisely on time and the computer confirmed the velocity change of the burn and orbital parameters, the three astronauts were elated, but they were the only people in the solar system aware of the success. Apollo 8 was still behind the Moon, cut off from communications. The first clue Mission Control would have of the success or failure of the burn would be when Apollo 8’s telemetry signal was reacquired as it swung around the limb of the Moon. If too early, it meant the burn had failed and the spacecraft was coming back to Earth; that moment passed with no signal. Now tension mounted as the clock ticked off the seconds to the time expected for a successful burn. If that time came and went with no word from Apollo 8, it would be a really bad day. Just on time, the telemetry signal locked up and Jim Lovell reported, “Go ahead, Houston, this is Apollo 8. Burn complete. Our orbit 169.1 by 60.5.” (Lovell was using NASA’s preferred measure of nautical miles; in proper units that is 313 by 112 km. The orbit would subsequently be circularised by another SPS burn to 112.7 by 114.7 km.) The Mission Control room erupted into an un-NASA-like pandemonium of cheering.

Apollo 8 would orbit the Moon ten times, spending twenty hours in a retrograde orbit with an inclination of 12 degrees to the lunar equator, which would allow it to perform high-resolution photography of candidate sites for early landing missions under lighting conditions similar to those expected at the time of landing. In addition, precision tracking of the spacecraft’s trajectory in lunar orbit would allow mapping of the Moon’s gravitational field, including the “mascons” which perturb the orbits of objects in low lunar orbits and would be important for longer duration Apollo orbital missions in the future.

During the mission, the crew were treated to amazing sights and, in particular, the dramatic difference between the near side, with its many flat “seas”, and the rugged highlands of the far side. Coming around the Moon they saw the spectacle of earthrise for the first time and, hastily grabbing a magazine of colour film and setting aside the planned photography schedule, Bill Anders snapped the photo of the Earth rising above the lunar horizon which became one of the most iconic photographs of the twentieth century. Here is a reconstruction of the moment that photo was taken.

On the ninth and next-to-last orbit, the crew conducted a second television transmission which was broadcast worldwide. It was Christmas Eve on much of the Earth, and, coming at the end of the chaotic, turbulent, and often tragic year of 1968, it was a magical event, remembered fondly by almost everybody who witnessed it and felt pride for what the human species had just accomplished.

You have probably heard this broadcast from the Moon, often with the audio overlaid on imagery of the Moon from later missions, with much higher resolution than was actually seen in that broadcast. Here, in three parts, is what people, including this scrivener, actually saw on their televisions that enchanted night. The famous reading from Genesis is in the third part. The astronauts’ description of the lunar surface is eerily similar to that in Jules Verne’s 1870 Autour de la lune (Around the Moon).

After the end of the broadcast, it was time to prepare for the next and absolutely crucial maneuver, also performed on the far side of the Moon: trans-Earth injection, or TEI. This would boost the spacecraft out of lunar orbit and send it back on a trajectory to Earth. This time the SPS engine had to work, and perfectly. If it failed to fire, the crew would be trapped in orbit around the Moon with no hope of rescue. If it cut off too soon or burned too long, or the spacecraft was pointed in the wrong direction when it fired, Apollo 8 would miss the Earth and orbit forever far from its home planet or come in too steep and burn up when it hit the atmosphere. Once again the tension rose to a high pitch in Mission Control as the clock counted down to the two fateful times: this time they’d hear from the spacecraft earlier if it was on its way home and later or not at all if things had gone tragically awry. Exactly when expected, the telemetry screens came to life and a second later Jim Lovell called, “Houston, Apollo 8. Please be informed there is a Santa Claus.”

Now it was just a matter of falling the 375,000 kilometres from the Moon, hitting the precise re-entry corridor in the Earth’s atmosphere, executing the intricate “double dip” re-entry trajectory, and splashing down near the aircraft carrier which would retrieve the Command Module and crew. Earlier unmanned tests gave confidence it would all work, but this was the first time men would be trying it.

There was some unexpected and embarrassing excitement on the way home. Mission Control had called up a new set of co-ordinates for the “barbecue roll” which the spacecraft executed to even out temperature. Lovell was asked to enter “verb 3723, noun 501” into the computer. But, weary and short on sleep, he fat-fingered the commands and entered “verb 37, noun 01”. This told the computer the spacecraft was back on the launch pad, pointing straight up, and it immediately slewed to what it thought was that orientation. Lovell quickly figured out what he’d done, “It was my goof”, but by this time he’d “lost the platform”: the stable reference the guidance system used to determine in which direction the spacecraft was pointing in space. He had to perform a manual alignment, taking sightings on a number of stars, to recover the correct orientation of the stable platform. This was completely unplanned but, as it happens, in doing so Lovell acquired experience that would prove valuable when he had to perform the same operation in much more dire circumstances on Apollo 13 after an explosion disabled the computer and guidance system in the Command Module. Here is the author of the book, Jeffrey Kluger, discussing Jim Lovell’s goof.
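For readers unfamiliar with the Apollo Guidance Computer’s interface: the crew keyed in commands as numeric “verb” (action) and “noun” (object) codes, and verb 37 really did mean “change major mode”, that is, run the program named by the code which follows. The dispatcher below is a toy illustration of mine, not AGC software, showing how such an interface turns a slip of the fingers into a mode change:

    # Toy model of a verb/noun console, loosely inspired by the AGC.
    # The dispatch logic and command table are invented for illustration;
    # only the meaning of verb 37 and program 01 reflects the real machine.

    PROGRAMS = {1: "P01: prelaunch platform initialisation"}

    def execute(verb: int, noun: int) -> str:
        if verb == 37:
            # Verb 37: change major mode; the next code names the program.
            return "running " + PROGRAMS.get(noun, f"P{noun:02d}")
        return f"verb {verb} applied to noun {noun}"

    print(execute(3723, 501))  # the intended entry: a routine operation
    print(execute(37, 1))      # the slip: P01 assumes you are on the pad

In flight, invoking the prelaunch program replaced the painstakingly maintained platform alignment with the launch-pad attitude, which is exactly the damage Lovell then had to repair with star sightings.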

The re-entry went completely as planned, flown entirely under computer control, with the spacecraft splashing into the Pacific Ocean just 6 km from the aircraft carrier Yorktown. But because the splashdown occurred before dawn, it was decided to wait until the sky brightened to recover the crew and spacecraft. Forty-three minutes after splashdown, divers from the Yorktown arrived at the scene, and forty-five minutes after that the crew was back on the ship. Apollo 8 was over, a total success. This milestone in the space race had been won definitively by the U.S., and shortly thereafter the Soviets abandoned their Zond circumlunar project, judging it an anticlimax and admission of defeat to fly by the Moon after the Americans had already successfully orbited it.

This is the official NASA contemporary documentary about Apollo 8.

Here is an evening with the Apollo 8 astronauts recorded at the National Air and Space Museum on 2008-11-13 to commemorate the fortieth anniversary of the flight.

This is a reunion of the Apollo 8 astronauts on 2009-04-23.

As of this writing, all of the crew of Apollo 8 are alive, and, in a business where divorce was common, remain married to the women they wed as young military officers.

Kluger, Jeffrey. Apollo 8. New York: Picador, 2017. ISBN 978-1-250-18251-7.



The Theory of Dark Suckers

10 Cents and I were discussing light bulbs on a late-night phone call, which brought to mind an old piece of text explaining why we should not call them light bulbs, but rather “dark suckers”. I have not the time to convert this old text to incorporate the newer LED type of dark suckers, but here it is in the older format.

Enjoy.



Seven Minutes of Terror

NASA Mars InSight landing

NASA’s Mars InSight lander is now approaching the Red Planet and will attempt to land later today.  Here is a timeline of events during the entry, descent, and landing (EDL) phase if everything goes as planned (adapted from the NASA/JPL “Landing Milestones” page).  All times are in Universal Time (UTC), which you can see in the title bar at the top of the Ratburger page.

  • 19:40 UTC – Separation from the cruise stage that carried the mission to Mars
  • 19:41 UTC – Turn to orient the spacecraft properly for atmospheric entry
  • 19:47 UTC – Atmospheric entry at about 19,800 kilometres per hour, beginning the entry, descent and landing phase
  • 19:49 UTC – Peak heating, with the protective heat shield reaching about 1,500 °C
  • 15 seconds later – Peak deceleration, with the intense heating causing possible temporary dropouts in radio signals
  • 19:51 UTC – Parachute deployment
  • 15 seconds later – Separation from the heat shield
  • 10 seconds later – Deployment of the lander’s three legs
  • 19:52 UTC – Activation of the radar that will sense the distance to the ground
  • 19:53 UTC – First acquisition of the radar signal
  • 20 seconds later – Separation from the back shell and parachute
  • 0.5 second later – The retrorockets, or descent engines, begin firing
  • 2.5 seconds later – Start of the “gravity turn” to get the lander into the proper orientation for landing
  • 22 seconds later – InSight slows from 27 km/h to a constant 8 km/h for its soft landing
  • 19:54 UTC – Expected touchdown on the surface of Mars
  • 20:01 UTC – “Beep” from InSight’s X-band radio directly back to Earth, indicating InSight is alive and functioning on the surface of Mars
  • No earlier than 20:04 UTC, but possibly the next day – First image from InSight on the surface of Mars
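
A rough average over this timeline shows what “seven minutes of terror” means in numbers (my own arithmetic from the figures above):

    v_entry = 19_800 / 3.6   # atmospheric entry speed, m/s (19,800 km/h)
    v_final = 8 / 3.6        # final constant descent speed, m/s (8 km/h)
    t_edl = 7 * 60           # entry at 19:47 to touchdown at 19:54, seconds

    avg_decel = (v_entry - v_final) / t_edl
    print(f"{avg_decel:.1f} m/s^2, about {avg_decel / 9.81:.1f} g")  # 13.1 m/s^2, 1.3 g

And that is only the average; the braking is concentrated around peak heating and peak deceleration near 19:49, which is why the timeline warns of possible radio dropouts there.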

Here is a description of the entry, descent, and landing phase.

You can watch live coverage of InSight’s arrival at Mars starting at 18:30 UTC.

Here is the Landing Day – 1 press briefing.

Two CubeSats called MarCO-A and B are shadowing InSight’s path.  They are the first CubeSats launched on an interplanetary trajectory.  If successful, they will provide a real-time communications link between the lander and Earth.  They are not, however, required for a successful landing.  If they fail, information on the landing may be delayed until it can be relayed by another spacecraft orbiting Mars.  After doing their job, the MarCO CubeSats will fly by Mars and continue to orbit the Sun for billions of years, just like Elon Musk’s roadster.  Here is a video about the MarCO mission.

Here are more details about MarCO.


Defense Distributed Declaration

In the ongoing litigation between Defense Distributed and state attorneys general over the distribution of three-dimensional models of firearms and components thereof over the Internet (which has been approved by all federal regulatory agencies), I was asked to submit an affidavit in support of the Defense Distributed case.  I have previously described this case here in my post “Code is Speech”.

Here is what I drafted which, after consultation with others whose efforts are much appreciated but who will remain unnamed, will be submitted into the public record.  This is exactly what was submitted, less my signature: why make it easy for identity thieves?  It was submitted, as is the custom in such filings, as gnarly monospaced text with no mark-up.  If it shows up in your browser with awkward line breaks, try making the browser window wider and it should get better.  If you’re on a tablet or mobile phone, try it when you get back to the desktop.

The opening and closing paragraphs are as prescribed in 28 U.S.C. § 1746 for an “Unsworn declaration under penalty of perjury” by a non-U.S. person.  This is also called a “self-authenticating affidavit”.

This may seem lukewarm to those accustomed to my usual firebrand rhetoric.  In this declaration, I only wanted to state things which I knew or believed based upon my own personal experience.  Consequently, I eschewed discussing the state of the art in additive manufacturing (I have never actually seen nor used an additive manufacturing machine) or the limitations of present-day machines (all of that may, and probably will, change in a few years).

Attorneys for Defense Distributed expect to lose in the original district court litigation and the Ninth Circuit, but the purpose of this declaration is to be used in higher court appeals where there is a less ideological and more fact-based scrutiny of cases.

Although I really had better things to do this week, I was glad to take the time to support the Defense Distributed case.  Even if you don’t care about guns, the attorneys general’s position in this case argues that computer-mediated speech (the transmission of files from computer to computer) is not speech protected by the First Amendment.  This is arguably the greatest assault on free speech since the adoption of that amendment.

I am privileged to have the opportunity to oppose it.

(This declaration is a public document which will become part of the record of the trial and eventual appeals.  I am disclosing nothing here which will not be available to those following the litigation.)

                DECLARATION OF JOHN WALKER

I, John Walker, pursuant to 28 U.S.C. § 1746 hereby declare and
say as follows:

    1.  I was a co-founder of Autodesk, Inc. (ADSK:NASDAQ),
        developer of the AutoCAD® computer-aided design
        software.  I was president, chairman, and chief
        executive officer from the incorporation of the company
        in April 1982 until November 1986, more than a year
        after its initial public stock offering in June 1985. I
        continued to serve as chairman of the board of directors
        until April 1988, after which I concentrated on software
        development.

    2.  Autodesk is the developer of the AutoCAD® software, one
        of the most widely-used computer-aided design and
        drafting software packages in the world.  AutoCAD allows
        creation of two- and three-dimensional models of designs
        and, with third-party products, their analysis and
        fabrication.

    3.  During the start-up phase of Autodesk, I was one of the
        three principal software developers of AutoCAD and wrote
        around one third of the source code of the initial
        release of the program.

    4.  Subsequently, I contributed to the development of
        three-dimensional extensions of the original AutoCAD
        drafting system, was lead developer on AutoShade[tm],
        which produced realistic renderings of three-dimensional
        models, and developed the prototype of integration of
        constructive solid geometry into AutoCAD, which was
        subsequently marketed as the AutoCAD Advanced Modeling
        Extension (AME).

    5.  I retired from Autodesk in 1994 and since have had no
        connection with the company other than as a shareholder
        with less than 5% ownership of the company's common
        stock.

    Design Versus Fabrication

    6.  From my experience at Autodesk, I became aware of the
        distinction between the design of an object and the
        fabrication of that object from the design.  For
        example, the patent drawings and written description in
        firearms patents provide sufficient information "as to
        enable any person skilled in the art to which it
        pertains, or with which it is most nearly connected, to
        make and use the same, and shall set forth the best mode
        contemplated by the inventor or joint inventor of
        carrying out the invention" [35 U.S.C. § 112 (a)].  But
        this is in no way a mechanical process.  One must
        interpret the design, choose materials suitable for each
        component, and then decide which manufacturing process
        (milling, stamping, turning, casting, etc.) is best to
        produce it, including steps such as heat-treating and
        the application of coatings.  This process is called
        "production planning", and it is a human skill that is
        required to turn a design, published in a patent
        description or elsewhere, into a physical realisation of
        the object described by that design.

    7.  A three-dimensional model of an object specifies its
        geometry but does not specify the materials from which
        it is fabricated, how the fabrication is done, or any
        special steps required (for example, annealing or other
        heat treating, coatings, etc.) before the component is
        assembled into the design.

    8.  Three-dimensional models of physical objects have many
        other applications than computer-aided manufacturing.
        Three-dimensional models are built to permit analysis of
        designs including structural strength and heat flow via
        the finite element method.  Models permit rendering of
        realistic graphic images for product visualisation,
        illustration, and the production of training and service
        documentation.  Models can be used in simulations to
        study the properties and operation of designs prior to
        physically manufacturing them. Models for finite element
        analysis have been built since the 1960s, decades before
        the first additive manufacturing machines were
        demonstrated in the 1980s.

    9.  Some three-dimensional models contain information which
        goes well beyond a geometric description of an object
        for manufacturing.  For example, it is common to produce
        "parametric" models which describe a family of objects
        which can be generated by varying a set of inputs
        ("parameters").  For example, a three-dimensional model
        of a shoe could be parameterised to generate left and
        right shoes of various sizes and widths, with
        information within the model automatically adjusting the
        dimensions of the components of the shoe accordingly.
        The model is thus not the rote expression of a
        particular manufactured object but rather a description
        of a potentially unlimited number of objects where the
        intent of the human designer, in setting the parameters,
        determines the precise geometry of an object built from
        the model.

   10.  A three-dimensional model often expresses relationships
        among components of the model which facilitate analysis
        and parametric design.  Such a model can be thought of
        like a spreadsheet, in which the value of cells are
        determined by their mathematical relationships to other
        cells, as opposed to a static table of numbers printed
        on paper.

    Additive Manufacturing ("3D Printing")

   11.  Additive manufacturing (often called, confusingly, "3D
        [for three-dimensional] printing") is a technology by
        which objects are built to the specifications of a
        three-dimensional computer model by a device which
        fabricates the object by adding material according to
        the design.  Most existing additive manufacturing
        devices can only use a single material in a production
        run, which limits the complexity of objects they can
        fabricate.

   12.  Additive manufacturing, thus, builds up a part by adding
        material, while subtractive manufacturing (for example,
        milling, turning, and drilling) starts with a block of
        solid material and cuts away until the desired part is
        left.  Many machine shops have tools of both kinds, and
        these tools may be computer controlled.

   13.  Additive manufacturing is an alternative to traditional
        kinds of manufacturing such as milling, turning, and
        cutting.  With few exceptions, any object which can be
        produced by additive manufacturing can be produced, from
        paper drawings or their electronic equivalent, with
        machine tools that date from the 19th century.  Additive
        manufacturing is simply another machine tool, and the
        choice of whether to use it or other tools is a matter
        of economics and the properties of the part being
        manufactured.

   14.  Over time, machine tools have become easier to use.  The
        introduction of computer numerical control (CNC) machine
        tools has dramatically reduced the manual labour
        required to manufacture parts from a design.  The
        computer-aided design industry, of which Autodesk is a
        part, has, over the last half-century, reduced the cost
        of going from concept to manufactured part, increasing
        the productivity and competitiveness of firms which
        adopt it and decreasing the cost of products they make.
        Additive manufacturing is one of a variety of CNC
        machine tools in use today.

   15.  It is in no sense true that additive manufacturing
        allows the production of functional objects such as
        firearms from design files without human intervention.
        Just as a human trying to fabricate a firearm from its
        description in a patent filing (available in electronic
        form, like the additive manufacturing model), one must
        choose the proper material, its treatment, and how it is
        assembled into the completed product.  Thus, an additive
        manufacturing file describing the geometry of a
        component of a firearm is no more an actual firearm than
        a patent drawing of a firearm (published worldwide in
        electronic form by the U.S. Patent and Trademark Office)
        is a firearm.

    Computer Code and Speech

   16.  Computer programs and data files are indistinguishable
        from speech.  A computer file, including a
        three-dimensional model for additive manufacturing, can
        be expressed as text which one can print in a newspaper
        or pamphlet, declaim from a soapbox, or distribute via
        other media.  It may be boring to those unacquainted
        with its idioms, but it is speech nonetheless.  There is
        no basis on which to claim that computer code is not
        subject to the same protections as verbal speech or
        printed material.

   17.  For example, the following is the definition of a unit
        cube in the STL language used to express models for
        many additive manufacturing devices.

            solid cube_corner
              facet normal 0.0 -1.0 0.0
                outer loop
                  vertex 0.0 0.0 0.0
                  vertex 1.0 0.0 0.0
                  vertex 0.0 0.0 1.0
                endloop
              endfacet
            endsolid

        This text can be written, read, and understood by a
        human familiar with the technology as well as by a
        computer.  It is entirely equivalent to a description of
        a unit cube written in English or another human
        language.  When read by a computer, it can be used for
        structural analysis, image rendering, simulation, and
        other applications as well as additive manufacturing.
        The fact that the STL language can be read by a computer
        in no way changes the fact that it is text, and thus,
        speech.

   18.  As an additional example, the following is an AutoCAD
        DXF[tm] file describing a two-dimensional line between
        the points (0, 0) and (1, 1), placed on layer 0 of a
        model.

            0
            SECTION
              2
            ENTITIES
              0
            LINE
              8
            0
             10
            0.0
            20
            0.0
            11
            1.0
            21
            1.0
              0
            ENDSEC
              0
            EOF

        Again, while perhaps not as easy to read as the STL file
        until a human has learned the structure of the file,
        this is clearly text, and thus speech.

   19.  It is common in computer programming and computer-aided
        design to consider computer code and data files written
        in textual form as simultaneously communicating to
        humans and computers.  Donald E. Knuth, professor
        emeritus of computer science at Stanford University and
        author of "The Art of Computer Programming", advised
        programmers:
            "Instead of imagining that our main task is to
            instruct a computer what to do, let us concentrate
            rather on explaining to human beings what we want a
            computer to do."[Knuth 1992]
        A design file, such as those illustrated above in
        paragraphs 17 and 18 is, similarly, a description of a
        design to a human as well as to a computer.  If it is a
        description of a physical object, a human machinist
        could use it to manufacture the object just as the
        object could be fabricated from the verbal description
        and drawings in a patent.

   20.  Computer code has long been considered text
        indistinguishable from any other form of speech in
        written form.  Many books, consisting in substantial
        part of computer code, have been published and are
        treated for the purpose of copyright and other
        intellectual property law like any other literary work.
        For example the "Numerical Recipes"[Press] series of
        books presents computer code in a variety of programming
        languages which implements fundamental algorithms for
        numerical computation.

    Conclusions

   21.  There is a clear distinction between the design of an
        artefact, whether expressed in paper drawings, a written
        description, or a digital geometric model, and an object
        manufactured from that design.

   22.  Manufacturing an artefact from a design, however
        expressed, is a process involving human judgement in
        selecting materials and the tools used to fabricate
        parts from it.

   23.  Additive manufacturing ("3D printing") is one of a
        variety of tools which can be used to fabricate parts.
        It is in no way qualitatively different from alternative
        tools such as milling machines, lathes, drills, saws,
        etc., all of which can be computer controlled.

   24.  A digital geometric model of an object is one form of
        description which can guide its fabrication.  As such,
        it is entirely equivalent to, for example, a dimensioned
        drawing (blueprint) from which a machinist works.

   25.  Digital geometric models of objects can be expressed
        as text which can be printed on paper or read aloud
        as well as stored and transmitted electronically.
        Thus they are speech.

    References
        [Knuth 1992]   Knuth, Donald E.  Literate Programming.
                       Stanford, CA: Center for the Study of
                       Language and Information, 1992.
                       ISBN: 978-0-937073-80-3.

        [Press]        Press, William H. et al.  Numerical Recipes.
                       Cambridge (UK): Cambridge University Press,
                       (various dates).
                       Programming language editions:
                           C++     978-0-521-88068-8
                           C       978-0-521-43108-8
                           Fortran 978-0-521-43064-7
                           Pascal  978-0-521-37516-0

I declare under penalty of perjury under the laws of the United
States of America that the foregoing is true and correct.

Executed on November 22, 2018

                                            (Signature)
                                 _______________________________
                                           John Walker

Ion-powered aircraft flies with no moving parts

Ladies and gentlemen, we truly live in a wonderful age: an age of inventions never imagined by anyone before our time.




Gender-Queer Drag Queen Says Quantum Mechanics Explains Unlimited Genders

Reporter Megan Fox found an interesting hook for discussing my recent physics publication.

“Atomic physics kind of backed off from the Newtonian assumption of an objective reality to describe how atomic physics works,” said Schantz. “Physicists were operating under the assumption that there was no such thing as cause and effect. There is a strong desire in philosophy to undercut reality. Much like Plato’s allegory of the cave, they want to say all we have is a distorted version of reality and we cannot know what is real. You can see it in physics, that it has fallen out of favor to question how we know what we know. Instead we get propagandizing.”

Check out “Gender-Queer Drag Queen Says Quantum Mechanics Explains Unlimited Genders.”



Tribbles

In the 1967 Star Trek episode “The Trouble with Tribbles”, Dr. McCoy announces his discovery: “Well, the nearest thing I can figure out is that they’re born pregnant—which seems to be quite a time-saver.”

I always thought this was one of the funniest lines in the episode.  It couldn’t really happen, though, could it?

Adactylidium mite

This is a mite of the genus Adactylidium.  It’s a lot smaller, and less furry and lovable, than a tribble, but it’s essentially born pregnant.  The mite is a parasite which feeds on the eggs of tiny insects called thrips.  The female eats the egg and develops five to eight female offspring and one male in her body.  The male impregnates the unborn females, who then eat their way out of the mother’s body.  They then seek new eggs upon which to feed.  The male neither feeds nor seeks new mates and dies after a few hours.  The females who are successful in finding a thrips egg live for about four days, at which point they are eaten alive by their own offspring.

Which seems to be quite a time-saver.



Saturday Night Science: The Forgotten Genius of Oliver Heaviside

“The Forgotten Genius of Oliver Heaviside” by Basil Mahon

In 1861, when Oliver Heaviside was eleven, his family, supported by his father’s irregular income as an engraver of woodblock illustrations for publications (an art beginning to be threatened by the advent of photography) and by a day school for girls operated by his mother in the family’s house, received a small legacy which allowed them to move to a better part of London and enroll Oliver in the prestigious Camden House School, where he ranked among the top of his class, taking thirteen subjects including Latin, English, mathematics, French, physics, and chemistry. His independent nature and iconoclastic views had already begun to manifest themselves: despite being an excellent student, he dismissed the teaching of Euclid’s geometry in mathematics and of English rules of grammar as worthless. He believed that both mathematics and language were best learned, as he wrote decades later, “observationally, descriptively, and experimentally.” These principles would guide him throughout his life.

At age fifteen he took the College of Preceptors examination, the equivalent of today’s A Levels. He was the youngest of the 538 candidates to take the examination, and scored fifth overall and first in the natural sciences. This would easily have qualified him for admission to university, but family finances ruled that out. He decided to study on his own at home for two years and then seek a job, perhaps in the burgeoning telegraph industry. He would receive no further formal education after the age of fifteen.

His mother’s elder sister had married Charles Wheatstone, a successful and wealthy scientist, inventor, and entrepreneur whose inventions include the concertina, the stereoscope, and the Playfair encryption cipher, and who made major contributions to the development of telegraphy. Wheatstone took an interest in his bright nephew, and guided his self-studies after leaving school, encouraging him to master the Morse code and the German and Danish languages. Oliver’s favourite destination was the library, which he later described as “a journey into strange lands to go a book-tasting”. He read the original works of Newton, Laplace, and other “stupendous names” and discovered that with sufficient diligence he could figure them out on his own.

At age eighteen, he took a job as an assistant to his older brother Arthur, well-established as a telegraph engineer in Newcastle. Shortly thereafter, probably on the recommendation of Wheatstone, he was hired by the just-formed Danish-Norwegian-English Telegraph Company as a telegraph operator at a salary of £150 per year (around £12000 in today’s money). The company was about to inaugurate a cable under the North Sea between England and Denmark, and Oliver set off for Jutland to take up his new post. Long distance telegraphy via undersea cables was the technological frontier at the time—the first successful transatlantic cable had only gone into service two years earlier, and connecting the continents into a world-wide web of rapid information transfer was the booming high-technology industry of the age. While the job of telegraph operator might seem a routine clerical task, the élite who operated the undersea cables worked in an environment akin to an electrical research laboratory, trying to wring the best performance (words per minute) from the finicky and unreliable technology.

Heaviside prospered in the new job, and after a merger was promoted to chief operator at a salary of £175 per year and transferred back to England, at Newcastle. At the time, undersea cables were unreliable. It was not uncommon for the signal on a cable to fade and then die completely, most often due to a short circuit caused by failure of the gutta-percha insulation between the copper conductor and the iron sheath surrounding it. When a cable failed, there was no alternative but to send out a ship which would find the cable with a grappling hook, haul it up to the surface, cut it, and test whether the short was to the east or west of the ship’s position (the cable would work in the good direction but fail in the direction containing the short). Then the cable would be re-spliced, dropped back to the bottom, and the ship would set off in the direction of the short to repeat the exercise over and over until, by a process similar to binary search, the location of the fault was narrowed down and that section of the cable replaced. This was time-consuming and potentially hazardous given the North Sea’s propensity for storms, and while the cable remained out of service it made no money for the telegraph company.

Heaviside, who continued his self-study and frequented the library when not at work, realised that, since the resistance and length of the functioning cable could easily be measured, it was possible to estimate the location of the short simply by measuring the resistance of the cable from each end after the fault appeared. He was able to cancel out the unknown resistance of the fault, obtaining a quadratic equation which could be solved for its location. The first time he applied this technique his bosses were sceptical, but when the ship was sent to the location he predicted, 114 miles from the English coast, they quickly found the short circuit.
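
To make the method concrete, here is a minimal sketch in Python (the cable constants and measured resistances below are invented for illustration, not historical figures). The fault is modelled as an unknown resistance to earth at an unknown distance; writing down the resistance seen from each end gives two equations which, just as Heaviside found, reduce to a quadratic in the fault’s position.

```python
import sympy as sp

x, f = sp.symbols('x f', positive=True)   # fault distance (miles), fault resistance (ohms)

r = 6.0         # conductor resistance per mile, ohms (assumed)
length = 360.0  # cable length, miles (assumed)
Ra = 1057.5     # resistance measured from end A with end B earthed (assumed)
Rb = 1764.9     # resistance measured from end B with end A earthed (assumed)

# Seen from end A, the current crosses x miles of conductor, then meets the
# fault resistance f in parallel with the remaining (length - x) miles of
# cable; end B is the mirror image.
eq_a = sp.Eq(Ra, r * x + f * r * (length - x) / (f + r * (length - x)))
eq_b = sp.Eq(Rb, r * (length - x) + f * r * x / (f + r * x))

dist, fault = sp.nsolve((eq_a, eq_b), (x, f), (length / 2, 300))
print(f"fault about {float(dist):.0f} miles from end A, "
      f"fault resistance about {float(fault):.0f} ohms")
```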

At the time, most workers in electricity had little use for mathematics: their trade journal, The Electrician (which would later publish much of Heaviside’s work) wrote in 1861, “In electricity there is seldom any need of mathematical or other abstractions; and although the use of formulæ may in some instances be a convenience, they may for all practical purposes be dispensed with.” Heaviside demurred: while sharing disdain for abstraction for its own sake, he valued mathematics as a powerful tool to understand the behaviour of electricity and attack problems of great practical importance, such as the ability to send multiple messages at once on the same telegraphic line and increase the transmission speed on long undersea cable links (while a skilled telegraph operator could send traffic at thirty words per minute on intercity land lines, the transatlantic cable could run no faster than eight words per minute). He plunged into calculus and differential equations, adding them to his intellectual armamentarium.

He began his own investigations and experiments and began publishing his results, first in English Mechanic, and then, in 1873, in the prestigious Philosophical Magazine, where his work drew the attention of two of the most eminent workers in electricity: William Thomson (later Lord Kelvin) and James Clerk Maxwell. Maxwell would go on to cite Heaviside’s paper on the Wheatstone Bridge in the second edition of his Treatise on Electricity and Magnetism, the foundation of the classical theory of electromagnetism, considered by many the greatest work of science since Newton’s Principia, and still in print today. Heady stuff, indeed, for a twenty-two-year-old telegraph operator who had never set foot inside an institution of higher education.

Heaviside regarded Maxwell’s Treatise as the path to understanding the mysteries of electricity he encountered in his practical work and vowed to master it. It would take him nine years and change his life. He would become one of the first and foremost of the “Maxwellians”, a small group including Heaviside, George FitzGerald, Heinrich Hertz, and Oliver Lodge, who fully grasped Maxwell’s abstract and highly mathematical theory (which, like many subsequent milestones in theoretical physics, predicted the results of experiments without providing a mechanism to explain them, in contrast to earlier concepts such as an “electric fluid” or William Thomson’s intricate mechanical models of the “luminiferous ether”) and built upon its foundations to discover and explain phenomena unknown to Maxwell (who died in 1879 at the age of just 48).

While pursuing his theoretical explorations and publishing papers, Heaviside tackled some of the main practical problems in telegraphy. Foremost among these was “duplex telegraphy”: sending messages in each direction simultaneously on a single telegraph wire. He invented a new technique and was even able to send two messages at the same time in both directions as fast as the operators could send them. This had the potential to boost the revenue from a single installed line by a factor of four. Oliver published his invention, and in doing so made an enemy of William Preece, a senior engineer at the Post Office telegraph department, whose own previously published duplex system (which would not work) was not acknowledged in Heaviside’s paper. This would start a feud between Heaviside and Preece which would last the rest of their lives and, on several occasions, thwart Heaviside’s ambition to have his work accepted by mainstream researchers. When he applied to join the Society of Telegraph Engineers, he was rejected on the grounds that membership was not open to “clerks”. He saw the hand of Preece and his cronies at the Post Office behind this and eventually turned to William Thomson to back his membership, which was finally granted.

By 1874, telegraphy had become a big business and the work was increasingly routine. In 1870, the Post Office had taken over all domestic telegraph service in Britain and, as government is wont to do, largely stifled innovation and experimentation. Even at privately-owned international carriers like Oliver’s employer, operators were no longer concerned with the technical aspects of the work but rather tending automated sending and receiving equipment. There was little interest in the kind of work Oliver wanted to do: exploring the new horizons opened up by Maxwell’s work. He decided it was time to move on. So, he quit his job, moved back in with his parents in London, and opted for a life as an independent, unaffiliated researcher, supporting himself purely by payments for his publications.

With the duplex problem solved, the largest problem remaining for telegraphy was the slow transmission speed on long lines, especially submarine cables. The advent of the telephone in the 1870s made the problem even more pressing. While a long line merely slowed the speed at which a telegraph message could be sent, with the telephone the voice became increasingly distorted as the line grew longer, to the point where, after around 100 miles, it was incomprehensible. Until this was understood and a solution found, telephone service would be restricted to local areas.

Many of the early workers in electricity thought of it as something like a fluid, where current flowed through a wire like water through a pipe. This approximation is more or less correct when current flow is constant, as in a direct current generator powering electric lights, but when current is varying a much more complex set of phenomena become manifest which require Maxwell’s theory to fully describe. Pioneers of telegraphy thought of their wires as sending direct current which was simply switched off and on by the sender’s key, but of course the transmission as a whole was a varying current, jumping back and forth between zero and full current at each make or break of the key contacts. When these transitions are modelled in Maxwell’s theory, one finds that, depending upon the physical properties of the transmission line (its resistance, inductance, capacitance, and leakage between the conductors) different frequencies propagate along the line at different speeds. The sharp on/off transitions in telegraphy can be thought of, by Fourier transform, as the sum of a wide band of frequencies, with the result that, when each propagates at a different speed, a short, sharp pulse sent by the key will, at the other end of the long line, be “smeared out” into an extended bump with a slow rise to a peak and then decay back to zero. Above a certain speed, adjacent dots and dashes will run into one another and the message will be undecipherable at the receiving end. This is why operators on the transatlantic cables had to send at the painfully slow speed of eight words per minute.
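
The smearing is easy to demonstrate numerically. The following toy model (not drawn from the book; the line length and dispersion relation are invented for the example) decomposes a sharp telegraph pulse into its Fourier components, propagates each frequency down the line at its own speed, and reassembles the signal at the far end.

```python
import numpy as np

n, dt = 4096, 1e-4                                # samples and sample spacing, seconds
t = np.arange(n) * dt
sent = ((t > 0.05) & (t < 0.10)).astype(float)    # one crisp telegraph "dash"

freq = np.fft.rfftfreq(n, dt)
spectrum = np.fft.rfft(sent)

# Made-up dispersion relation for an unloaded line: low frequencies travel
# more slowly than high ones (losses are ignored to keep the sketch short).
distance = 3.0e6                                  # line length in metres (assumed)
v = 2.0e8 / (1.0 + 50.0 / (freq + 5.0))           # phase velocity per frequency, m/s

# Each Fourier component arrives after its own delay; reassembling them gives
# the received waveform: the sharp pulse is smeared into a slow rise and decay.
received = np.fft.irfft(spectrum * np.exp(-2j * np.pi * freq * distance / v), n)

print("sent pulse:     width 0.050 s, peak 1.00")
half = received.max() / 2
span = t[received > half]
print(f"received pulse: width {span[-1] - span[0]:.3f} s, peak {received.max():.2f}")
```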

In telephony, it’s much worse because human speech is composed of a broad band of frequencies, and the frequencies involved (typically up to around 3400 cycles per second) are much higher than the off/on speeds in telegraphy. The smearing out or dispersion as frequencies are transmitted at different speeds results in distortion which renders the voice signal incomprehensible beyond a certain distance.

In the mid-1850s, during development of the first transatlantic cable, William Thomson had developed a theory called the “KR law” which predicted the transmission speed along a cable based upon its resistance and capacitance. Thomson was aware that other effects existed, but without Maxwell’s theory (which would not be published in its final form until 1873), he lacked the mathematical tools to analyse them. The KR theory, which produced results that predicted the behaviour of the transatlantic cable reasonably well, held out little hope for improvement: decreasing the resistance and capacitance of the cable would dramatically increase its cost per unit length.

Heaviside undertook to analyse what is now called the transmission line problem using the full Maxwell theory and, in 1878, published the general theory of propagation of alternating current through transmission lines, what are now called the telegrapher’s equations. Because he took resistance, capacitance, inductance, and leakage all into account and thus modelled both the electric and magnetic field created around the wire by the changing current, he showed that by balancing these four properties it was possible to design a transmission line which would transmit all frequencies at the same speed. In other words, this balanced transmission line would behave for alternating current (including the range of frequencies in a voice signal) just like a simple wire did for direct current: the signal would be attenuated (reduced in amplitude) with distance but not distorted.
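
In modern notation (not the notation Heaviside himself used), the telegrapher’s equations for the voltage V(x, t) and current I(x, t) on a line with series resistance R and inductance L, and shunt capacitance C and leakage conductance G, all per unit length, are:

```latex
\frac{\partial V}{\partial x} = -R\,I - L\,\frac{\partial I}{\partial t},
\qquad
\frac{\partial I}{\partial x} = -G\,V - C\,\frac{\partial V}{\partial t}
```

When the four constants satisfy Heaviside’s balance condition L/R = C/G, every frequency propagates at the same speed and is attenuated by the same factor: reduced in amplitude, but not distorted, exactly as described above.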

In an 1887 paper, he further showed that existing telegraph and telephone lines could be made nearly distortionless by adding loading coils to increase the inductance at points along the line (as long as the distance between adjacent coils is small compared to the wavelength of the highest frequency carried by the line). This got him into another battle with William Preece, whose incorrect theory attributed distortion to inductance and advocated minimising self-inductance in long lines. Preece moved to block publication of Heaviside’s work, with the result that the paper on distortionless telephony, published in The Electrician, was largely ignored. It was not until 1897 that AT&T in the United States commissioned a study of Heaviside’s work, leading to patents eventually worth millions. The credit, and financial reward, went to Professor Michael Pupin of Columbia University, who became another of Heaviside’s life-long enemies.
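
As a back-of-the-envelope illustration of sizing such coils (all the line constants below are assumed for the example, not taken from the book or from any historical line), one can compute the inductance needed to satisfy the distortionless condition and check the coil spacing against the shortest wavelength carried:

```python
import math

# Assumed per-kilometre constants for an unloaded telephone line (illustrative):
R = 6.0       # series resistance, ohms/km
L0 = 2.0e-3   # series inductance, H/km
C = 5.0e-9    # shunt capacitance, F/km
G = 0.5e-6    # leakage conductance, S/km

# Heaviside's distortionless condition L/R = C/G fixes the total inductance:
L_needed = R * C / G          # H/km
L_added = L_needed - L0
print(f"add about {L_added * 1e3:.0f} mH of loading per km")

# Discrete coils only approximate a distributed inductance if their spacing is
# small compared with the shortest wavelength the line must carry.
f_max = 3400.0                           # highest voice frequency, Hz
v = 1.0 / math.sqrt(L_needed * C)        # propagation speed on the loaded line, km/s
wavelength = v / f_max                   # km
print(f"shortest wavelength about {wavelength:.0f} km; "
      f"keep coils well under {wavelength / 10:.1f} km apart")
```

For what it’s worth, twentieth-century practice was of this order: coils of tens of millihenries spaced a mile or two apart on long telephone cables.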

You might wonder why such a seemingly simple result (which can be written in modern notation as the equation L/R = C/G), with such immediate technological utility, eluded so many people for so long (recall that the problem of slow transmission on the transatlantic cable had been observed since the 1850s). The reason is the complexity of Maxwell’s theory and the formidably difficult notation in which it was expressed. Oliver Heaviside spent nine years fully internalising the theory and its implications, and he was one of only a handful of people who had done so and, perhaps, the only one grounded in practical applications such as telegraphy and telephony. Concurrent with his work on transmission line theory, he invented the mathematical field of vector calculus and, in 1884, reformulated Maxwell’s original theory, which, written in modern notation less cumbersome than that employed by Maxwell, looks like:

Maxwell's original 20 equations

into the four famous vector equations we today think of as Maxwell’s.

Maxwell's equations: modern vector calculus representation
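
For reference, the four equations in Heaviside’s vector form (shown here in modern macroscopic notation) are:

```latex
\nabla \cdot \mathbf{D} = \rho, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}
```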

These are not only simpler, condensing twenty equations to just four, but provide (once you learn the notation and meanings of the variables) an intuitive sense for what is going on. This made, for the first time, Maxwell’s theory accessible to working physicists and engineers interested in getting the answer out rather than spending years studying an arcane theory. (Vector calculus was independently invented at the same time by the American J. Willard Gibbs. Heaviside and Gibbs both acknowledged the work of the other and there was no priority dispute. The notation we use today is that of Gibbs, but the mathematical content of the two formulations is essentially identical.)

And, during the same decade of the 1880s, Heaviside invented the operational calculus, a method of calculation which reduces the solution of complicated problems involving differential equations to simple algebra. Heaviside was able to solve so many problems which others couldn’t because he was using powerful computational tools they had not yet adopted. The situation was similar to that of Isaac Newton, who was effortlessly solving problems such as the brachistochrone using the calculus he’d invented while his contemporaries struggled with more cumbersome methods. Some of the things Heaviside did in the operational calculus, such as cancelling derivative signs in equations and taking the square root of a derivative sign, made rigorous mathematicians shudder but, hey, it worked, and that was good enough for Heaviside and the many engineers and applied mathematicians who adopted his methods. (In the 1920s, pure mathematicians used the theory of Laplace transforms to reformulate the operational calculus in a rigorous manner, but this was decades after Heaviside’s work and long after engineers were routinely using it in their calculations.)
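
The flavour of the method can be conveyed with a tiny sketch (a modern reconstruction, not Heaviside’s own notation): treat the derivative d/dt as an algebraic symbol p, solve for the response by ordinary algebra, and check the answer against a conventional differential-equation solution.

```python
import sympy as sp

t, a = sp.symbols('t a', positive=True)
y = sp.Function('y')

# A circuit-style problem: dy/dt + a*y = 1 for t > 0, with y(0) = 0.
# Operationally: (p + a) y = 1, so y = 1/(p + a) applied to the unit step,
# which Heaviside's expansion theorem evaluates as (1/a)(1 - exp(-a*t)).
operational = (1 / a) * (1 - sp.exp(-a * t))

# Check against a conventional solver:
ode = sp.Eq(y(t).diff(t) + a * y(t), 1)
conventional = sp.dsolve(ode, y(t), ics={y(0): 0}).rhs

print(sp.simplify(operational - conventional))   # prints 0
```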

Heaviside’s intuitive grasp of electromagnetism and powerful computational techniques placed him in the forefront of exploration of the field. He calculated the electric field of a moving charged particle and found it contracted in the direction of motion, foreshadowing the Lorentz-FitzGerald contraction which would figure in Einstein’s special relativity. In 1889 he computed the force on a point charge moving in an electromagnetic field, which is now called the Lorentz force after Hendrik Lorentz who independently discovered it six years later. He predicted that a charge moving faster than the speed of light in a medium (for example, glass or water) would emit a shock wave of electromagnetic radiation; in 1934 Pavel Cherenkov experimentally discovered the phenomenon, now called Cherenkov radiation, for which he won the Nobel Prize in 1958. In 1902, Heaviside applied his theory of transmission lines to the Earth as a whole and explained the propagation of radio waves over intercontinental distances as due to a transmission line formed by conductive seawater and a hypothetical conductive layer in the upper atmosphere dubbed the Heaviside layer. In 1924 Edward V. Appleton confirmed the existence of such a layer, the ionosphere, and won the Nobel prize in 1947 for the discovery.

Oliver Heaviside never won a Nobel Prize, although he was nominated for the physics prize in 1912. He shouldn’t have felt too bad, though, as other nominees passed over for the prize that year included Hendrik Lorentz, Ernst Mach, Max Planck, and Albert Einstein. (The winner that year was Gustaf Dalén, “for his invention of automatic regulators for use in conjunction with gas accumulators for illuminating lighthouses and buoys”—oh well.) He did receive Britain’s highest recognition for scientific achievement, being named a Fellow of the Royal Society in 1891. In 1921 he was the first recipient of the Faraday Medal from the Institution of Electrical Engineers.

Having never held a job between 1874 and his death in 1925, Heaviside lived on his irregular income from writing, the generosity of his family, and, from 1896 onward, a pension of £120 per year (less than his starting salary as a telegraph operator in 1868) from the Royal Society. He was a proud man and refused several other offers of money which he perceived as charity. He turned down an offer of compensation for his invention of loading coils from AT&T when they refused to acknowledge his sole responsibility for the invention. He never married, and in his later years became somewhat of a recluse; although he welcomed visits from other scientists, he hardly ever left his home in Torquay in Devon.

His impact on the physics of electromagnetism and the craft of electrical engineering can be seen in the list of terms he coined which are in everyday use: “admittance”, “conductance”, “electret”, “impedance”, “inductance”, “permeability”, “permittance”, “reluctance”, and “susceptance”. His work has never been out of print, and sparkles with his intuition, mathematical prowess, and wicked wit directed at those he considered pompous or lost in needless abstraction and rigour. He never sought the limelight and, among those upon whose work much of our present-day technology is founded, he is among the least known. But as long as electronic technology persists, it will stand as a monument to the life and work of Oliver Heaviside.

Mahon, Basil. The Forgotten Genius of Oliver Heaviside. Amherst, NY: Prometheus Books, 2017. ISBN 978-1-63388-331-4.

Regarding John’s recent post…

Regarding John’s recent post about daylight saving time, I did some more research and found an image from last year. It shows the unusual effort necessary to accomplish this task in some areas. I don’t doubt the same thing will happen again this year.



Tracking books

I didn’t want to step on John Walker’s post about citing books in posts, but the image of a barcode in conjunction with identifying books triggered some old memories.

One of the early projects (late ’60s, early ’70s) at my first “real” job was to help build a system which would identify paperback books by optical recognition of the cover.  My company had done several military pattern recognition projects, and we were approached by a New Jersey company to build two machines which would recognize paperback book covers.  The customer’s business model was based on the need to sort out the book returns from retailers.  They did not trust the retailers to provide a legitimate count, so the existing system was to ship the books (by boat) to Puerto Rico, where cheap labor would do the counting.  The new company would strip the covers off the books and then run them through the machines we were building, which would provide a more timely accounting of the different titles returned.  The covers were stripped off partly to make handling them easier and partly because it was more expensive to get whole books back into circulation at a retailer who would sell them than simply to print and ship new ones.  A side effect was that we could get all the paperbacks we wanted (without covers, of course).

In one of our first meetings to discuss the project, we engineers (obviously not marketing types) tried to un-sell the recognition project by suggesting they just put a barcode on the books.  We were told in no uncertain terms that “The American public will never put up with a barcode on a retail product.”  (I just checked, and the first barcodes were put on Wrigley gum in June of 1974, so they got a couple of years before competition hit.)

One thing I learned from that project was that the problem as stated (“recognize paperback book covers”) was only part of the real business solution.  After we were done, my mentor left us and went up to New Jersey to work with the customer to tie the recognition system into an accounting and billing system.  There is always a bigger picture.

One other thing about the project: the book cover scanner was to be built so that the covers were shredded as soon as they were recognized.  They didn’t trust the operators not to feed them through again to pad the count.


Electromagnetic Discovery May Demystify Quantum Mechanics

Here’s a press release from Q-Track on my discovery and publication… Hans

Physicists have long been troubled by the paradoxes and contradictions of quantum mechanics. Yesterday, a possible step forward appeared in the Philosophical Transactions of the Royal Society A. In the paper, “Energy velocity and reactive fields” [pay wall, free preprint], physicist Hans G. Schantz presents a novel way of looking at electromagnetics that shows the deep tie between electromagnetics and the pilot wave interpretation of quantum mechanics.

Schantz offers a solution to wave-particle duality by arguing that electromagnetic fields and energy are distinct phenomena instead of treating them as two aspects of the same thing. “Fields guide energy” in Schantz’s view. “As waves interfere, they guide energy along paths that may be substantially different from the trajectories of the waves themselves.” Schantz’s entirely classical perspective appears remarkably similar to the “pilot-wave” theory of quantum mechanics.

Schantz’s approach to electromagnetic theory focuses on the balance between electric and magnetic energy. When there are equal amounts of electric and magnetic energy, energy moves at the speed of light. As the ratio shifts away from equal balance, energy slows down, coming to rest in the limit of electrostatic or magnetostatic fields. From this observation, Schantz derives a way to quantify the state of the electromagnetic field on a continuum between static and radiation fields, and ties this directly to the energy velocity.
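
A standard textbook example (ordinary classical electromagnetics, not taken from Schantz’s paper) illustrates the idea: for the superposition of two counter-propagating plane waves, the time-averaged energy velocity v = S/u runs from the speed of light for a pure travelling wave, where electric and magnetic energy are in balance, down to zero for a pure standing wave.

```python
c = 299_792_458.0   # speed of light, m/s
eta = 376.73        # impedance of free space, ohms

def energy_velocity(A, B):
    """Time-averaged energy velocity S/u for counter-propagating plane waves
    with electric-field amplitudes A (forward) and B (backward)."""
    S = (A**2 - B**2) / (2 * eta)        # mean Poynting flux, W/m^2
    u = (A**2 + B**2) / (2 * eta * c)    # mean energy density, J/m^3
    return S / u

for B in (0.0, 0.5, 0.9, 1.0):
    print(f"B/A = {B:.1f}  ->  v/c = {energy_velocity(1.0, B) / c:+.3f}")
```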

“The fascinating result is that fields guide energy in a way exactly analogous to the way in which pilot waves guide particles in the Bohm-deBroglie theory,” Schantz explains. “Rather than an ad hoc approach to explain away the contradictions of quantum mechanics, pilot wave theory appears to be the natural application of classical electromagnetic ideas in the quantum realm.”

His solution to the “two slit” experiment that has perplexed generations of physicists?

“Fields behave like waves. When they interact with the two slits, they generate an interference pattern. The interference pattern guides a photon along a path to a point on the screen. It’s not the photon interfering with itself. It’s the interfering waves guiding the photon.”

So which slit did the photon pass through?

“If the photon ends up on the left hand side of the screen, it went through the left slit. If it ends up on the right hand side of the screen, it went through the right slit. It really is that simple.”

Schantz applied these electromagnetic ideas to understand and explain how antennas work in his textbook, The Art and Science of Ultrawideband Antennas (Artech House 2015). He’s also co-founder and CTO of Q-Track Corporation, a company that applies near-field wireless to the challenging problem of indoor location. “There are things you can do with low-frequency long-wavelength signals that simply aren’t possible with conventional wireless systems,” Schantz explains. “Understanding how static or reactive energy transforms into radiation has direct applications to antenna design as well as to near-field wireless systems.”

Schantz chose an unconventional way of popularizing his ideas. “I was amazed that my electromagnetic perspective was not discovered and adopted over a hundred years ago. It was as if someone had deliberately suppressed the discovery, so I undertook to write a science fiction series based on that premise.” Schantz’s Hidden Truth series debuted in 2016, and he released the third volume in the series, The Brave and the Bold, in October.

Schantz’s next project is a popular treatment of his physics ideas. Edited by L. Jagi Lamplighter Wright, Schantz’s book Fields: The Once and Future Theory of Everything will appear in 2019.


An interesting video…. (from Pangea to Today)

An interesting video, granted it’s based on a Christian point of view, but still interesting. It seems they, the makers of this clip, have all their ducks in a row.

Noah’s Flood and Catastrophic Plate Tectonics (from Pangea to Today)

Anyone want to take it apart?



Standards II (revisited)

OK, the last post about standards drifted way off topic, or so it seemed to some. I tried to get a screen grab of an interview with the owner as seen on FOX News. Since I could not get a direct link to the clip, I grabbed it and reduced it in size to post. Unfortunately, the video clip is still too large even after I reduced the resolution by 50%, so here is the audio from the clip. The video just included stock footage that many have seen before. The point is that he took the effort to exceed standards: deeper pilings, special windows, and accepting the fact that the first floor would be swept away.

