## Saturday Night Science: Einstein’s Unfinished Revolution

In the closing years of the nineteenth century, one of those nagging little discrepancies vexing physicists was the behaviour of the photoelectric effect. Originally discovered in 1887, the phenomenon causes certain metals, when illuminated by light, to absorb the light and emit electrons. The perplexing point was that there was a maximum wavelength (or, equivalently, a minimum frequency) of light for electron emission: at longer wavelengths, no electrons would be emitted at all, regardless of the intensity of the beam of light. For example, a certain metal might emit electrons when illuminated by green, blue, violet, and ultraviolet light, with the rate of electron emission proportional to the light intensity, but red or yellow light, regardless of how intense, would not result in a single electron being emitted.

This didn’t make any sense. According to Maxwell’s wave theory of light, which was almost universally accepted and had passed stringent experimental tests, the energy of light depended upon the amplitude of the wave (its intensity), not the wavelength (or, reciprocally, its frequency). And yet the photoelectric effect didn’t behave that way—it appeared that whatever was causing the electrons to be emitted depended on the wavelength of the light, and what’s more, there was a sharp cut-off below which no electrons would be emitted at all.

In 1905, in one of his “miracle year” papers, “On a Heuristic Viewpoint Concerning the Production and Transformation of Light”, Albert Einstein suggested a solution to the puzzle. He argued that light did not propagate as a wave at all, but rather in discrete particles, or “quanta”, later named “photons”, whose energy was proportional to the frequency of the light (and hence inversely proportional to its wavelength). This neatly explained the behaviour of the photoelectric effect. Light with a wavelength longer than the cut-off point was transmitted by photons whose energy was too low to knock electrons out of the metal they illuminated, while photons of shorter wavelengths carried enough energy to liberate electrons. The intensity of the light was a measure of the number of photons in the beam, unrelated to the energy of the individual photons.
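Einstein’s relation makes the threshold behaviour easy to see numerically. Here is a minimal sketch; the 2.3 eV work function is an illustrative value, roughly that of sodium, not a figure from the book. A photon’s energy hc/λ either exceeds the metal’s work function or no electron comes out, no matter how intense the beam.

```python
# Photoelectric threshold: a photon ejects an electron only if its
# energy E = h*c/wavelength exceeds the metal's work function.
# Illustrative values; 2.3 eV is roughly the work function of sodium.

H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon, in electronvolts."""
    return H * C / (wavelength_nm * 1e-9) / EV

def emits_electron(wavelength_nm, work_function_ev):
    """True if light of this wavelength can liberate an electron,
    regardless of how many photons (intensity) arrive."""
    return photon_energy_ev(wavelength_nm) > work_function_ev

for colour, wl in [("red", 650), ("green", 530), ("violet", 410)]:
    print(colour, round(photon_energy_ev(wl), 2), "eV:",
          "emits" if emits_electron(wl, 2.3) else "no emission")
```

Red light (about 1.9 eV per photon) falls short of the threshold no matter the intensity, while green and violet photons individually carry enough energy.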

This paper became one of the cornerstones of the revolutionary theory of quantum mechanics, the complete working out of which occupied much of the twentieth century. Quantum mechanics underlies the standard model of particle physics, which is arguably the most thoroughly tested theory in the history of physics, with no experiment showing results which contradict its predictions since it was formulated in the 1970s. Quantum mechanics is necessary to explain the operation of the electronic and optoelectronic devices upon which our modern computing and communication infrastructure is built, and describes every aspect of physical chemistry.

But quantum mechanics is weird. Consider: if light consists of little particles, like bullets, then why when you shine a beam of light on a barrier with two slits do you get an interference pattern with bright and dark bands precisely as you get with, say, water waves? And if you send a single photon at a time and try to measure which slit it went through, you find it always went through one or the other, but then the interference pattern goes away. It seems like whether the photon behaves as a wave or a particle depends upon how you look at it. If you have an hour, here is grand master explainer Richard Feynman (who won his own Nobel Prize in 1965 for reconciling the quantum mechanical theory of light and the electron with Einstein’s special relativity) exploring how profoundly weird the double slit experiment is.
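The bright and dark bands follow from simple wave arithmetic: the two paths through the slits differ in length, and the intensity goes as cos² of half the resulting phase difference (ignoring the single-slit envelope). A sketch with illustrative numbers (530 nm green light, 0.1 mm slit spacing, neither from the book):

```python
import math

# Two-slit interference: intensity on the screen varies as
# cos^2(pi * d * sin(theta) / wavelength). Bright bands appear where
# the path difference is a whole number of wavelengths, dark bands
# where it is a half-integer number.

WAVELENGTH = 530e-9   # metres (green light)
SLIT_SPACING = 1e-4   # metres (0.1 mm between slits)

def intensity(theta):
    """Relative intensity at angle theta (single-slit envelope ignored)."""
    phase = math.pi * SLIT_SPACING * math.sin(theta) / WAVELENGTH
    return math.cos(phase) ** 2

central = intensity(0.0)  # central bright band: zero path difference
first_dark = intensity(math.asin(0.5 * WAVELENGTH / SLIT_SPACING))
first_bright = intensity(math.asin(WAVELENGTH / SLIT_SPACING))
print(central, first_dark, first_bright)
```

The pattern is what any wave produces; the weirdness is that it persists when photons are sent one at a time, yet vanishes the moment you measure which slit each photon took.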

Fundamentally, quantum mechanics seems to violate the principle of realism, which the author defines as follows.

> The belief that there is an objective physical world whose properties are independent of what human beings know or which experiments we choose to do. Realists also believe that there is no obstacle in principle to our obtaining complete knowledge of this world.

This has been part of the scientific worldview since antiquity and yet quantum mechanics, confirmed by innumerable experiments, appears to indicate we must abandon it. Quantum mechanics says that what you observe depends on what you choose to measure; that there is an absolute limit upon the precision with which you can measure pairs of properties (for example position and momentum) set by the uncertainty principle; that it isn’t possible to predict the outcome of experiments but only the probability among a variety of outcomes; and that particles which are widely separated in space and time but which have interacted in the past are entangled and display correlations which no classical mechanistic theory can explain—Einstein called the latter “spooky action at a distance”. Once again, all of these effects have been confirmed by precision experiments and are not fairy castles erected by theorists.
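To put a number on the uncertainty principle: it states Δx·Δp ≥ ħ/2, so the more tightly a particle’s position is pinned down, the larger the spread in its momentum must be. A minimal sketch (the atom-sized confinement length is an illustrative choice):

```python
# Heisenberg bound: delta_x * delta_p >= hbar / 2.  Confining an
# electron to an atom-sized region forces a large momentum spread.

HBAR = 1.055e-34        # reduced Planck constant, J*s
M_ELECTRON = 9.109e-31  # electron mass, kg

delta_x = 1e-10                 # metres, about one atomic diameter
delta_p = HBAR / (2 * delta_x)  # minimum possible momentum uncertainty
delta_v = delta_p / M_ELECTRON  # corresponding velocity spread
print(f"minimum velocity spread: {delta_v:.2e} m/s")
```

An electron confined to an atom cannot have a velocity known better than a few hundred kilometres per second; this is a hard limit set by the theory, not by the quality of our instruments.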

From the formulation of the modern quantum theory in the 1920s (often called the Copenhagen interpretation, after the location of the institute where one of its architects, Niels Bohr, worked), a number of eminent physicists, including Einstein and Louis de Broglie, were deeply disturbed by its apparent jettisoning of the principle of realism in favour of what they considered a quasi-mystical view in which the act of “measurement” (whatever that means) caused a physical change (wave function collapse) in the state of a system. This seemed to imply that the photon, or electron, or anything else, did not have a definite position until it interacted with something else: until then it was just an immaterial wave function which filled all of space and (when squared) gave the probability of finding the particle at each location.

In 1927, de Broglie proposed a pilot wave theory as a realist alternative to the Copenhagen interpretation. In the pilot wave theory there is a real particle, which has a definite position and momentum at all times. It is guided in its motion by a pilot wave which fills all of space and is defined by the medium through which it propagates. We cannot predict the exact outcome of measuring the particle because we cannot have infinitely precise knowledge of its initial position and momentum, but in principle these quantities exist and are real. There is no “measurement problem” because we always detect the particle, not the pilot wave which guides it. In its original formulation, the pilot wave theory exactly reproduced the predictions of the Copenhagen formulation, and hence was not a competing theory but rather an alternative interpretation of the equations of quantum mechanics. Many physicists who preferred to “shut up and calculate” considered interpretations a pointless exercise in phil-oss-o-phy, but de Broglie and Einstein placed great value on retaining the principle of realism as a cornerstone of theoretical physics. Lee Smolin sketches an alternative reality in which “all the bright, ambitious students flocked to Paris in the 1930s to follow de Broglie, and wrote textbooks on pilot wave theory, while Bohr became a footnote, disparaged for the obscurity of his unnecessary philosophy”. But that wasn’t what happened: among those few physicists who pondered what the equations meant about how the world really works, the Copenhagen view remained dominant.

In the 1950s, David Bohm independently invented a pilot wave theory, which he developed into a complete theory of nonrelativistic quantum mechanics. To this day, a small community of “Bohmians” continue to explore the implications of his theory, working on extending it to be compatible with special relativity. From a philosophical standpoint the de Broglie-Bohm theory is unsatisfying in that it involves a pilot wave which guides a particle, but upon which the particle does not act. This is an “unmoved mover”, which all of our experience of physics argues does not exist. For example, Newton’s third law of motion holds that every action has an equal and opposite reaction, and in Einstein’s general relativity, spacetime tells mass-energy how to move while mass-energy tells spacetime how to curve. It seems odd that the pilot wave could be immune from influence of the particle it guides. A few physicists, such as Jack Sarfatti, have proposed “post-quantum” extensions to Bohm’s theory in which there is back-reaction from the particle on the pilot wave, and argue that this phenomenon might be accessible to experimental tests which would distinguish post-quantum phenomena from the predictions of orthodox quantum mechanics. A few non-physicist crackpots have suggested these phenomena might even explain flying saucers.

Moving on from pilot wave theory, the author explores other attempts to create a realist interpretation of quantum mechanics: objective collapse of the wave function, as in the Penrose interpretation; the many worlds interpretation (which Smolin calls “magical realism”); and decoherence of the wavefunction due to interaction with the environment. He rejects all of them as unsatisfying, because they fail to address glaring lacunæ in quantum theory which are apparent from its very equations.

The twentieth century gave us two pillars of theoretical physics: quantum mechanics and general relativity—Einstein’s geometric theory of gravitation. Both have been tested to great precision, but they are fundamentally incompatible with one another. Quantum mechanics describes the very small: elementary particles, atoms, and molecules. General relativity describes the very large: stars, planets, galaxies, black holes, and the universe as a whole. In the middle, where we live our lives, neither much affects the things we observe, which is why their predictions seem counter-intuitive to us. But when you try to put the two theories together, to create a theory of quantum gravity, the pieces don’t fit. Quantum mechanics assumes there is a universal clock which ticks at the same rate everywhere in the universe. But general relativity tells us this isn’t so: a simple experiment shows that a clock runs slower when it’s in a gravitational field. Quantum mechanics says that it isn’t possible to determine the position of a particle without its interacting with another particle, but general relativity requires the knowledge of precise positions of particles to determine how spacetime curves and governs the trajectories of other particles. There are a multitude of more gnarly and technical problems in what Stephen Hawking called “consummating the fiery marriage between quantum mechanics and general relativity”. In particular, the equations of quantum mechanics are linear, which means you can add together two valid solutions and get another valid solution, while general relativity is nonlinear, where trying to disentangle the relationships of parts of the systems quickly goes pear-shaped and many of the mathematical tools physicists use to understand systems (in particular, perturbation theory) blow up in their faces.
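The gravitational slowing of clocks mentioned above is easy to quantify with the Schwarzschild factor √(1 − 2GM/rc²). A rough sketch, treating the Earth as a non-rotating sphere and ignoring velocity effects:

```python
import math

# Gravitational time dilation (Schwarzschild): a clock at radius r from
# mass M runs slow, relative to a distant clock, by the factor
# sqrt(1 - 2*G*M / (r * c^2)).

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def clock_rate(mass_kg, radius_m):
    """Ticking rate of a clock at radius_m, relative to one far away."""
    return math.sqrt(1 - 2 * G * mass_kg / (radius_m * C ** 2))

# A clock on Earth's surface versus one far from Earth:
rate = clock_rate(5.972e24, 6.371e6)
lag_per_day_us = (1 - rate) * 86400 * 1e6
print(f"surface clock loses about {lag_per_day_us:.0f} microseconds per day")
```

The effect, tens of microseconds per day at the Earth’s surface, is routinely confirmed: satellite navigation systems would drift uselessly within hours if their clocks were not corrected for it.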

Ultimately, Smolin argues, giving up realism means abandoning what science is all about: figuring out what is really going on. The incompatibility of quantum mechanics and general relativity provides clues that there may be a deeper theory to which both are approximations that work in certain domains (just as Newtonian mechanics is an approximation of special relativity which works when velocities are much less than the speed of light). Many people have tried and failed to “quantise general relativity”. Smolin suggests the problem is that quantum theory itself is incomplete: there is a deeper theory, a realistic one, to which our existing theory is only an approximation which works in the present universe where spacetime is nearly flat. He suggests that candidate theories must contain a number of fundamental principles. They must be background independent, like general relativity, and discard such concepts as fixed space and a universal clock, making both dynamic and defined based upon the components of a system. Everything must be relational: there is no absolute space or time; everything is defined in relation to something else. Everything must have a cause, and there must be a chain of causation for every event which traces back to its causes; these causes flow only in one direction. There is reciprocity: any object which acts upon another object is acted upon by that object. Finally, there is the “identity of indiscernibles”: two objects which have exactly the same properties are the same object (this is a little tricky, but the idea is that if you cannot in some way distinguish two objects [for example, by their having different causes in their history], then they are the same object).

This argues that what we perceive, at the human scale and even in our particle physics experiments, as space and time are actually emergent properties of something deeper which was manifest in the early universe and in extreme conditions such as gravitational collapse to black holes, but hidden in the bland conditions which permit us to exist. Further, what we believe to be “laws” and “constants” may simply be precedents established by the universe as it tries to figure out how to handle novel circumstances. Just as complex systems like markets and evolution in ecosystems have rules that change based upon events within them, maybe the universe is “making it up as it goes along”, and in the early universe, far from today’s near-equilibrium, wild and crazy things happened which may explain some of the puzzling properties of the universe we observe today.

This needn’t forever remain in the realm of speculation. It is easy, for example, to synthesise a protein which has never existed before in the universe (it’s an example of a combinatorial explosion). You might try, for example, to crystallise this novel protein and see how difficult it is, then try again later and see if the universe has learned how to do it. To be extra careful, do it first on the International Space Station and then in a lab on the Earth. I suggested this almost twenty years ago as a test of Rupert Sheldrake’s theory of morphic resonance, but (although doubtless Smolin would shun me for associating his theory with that one) it might produce interesting results.

The book concludes with a very personal look at the challenges facing a working scientist who has concluded that the paradigm accepted by the overwhelming majority of his or her peers is incomplete and cannot be remedied by incremental changes based upon the existing foundation. He notes:

> There is no more reasonable bet than that our current knowledge is incomplete. In every era of the past our knowledge was incomplete; why should our period be any different? Certainly the puzzles we face are at least as formidable as any in the past. But almost nobody bets this way. This puzzles me.

Well, it doesn’t puzzle me. Ever since I learned classical economics, I have looked first at the incentives in a system. In academia today, there is huge risk and little reward in getting out a new notebook, looking at the first blank page, and striking out in an entirely new direction. Maybe if you were a twenty-something patent examiner in a small city in Switzerland in 1905, with no academic career or reputation at risk, you might go back to first principles and overturn space, time, and the wave theory of light all in one year, but today’s institutional structure makes it almost impossible for a young researcher (and revolutionary ideas usually come from the young) to strike out in a new direction. It is a blessing that we have deep thinkers such as Lee Smolin setting aside the easy path to retirement to ask these deep questions today.

Smolin, Lee. Einstein’s Unfinished Revolution. New York: Penguin Press, 2019. ISBN 978-1-59420-619-1.

Here is a lecture by the author at the Perimeter Institute about the topics discussed in the book. He concentrates mostly on the problems with quantum theory and not the speculative solutions discussed in the latter part of the book.


## Gender-Queer Drag Queen Says Quantum Mechanics Explains Unlimited Genders

Reporter Megan Fox found an interesting hook for discussing my recent physics publication.

“Atomic physics kind of backed off from the Newtonian assumption of an objective reality to describe how atomic physics works,” said Schantz. “Physicists were operating under the assumption that there was no such thing as cause and effect. There is a strong desire in philosophy to undercut reality. Much like Plato’s allegory of the cave, they want to say all we have is a distorted version of reality and we cannot know what is real. You can see it in physics, that it has fallen out of favor to question how we know what we know. Instead we get propagandizing.”


## Saturday Night Science: The Forgotten Genius of Oliver Heaviside

In 1861, when Oliver Heaviside was eleven, his family received a small legacy. The household had been supported by his father’s irregular income as an engraver of woodblock illustrations for publications (an art beginning to be threatened by the advent of photography) and by a day school for girls which his mother operated in the family’s house. The legacy allowed them to move to a better part of London and enroll Oliver in the prestigious Camden House School, where he ranked among the top of his class, taking thirteen subjects including Latin, English, mathematics, French, physics, and chemistry. His independent nature and iconoclastic views had already begun to manifest themselves: despite being an excellent student he dismissed the teaching of Euclid’s geometry in mathematics and English rules of grammar as worthless. He believed that both mathematics and language were best learned, as he wrote decades later, “observationally, descriptively, and experimentally.” These principles would guide his career throughout his life.

At age fifteen he took the College of Preceptors examination, the equivalent of today’s A Levels. He was the youngest of the 538 candidates to take the examination and scored fifth overall and first in the natural sciences. This would easily have qualified him for admission to university, but family finances ruled that out. He decided to study on his own at home for two years and then seek a job, perhaps in the burgeoning telegraph industry. He would receive no further formal education after the age of fifteen.

His mother’s elder sister had married Charles Wheatstone, a successful and wealthy scientist, inventor, and entrepreneur whose inventions include the concertina, the stereoscope, and the Playfair encryption cipher, and who made major contributions to the development of telegraphy. Wheatstone took an interest in his bright nephew, and guided his self-studies after leaving school, encouraging him to master the Morse code and the German and Danish languages. Oliver’s favourite destination was the library, which he later described as “a journey into strange lands to go a book-tasting”. He read the original works of Newton, Laplace, and other “stupendous names” and discovered that with sufficient diligence he could figure them out on his own.

At age eighteen, he took a job as an assistant to his older brother Arthur, well-established as a telegraph engineer in Newcastle. Shortly thereafter, probably on the recommendation of Wheatstone, he was hired by the just-formed Danish-Norwegian-English Telegraph Company as a telegraph operator at a salary of £150 per year (around £12000 in today’s money). The company was about to inaugurate a cable under the North Sea between England and Denmark, and Oliver set off to Jutland to take up his new post. Long distance telegraphy via undersea cables was the technological frontier at the time—the first successful transatlantic cable had only gone into service two years earlier, and connecting the continents into a world-wide web of rapid information transfer was the booming high-technology industry of the age. While the job of telegraph operator might seem a routine clerical task, the élite who operated the undersea cables worked in an environment akin to an electrical research laboratory, trying to wring the best performance (words per minute) from the finicky and unreliable technology.

Heaviside prospered in the new job, and after a merger was promoted to chief operator at a salary of £175 per year and transferred back to England, at Newcastle. At the time, undersea cables were unreliable. It was not uncommon for the signal on a cable to fade and then die completely, most often due to a short circuit caused by failure of the gutta-percha insulation between the copper conductor and the iron sheath surrounding it. When a cable failed, there was no alternative but to send out a ship which would find the cable with a grappling hook, haul it up to the surface, cut it, and test whether the short was to the east or west of the ship’s position (the cable would work in the good direction but fail in the direction containing the short). Then the cable would be re-spliced, dropped back to the bottom, and the ship would set off in the direction of the short to repeat the exercise over and over until, by a process similar to binary search, the location of the fault was narrowed down and that section of the cable replaced. This was time-consuming and potentially hazardous given the North Sea’s propensity for storms, and while the cable remained out of service it made no money for the telegraph company.

Heaviside, who continued his self-study and frequented the library when not at work, realised that knowing the resistance and length of the functioning cable, which could be easily measured, it would be possible to estimate the location of the short simply by measuring the resistance of the cable from each end after the short appeared. He was able to cancel out the resistance of the fault, creating a quadratic equation which could be solved for its location. The first time he applied this technique his bosses were sceptical, but when the ship was sent out to the location he predicted, 114 miles from the English coast, they quickly found the short circuit.
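A simplified version of the idea can be sketched in a few lines. The sketch below assumes the simplest case: a single resistive short to the sheath, measured from each end with the far end open, so that subtracting the two readings cancels the unknown fault resistance. (Heaviside’s actual analysis, which led to a quadratic, was more general.) The per-mile resistance, cable length, and fault values are hypothetical.

```python
# Locating a cable fault from end-to-end resistance measurements.
# A resistive short to the sheath sits at unknown distance d along a
# cable of length L with conductor resistance r per mile:
#   measured from end A (far end open):  R_a = r*d + fault
#   measured from end B (far end open):  R_b = r*(L - d) + fault
# Subtracting the two readings cancels the unknown fault resistance.

def fault_distance(r_per_mile, length_miles, r_from_a, r_from_b):
    """Distance of the fault from end A, in miles."""
    return (r_from_a - r_from_b + r_per_mile * length_miles) / (2 * r_per_mile)

# Hypothetical numbers: a 360-mile cable at 3 ohms/mile with a 500-ohm
# short 114 miles from the English end.
r, L, d_true, fault = 3.0, 360.0, 114.0, 500.0
r_a = r * d_true + fault          # simulated reading from end A
r_b = r * (L - d_true) + fault    # simulated reading from end B
print(fault_distance(r, L, r_a, r_b))  # recovers 114.0
```

Two resistance measurements from shore thus replace days of grappling, cutting, and splicing at sea.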

At the time, most workers in electricity had little use for mathematics: their trade journal, The Electrician (which would later publish much of Heaviside’s work) wrote in 1861, “In electricity there is seldom any need of mathematical or other abstractions; and although the use of formulæ may in some instances be a convenience, they may for all practical purpose be dispensed with.” Heaviside demurred: while sharing disdain for abstraction for its own sake, he valued mathematics as a powerful tool to understand the behaviour of electricity and attack problems of great practical importance, such as the ability to send multiple messages at once on the same telegraphic line and increase the transmission speed on long undersea cable links (while a skilled telegraph operator could send traffic at thirty words per minute on intercity land lines, the transatlantic cable could run no faster than eight words per minute). He plunged into calculus and differential equations, adding them to his intellectual armamentarium.

He began his own investigations and experiments and began to publish his results, first in English Mechanic, and then, in 1873, the prestigious Philosophical Magazine, where his work drew the attention of two of the most eminent workers in electricity: William Thomson (later Lord Kelvin) and James Clerk Maxwell. Maxwell would go on to cite Heaviside’s paper on the Wheatstone Bridge in the second edition of his Treatise on Electricity and Magnetism, the foundation of the classical theory of electromagnetism, considered by many the greatest work of science since Newton’s Principia, and still in print today. Heady stuff, indeed, for a twenty-two-year-old telegraph operator who had never set foot inside an institution of higher education.

Heaviside regarded Maxwell’s Treatise as the path to understanding the mysteries of electricity he encountered in his practical work and vowed to master it. It would take him nine years and change his life. He would become one of the first and foremost of the “Maxwellians”, a small group including Heaviside, George FitzGerald, Heinrich Hertz, and Oliver Lodge, who fully grasped Maxwell’s abstract and highly mathematical theory (which, like many subsequent milestones in theoretical physics, predicted the results of experiments without providing a mechanism to explain them, such as earlier concepts like an “electric fluid” or William Thomson’s intricate mechanical models of the “luminiferous ether”) and built upon its foundations to discover and explain phenomena unknown to Maxwell (who would die in 1879 at the age of just 48).

While pursuing his theoretical explorations and publishing papers, Heaviside tackled some of the main practical problems in telegraphy. Foremost among these was “duplex telegraphy”: sending messages in each direction simultaneously on a single telegraph wire. He invented a new technique and was even able to send two messages at the same time in both directions as fast as the operators could send them. This had the potential to boost the revenue from a single installed line by a factor of four. Oliver published his invention, and in doing so made an enemy of William Preece, a senior engineer at the Post Office telegraph department, who had invented and previously published his own duplex system (which would not work), that was not acknowledged in Heaviside’s paper. This would start a feud between Heaviside and Preece which would last the rest of their lives and, on several occasions, thwart Heaviside’s ambition to have his work accepted by mainstream researchers. When he applied to join the Society of Telegraph Engineers, he was rejected on the grounds that membership was not open to “clerks”. He saw the hand of Preece and his cronies at the Post Office behind this and eventually turned to William Thomson to back his membership, which was finally granted.

By 1874, telegraphy had become a big business and the work was increasingly routine. In 1870, the Post Office had taken over all domestic telegraph service in Britain and, as government is wont to do, largely stifled innovation and experimentation. Even at privately-owned international carriers like Oliver’s employer, operators were no longer concerned with the technical aspects of the work but rather tending automated sending and receiving equipment. There was little interest in the kind of work Oliver wanted to do: exploring the new horizons opened up by Maxwell’s work. He decided it was time to move on. So, he quit his job, moved back in with his parents in London, and opted for a life as an independent, unaffiliated researcher, supporting himself purely by payments for his publications.

With the duplex problem solved, the largest problem that remained for telegraphy was the slow transmission speed on long lines, especially submarine cables. The advent of the telephone in the 1870s made the problem still more pressing. While on a long telegraph line the effect was merely to slow the rate at which messages could be sent, on a telephone line the voice became increasingly distorted the longer the line, to the point where, after around one hundred miles, it was incomprehensible. Until this was understood and a solution found, telephone service would be restricted to local areas.

Many of the early workers in electricity thought of it as something like a fluid, where current flowed through a wire like water through a pipe. This approximation is more or less correct when current flow is constant, as in a direct current generator powering electric lights, but when current is varying a much more complex set of phenomena become manifest which require Maxwell’s theory to fully describe. Pioneers of telegraphy thought of their wires as sending direct current which was simply switched off and on by the sender’s key, but of course the transmission as a whole was a varying current, jumping back and forth between zero and full current at each make or break of the key contacts. When these transitions are modelled in Maxwell’s theory, one finds that, depending upon the physical properties of the transmission line (its resistance, inductance, capacitance, and leakage between the conductors) different frequencies propagate along the line at different speeds. The sharp on/off transitions in telegraphy can be thought of, by Fourier transform, as the sum of a wide band of frequencies, with the result that, when each propagates at a different speed, a short, sharp pulse sent by the key will, at the other end of the long line, be “smeared out” into an extended bump with a slow rise to a peak and then decay back to zero. Above a certain speed, adjacent dots and dashes will run into one another and the message will be undecipherable at the receiving end. This is why operators on the transatlantic cables had to send at the painfully slow speed of eight words per minute.
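The smearing process can be sketched numerically: build a sharp pulse as a Fourier sum of harmonics, then delay each harmonic by a different amount, as a dispersive line would. The delay law used here (phase shift growing as the square of the harmonic number) is an arbitrary illustration, not derived from any particular cable.

```python
import math

# Dispersion smearing, sketched: a sharp pulse is a Fourier sum of
# harmonics.  If the line carries every frequency at the same speed the
# pulse arrives intact; if each harmonic travels at a slightly different
# speed, the components arrive out of step and the pulse is smeared.

N = 64          # samples along one period of the waveform
HARMONICS = 15  # odd harmonics of a square-ish telegraph pulse

def signal(delay_coefficient):
    """One period of the received waveform; harmonic of frequency m is
    phase-delayed by m**2 * delay_coefficient radians (0 = no dispersion)."""
    out = []
    for n in range(N):
        t = 2 * math.pi * n / N
        s = sum(math.sin((2 * k - 1) * t
                         - (2 * k - 1) ** 2 * delay_coefficient)
                / (2 * k - 1)
                for k in range(1, HARMONICS + 1))
        out.append(s)
    return out

sharp = signal(0.0)     # all frequencies at the same speed: clean edges
smeared = signal(0.05)  # speed varying with frequency: edges smeared
print(max(sharp), max(smeared))
```

Note that dispersion rearranges the signal without destroying it: the total energy of the two waveforms is identical, but the sharp transitions a telegraph receiver (or a listener’s ear) depends upon are lost.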

In telephony, it’s much worse because human speech is composed of a broad band of frequencies, and the frequencies involved (typically up to around 3400 cycles per second) are much higher than the off/on speeds in telegraphy. The smearing out or dispersion as frequencies are transmitted at different speeds results in distortion which renders the voice signal incomprehensible beyond a certain distance.

In the mid-1850s, during development of the first transatlantic cable, William Thomson had developed a theory called the “KR law” which predicted the transmission speed along a cable based upon its resistance and capacitance. Thomson was aware that other effects existed, but without Maxwell’s theory (which would not be published in its final form until 1873), he lacked the mathematical tools to analyse them. The KR theory, which produced results that predicted the behaviour of the transatlantic cable reasonably well, held out little hope for improvement: decreasing the resistance and capacitance of the cable would dramatically increase its cost per unit length.

Heaviside undertook to analyse what is now called the transmission line problem using the full Maxwell theory and, in 1878, published the general theory of propagation of alternating current through transmission lines, what are now called the telegrapher’s equations. Because he took resistance, capacitance, inductance, and leakage all into account and thus modelled both the electric and magnetic field created around the wire by the changing current, he showed that by balancing these four properties it was possible to design a transmission line which would transmit all frequencies at the same speed. In other words, this balanced transmission line would behave for alternating current (including the range of frequencies in a voice signal) just like a simple wire did for direct current: the signal would be attenuated (reduced in amplitude) with distance but not distorted.
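Heaviside’s balance can be checked directly from the propagation constant γ = √((R + jωL)(G + jωC)) of a uniform line. In the sketch below the per-kilometre line constants are hypothetical, chosen only to satisfy the distortionless condition L/R = C/G; with that balance the phase velocity ω/β comes out the same at every frequency in the voice band.

```python
import cmath
import math

# Telegrapher's-equation sketch: a uniform line with series resistance R
# and inductance L, shunt conductance G and capacitance C (all per unit
# length) has propagation constant
#     gamma = sqrt((R + jwL)(G + jwC)) = alpha + j*beta
# Heaviside's condition L/R = C/G makes the attenuation alpha and the
# phase velocity w/beta independent of frequency: no distortion.

def propagation(R, L, G, C, w):
    """Return (attenuation, phase velocity) at angular frequency w."""
    gamma = cmath.sqrt((R + 1j * w * L) * (G + 1j * w * C))
    return gamma.real, w / gamma.imag

# Hypothetical per-kilometre constants chosen to satisfy L/R = C/G:
R, L, C = 5.0, 2e-3, 50e-9   # ohm, henry, farad per km
G = R * C / L                # siemens per km: the balanced value

v_low = propagation(R, L, G, C, 2 * math.pi * 300)[1]    # 300 Hz
v_high = propagation(R, L, G, C, 2 * math.pi * 3400)[1]  # 3400 Hz
print(v_low, v_high)  # equal: every voice frequency arrives together
```

With the condition satisfied, γ reduces to R√(C/L) + jω√(LC): the signal is attenuated with distance but every frequency travels at 1/√(LC), so the waveform keeps its shape.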

In an 1887 paper, he further showed that existing telegraph and telephone lines could be made nearly distortionless by adding loading coils to increase the inductance at points along the line (as long as the distance between adjacent coils is small compared to the wavelength of the highest frequency carried by the line). This got him into another battle with William Preece, whose incorrect theory attributed distortion to inductance and advocated minimising self-inductance in long lines. Preece moved to block publication of Heaviside’s work, with the result that the paper on distortionless telephony, published in The Electrician, was largely ignored. It was not until 1897 that AT&T in the United States commissioned a study of Heaviside’s work, leading to patents eventually worth millions. The credit, and financial reward, went to Professor Michael Pupin of Columbia University, who became another of Heaviside’s life-long enemies.

You might wonder why such a seemingly simple result (which can be written in modern notation as the equation $$L/R = C/G$$), one with such immediate technological utility, eluded so many people for so long (recall that the problem of slow transmission on the transatlantic cable had been observed since the 1850s). The reason is the complexity of Maxwell’s theory and the formidably difficult notation in which it was expressed. Oliver Heaviside spent nine years fully internalising the theory and its implications, and he was one of only a handful of people who had done so and, perhaps, the only one grounded in practical applications such as telegraphy and telephony. Concurrent with his work on transmission line theory, he invented the mathematical field of vector calculus and, in 1884, reformulated Maxwell’s original theory, a cumbersome system of twenty equations, into the four famous vector equations we today think of as Maxwell’s, which in modern notation (far less cumbersome than that employed by Maxwell) read:

$$\nabla\cdot\mathbf{D}=\rho,\qquad \nabla\cdot\mathbf{B}=0,\qquad \nabla\times\mathbf{E}=-\frac{\partial\mathbf{B}}{\partial t},\qquad \nabla\times\mathbf{H}=\mathbf{J}+\frac{\partial\mathbf{D}}{\partial t}$$

These are not only simpler, condensing twenty equations to just four, but provide (once you learn the notation and meanings of the variables) an intuitive sense for what is going on. This made, for the first time, Maxwell’s theory accessible to working physicists and engineers interested in getting the answer out rather than spending years studying an arcane theory. (Vector calculus was independently invented at the same time by the American J. Willard Gibbs. Heaviside and Gibbs both acknowledged the work of the other and there was no priority dispute. The notation we use today is that of Gibbs, but the mathematical content of the two formulations is essentially identical.)

And, during the same decade of the 1880s, Heaviside invented the operational calculus, a method of calculation which reduces the solution of complicated problems involving differential equations to simple algebra. Heaviside was able to solve so many problems which others couldn’t because he was using powerful computational tools they had not yet adopted. The situation was similar to that of Isaac Newton, who was effortlessly solving problems such as the brachistochrone using the calculus he’d invented while his contemporaries struggled with more cumbersome methods. Some of the things Heaviside did in the operational calculus, such as cancelling derivative signs in equations and taking the square root of a derivative sign, made rigorous mathematicians shudder, but, hey, it worked, and that was good enough for Heaviside and the many engineers and applied mathematicians who adopted his methods. (In the 1920s, pure mathematicians used the theory of Laplace transforms to reformulate the operational calculus in a rigorous manner, but this was decades after Heaviside’s work and long after engineers were routinely using it in their calculations.)
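To give a flavour of the method, here is a hypothetical example (an RC charging circuit, not one taken from Heaviside’s own papers): treating the derivative as an algebraic operator p turns the differential equation into algebra, yielding the closed-form answer at once. The sketch below checks that operational answer against brute-force numerical integration of the same equation:

```python
import math

# Operational-calculus flavour, on a hypothetical RC charging circuit:
#   R*C*dv/dt + v = V,  with v(0) = 0.
# Writing p for d/dt turns this into (R*C*p + 1)v = V, whose solution
# is v(t) = V*(1 - exp(-t/(R*C))).
R, C, V = 1.0e3, 1.0e-6, 5.0   # ohms, farads, volts (made-up values)
tau = R * C

def v_operational(t):
    return V * (1.0 - math.exp(-t / tau))

# Cross-check by stepping the same differential equation directly.
v, t, dt = 0.0, 0.0, tau / 100000.0
while t < 3.0 * tau:
    v += dt * (V - v) / tau    # dv/dt = (V - v)/(R*C)
    t += dt

print(v_operational(3.0 * tau), v)   # both close to V*(1 - e^-3)
```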

Heaviside’s intuitive grasp of electromagnetism and powerful computational techniques placed him in the forefront of exploration of the field. He calculated the electric field of a moving charged particle and found it contracted in the direction of motion, foreshadowing the Lorentz-FitzGerald contraction which would figure in Einstein’s special relativity. In 1889 he computed the force on a point charge moving in an electromagnetic field, which is now called the Lorentz force after Hendrik Lorentz who independently discovered it six years later. He predicted that a charge moving faster than the speed of light in a medium (for example, glass or water) would emit a shock wave of electromagnetic radiation; in 1934 Pavel Cherenkov experimentally discovered the phenomenon, now called Cherenkov radiation, for which he won the Nobel Prize in 1958. In 1902, Heaviside applied his theory of transmission lines to the Earth as a whole and explained the propagation of radio waves over intercontinental distances as due to a transmission line formed by conductive seawater and a hypothetical conductive layer in the upper atmosphere dubbed the Heaviside layer. In 1924 Edward V. Appleton confirmed the existence of such a layer, the ionosphere, and won the Nobel Prize in 1947 for the discovery.

Oliver Heaviside never won a Nobel Prize, although he was nominated for the physics prize in 1912. He shouldn’t have felt too bad, though, as other nominees passed over for the prize that year included Hendrik Lorentz, Ernst Mach, Max Planck, and Albert Einstein. (The winner that year was Gustaf Dalén, “for his invention of automatic regulators for use in conjunction with gas accumulators for illuminating lighthouses and buoys”—oh well.) He did receive Britain’s highest recognition for scientific achievement, being named a Fellow of the Royal Society in 1891. In 1921 he was the first recipient of the Faraday Medal from the Institution of Electrical Engineers.

Having never held a job between 1874 and his death in 1925, Heaviside lived on his irregular income from writing, the generosity of his family, and, from 1896 onward, a pension of £120 per year (less than his starting salary as a telegraph operator in 1868) from the Royal Society. He was a proud man and refused several other offers of money which he perceived as charity. He turned down an offer of compensation for his invention of loading coils from AT&T when they refused to acknowledge his sole responsibility for the invention. He never married, and in his later years became something of a recluse; although he welcomed visits from other scientists, he hardly ever left his home in Torquay in Devon.

His impact on the physics of electromagnetism and the craft of electrical engineering can be seen in the list of terms he coined which are in everyday use: “admittance”, “conductance”, “electret”, “impedance”, “inductance”, “permeability”, “permittance”, “reluctance”, and “susceptance”. His work has never been out of print, and sparkles with his intuition, mathematical prowess, and wicked wit directed at those he considered pompous or lost in needless abstraction and rigor. He never sought the limelight and among those upon whose work much of our present-day technology is founded, he is among the least known. But as long as electronic technology persists, it is a monument to the life and work of Oliver Heaviside.

Mahon, Basil. The Forgotten Genius of Oliver Heaviside. Amherst, NY: Prometheus Books, 2017. ISBN 978-1-63388-331-4.


## Electromagnetic Discovery May Demystify Quantum Mechanics

Here’s a press release from Q-Track on my discovery and publication… Hans

Physicists have long been troubled by the paradoxes and contradictions of quantum mechanics. Yesterday, a possible step forward appeared in the Philosophical Transactions of the Royal Society A. In a paper, “Energy velocity and reactive fields” [pay wall, free preprint], physicist Hans G. Schantz presents a novel way of looking at electromagnetics that shows the deep tie between electromagnetics and the pilot wave interpretation of quantum mechanics.

Schantz offers a solution to wave-particle duality by arguing that electromagnetic fields and energy are distinct phenomena instead of treating them as two aspects of the same thing. “Fields guide energy” in Schantz’s view. “As waves interfere, they guide energy along paths that may be substantially different from the trajectories of the waves themselves.” Schantz’s entirely classical perspective appears remarkably similar to the “pilot-wave” theory of quantum mechanics.

Schantz’s approach to electromagnetic theory focuses on the balance between electric and magnetic energy. When there are equal amounts of electric and magnetic energy, energy moves at the speed of light. As the ratio shifts away from an equal balance, energy slows down, coming to rest in the limit of electrostatic or magnetostatic fields. From this observation, Schantz derives a way to quantify the state of the electromagnetic field on a continuum between static and radiation fields, and ties this directly to the energy velocity.
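As an illustration of the balance argument (this is my own reading of the press release, not formulas from the paper itself): for perpendicular electric and magnetic fields, the energy transport velocity $$|\mathbf{S}|/u$$ reduces to $$v/c=2\sqrt{u_E u_B}/(u_E+u_B)$$, which equals 1 when the two energy densities balance and falls to zero in the static limits:

```python
import math

# Illustrative sketch (my reading of the press release, not the paper's
# actual formulas): for perpendicular E and H fields, the energy
# transport velocity |S|/u can be written as
#   v/c = 2*sqrt(uE*uB) / (uE + uB),
# which is c when electric and magnetic energy densities balance and
# falls to zero in the electrostatic and magnetostatic limits.
def energy_velocity_fraction(uE, uB):
    if uE + uB == 0.0:
        return 0.0
    return 2.0 * math.sqrt(uE * uB) / (uE + uB)

print(energy_velocity_fraction(1.0, 1.0))    # balanced: 1.0 (speed of light)
print(energy_velocity_fraction(1.0, 0.25))   # unbalanced: 0.8
print(energy_velocity_fraction(1.0, 0.0))    # electrostatic limit: 0.0
```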

“The fascinating result is that fields guide energy in a way exactly analogous to the way in which pilot waves guide particles in the Bohm-deBroglie theory,” Schantz explains. “Rather than an ad hoc approach to explain away the contradictions of quantum mechanics, pilot wave theory appears to be the natural application of classical electromagnetic ideas in the quantum realm.”

His solution to the “two slit” experiment that has perplexed generations of physicists?

“Fields behave like waves. When they interact with the two slits, they generate an interference pattern. The interference pattern guides a photon along a path to one point on the screen. It’s not the photon interfering with itself. It’s the interfering waves guiding the photon.”

So which slit did the photon pass through?

“If the photon ends up on the left hand side of the screen, it went through the left slit. If it ends up on the right hand side of the screen, it went through the right slit. It really is that simple.”

Schantz applied these electromagnetic ideas to understand and explain how antennas work in his textbook, The Art and Science of Ultrawideband Antennas (Artech House 2015). He’s also co-founder and CTO of Q-Track Corporation, a company that applies near-field wireless to the challenging problem of indoor location. “There are things you can do with low-frequency long-wavelength signals that simply aren’t possible with conventional wireless systems,” Schantz explains. “Understanding how static or reactive energy transforms into radiation has direct applications to antenna design as well as to near-field wireless systems.”

Schantz chose an unconventional way of popularizing his ideas. “I was amazed that my electromagnetic perspective was not discovered and adopted over a hundred years ago. It was as if someone had deliberately suppressed the discovery, so I undertook to write a science fiction series based on that premise.” Schantz’s Hidden Truth series debuted in 2016, and he released the third volume in the series, The Brave and the Bold, in October.

Schantz’s next project is a popular treatment of his physics ideas. Edited by L. Jagi Lamplighter Wright, Schantz’s book Fields: The Once and Future Theory of Everything will appear in 2019.


## Saturday Night Science: Orbits in Strongly Curved Spacetime

Click the title of this post to see the interactive simulation.

## Introduction

The display above shows, from three different physical perspectives, the orbit of a low-mass test particle, the small red circle, around a non-rotating black hole (represented by a grey circle in the panel at the right), where the radius of the circle is the black hole’s gravitational radius, or event horizon. Kepler’s laws of planetary motion, grounded in Newton’s theory of gravity, state that the orbit of a test particle around a massive object is an ellipse with one focus at the centre of the massive object. But when gravitational fields are strong, as is the case for collapsed objects like neutron stars and black holes, Newton’s theory is inaccurate; calculations must be done using Einstein’s theory of General Relativity.

In Newtonian gravitation, an orbit is always an ellipse. As the gravitating body becomes more massive and the test particle orbits it more closely, the speed of the particle in its orbit increases without bound, always balancing the gravitational force. For a black hole, Newton’s theory predicts orbital velocities greater than the speed of light, but according to Einstein’s Special Theory of Relativity, no material object can achieve or exceed the speed of light. In strong gravitational fields, General Relativity predicts orbits drastically different from the ellipses of Kepler’s laws. This article allows you to explore them.

### The Orbit Plot

The panel at the right of the animation shows the test mass orbiting the black hole, viewed perpendicular to the plane of its orbit. The path of the orbit is traced by the green line. After a large number of orbits the display will get cluttered; just click the mouse anywhere in the right panel to erase the path and start over. When the test mass reaches its greatest distance from the black hole, a yellow line is plotted from the centre of the black hole to that point, the apastron of the orbit. In Newtonian gravity, the apastron remains fixed in space. The effects of General Relativity cause it to precess. You can see the degree of precession in the displacement of successive yellow lines (precession can be more than 360°; the yellow line only shows precession modulo one revolution).

### The Gravitational Effective-Potential

The two panels at the left of the animation display the orbit in more abstract ways. The Effective Potential plot at the top shows the position of the test mass on the gravitational energy curve as it orbits in and out. The summit on the left side of the curve is unique to General Relativity—in Newtonian gravitation the curve rises without bound as the radius decreases, approaching infinity at zero. In Einstein’s theory, the inability of the particle to orbit at or above the speed of light creates a “pit in the potential” near the black hole. As the test mass approaches this summit, falling in from larger radii with greater and greater velocity, it will linger near the energy peak for an increasingly long time, while its continued angular motion will result in more and more precession. If the particle passes the energy peak and continues to lesser radii, toward the left, its fate is sealed—it will fall into the black hole and be captured.
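The “pit in the potential” is easy to reproduce. The sketch below (my own scan in geometric units with M = 1 and an arbitrarily chosen angular momentum, not the code behind the animation) locates the summit and the trough of the Schwarzschild effective-potential numerically:

```python
# Scan of the squared Schwarzschild effective-potential
#   V2(r) = (1 - 2M/r) * (1 + L^2/r^2)
# in geometric units with M = 1 and an arbitrarily chosen angular
# momentum, looking for the summit (unstable circular orbit) and the
# trough (stable circular orbit) described above.
M, L = 1.0, 4.5
rs = [0.01 * i for i in range(230, 5001)]     # radii from 2.3 to 50.0
V2 = [(1.0 - 2.0*M/r) * (1.0 + L**2/r**2) for r in rs]

# Interior maximum (the summit), then the minimum beyond it (the trough).
peak = max(range(1, len(rs) - 1), key=lambda i: V2[i])
trough = min(range(peak, len(rs)), key=lambda i: V2[i])
print(f"summit near r = {rs[peak]:.2f}, trough near r = {rs[trough]:.2f}")
```

A particle falling in from large radius with energy below the summit turns around and returns; one with energy above it passes the summit and is captured, just as the animation shows.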

### The Gravity Well

Spacetime around an isolated spherical non-rotating uncharged gravitating body is described by Schwarzschild Geometry, in which spacetime can be thought of as being bent by the presence of mass. This creates a gravity well which extends to the surface of the body or, in the case of a black hole, to oblivion. The gravity well has the shape of a four-dimensional paraboloid of revolution, symmetrical about the central mass. Since few Web browsers are presently equipped with four-dimensional display capability, I’ve presented a two-dimensional slice through the gravity well in the panel at the bottom left of the animation. Like the energy plot above, the left side of the panel represents the centre of the black hole and the radius increases to the right. Notice that the test mass radius moves in lockstep on the Effective-Potential and Gravity Well charts, as the radius varies on the orbit plot to their right.

The gravity well of a Schwarzschild black hole has a throat at a radius determined solely by its mass—that is the location of the hole’s event horizon; any matter or energy which crosses the horizon is captured. The throat is the leftmost point on the gravity well curve, where the slope of the paraboloidal geometry becomes infinite (vertical). With sufficient angular momentum, a particle can approach the event horizon as closely as it wishes (assuming it is small enough so it isn’t torn apart by tidal forces), but it can never cross the event horizon and return.

### Hands On

By clicking in the various windows and changing values in the controls at the bottom of the window you can explore different scenarios. To pause the simulation, press the Pause button; pressing it again resumes the simulation. Click anywhere in the orbit plot at the right to clear the orbital trail and apastron markers when the screen becomes too cluttered. You can re-launch the test particle at any given radius from the black hole (with the same angular momentum) by clicking at the desired radius in either the Effective Potential or Gravity Well windows. The green line in the Effective Potential plot indicates the energy minimum at which a stable circular orbit exists for a particle of the given angular momentum.

The angular momentum is specified by the box at left in terms of the angular momentum per unit mass of the black hole, all in geometric units—all of this is explained in detail below. What’s important to note is that for orbits like those of planets in the Solar System, this number is huge; only in strong gravitational fields does it approach small values. If the angular momentum is smaller than a critical value ($$2\sqrt 3$$, about 3.464 for a black hole of mass 1, measured in the same units), no stable orbits exist; the particle lacks the angular momentum to avoid being swallowed. When you enter a value smaller than this, notice how the trough in the energy curve and the green line marking the stable circular orbit disappear. Regardless of the radius, any particle you launch is doomed to fall into the hole.

The Mass box allows you to change the mass of the black hole, increasing the radius of its event horizon. Since the shape of the orbit is determined by the ratio of the angular momentum to the mass, it’s just as easy to leave the mass as 1 and change the angular momentum. You can change the scale of all the panels by entering a new value for the maximum radius; this value becomes the rightmost point in the effective potential and gravity well plots and the distance from the centre of the black hole to the edge of the orbit plot. When you change the angular momentum or mass, the radius scale is automatically adjusted so the stable circular orbit (if any) is on screen.

## Kepler, Newton, and Beyond

In the early 17th century, after years of tedious calculation and false starts, Johannes Kepler published his three laws of planetary motion:

• First law (1605): A planet’s orbit about the Sun is an ellipse, with the Sun at one focus.
• Second law (1604): A line from the Sun to a planet sweeps out equal areas in equal times.
• Third law (1618): The square of the orbital period of a planet is proportional to the cube of the semi-major axis of its orbit.

Kepler’s discoveries about the behaviour of planets in their orbits played an essential rôle in Isaac Newton’s formulation of the law of universal gravitation in 1687. Newton’s theory showed that the celestial bodies were governed by the same laws as objects on Earth. The philosophical implications of this played as key a part in the Enlightenment as did the theory itself in the subsequent development of physics and astronomy.

While Kepler’s laws applied only to the Sun and planets, Newton’s universal theory allowed one to calculate the gravitational force and motion of any bodies whatsoever. To be sure, when many bodies were involved and great accuracy was required, the calculations were horrifically complicated and tedious—so much so that those reared in the computer age may find it difficult to imagine embarking upon them armed with nothing but a table of logarithms, pencil and paper, and the human mind. But performed they were, with ever greater precision as astronomers made increasingly accurate observations. And those observations agreed perfectly with the predictions of Newton’s theory.

Well,… almost perfectly. After painstaking observations of the planets and extensive calculation, astronomer Simon Newcomb concluded in 1898 that the orbit of Mercury was precessing 43 arc-seconds per century more than could be explained by the influence of the other planets. This is a tiny discrepancy, but further observations and calculations confirmed Newcomb’s result—the discrepancy was real. Some suggested a still undiscovered planet closer to the Sun than Mercury (and went so far as to name it, sight unseen, “Vulcan”), but no such planet was ever found, nor any other plausible explanation advanced. For nearly twenty years Mercury’s precession or “perihelion advance” remained one of those nagging anomalies in the body of scientific data that’s trying to tell us something, if only we knew what.

In 1915, Albert Einstein’s General Theory of Relativity extended Newtonian gravitation theory, revealing previously unanticipated subtleties of nature. And Einstein’s theory explained the perihelion advance of Mercury. That tiny discrepancy in the orbit of Mercury was actually the first evidence for what lay beyond Newtonian gravitation, the first step down a road that would lead to understanding black holes, gravitational radiation, and the source of inertia, which remains a fertile ground for theoretical and experimental physics a century thereafter.

If we’re interested in the domain where general relativistic effects are substantial, we’re better off calculating with units scaled to the problem. A particularly convenient and elegant choice is the system of geometric units, obtained by setting Newton’s gravitational constant G, the speed of light c, and Boltzmann’s constant k all equal to 1. We can then express any of the following units as a length in centimetres by multiplying by the following conversion factors.

| Unit | Multiply by | Conversion factor |
|------|-------------|-------------------|
| mass (g) | $$G/c^2$$ | $$7.42\times 10^{-29}$$ cm/g |
| time (s) | $$c$$ | $$3.00\times 10^{10}$$ cm/s |
| energy (erg) | $$G/c^4$$ | $$8.26\times 10^{-50}$$ cm/erg |
| temperature (K) | $$Gk/c^4$$ | $$1.14\times 10^{-65}$$ cm/K |

The enormous exponents make it evident that these units are far removed from our everyday experience. It would be absurd to tell somebody, “I’ll call you back in $$1.08\times 10^{14}$$ centimetres”, but it is a perfectly valid way of saying “one hour”. The discussion that follows uses geometric units throughout, allowing us to treat mass, time, length, and energy without conversion factors. To express a value calculated in geometric units back to conventional units, just divide by the value in the table above.
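For instance, converting a conventional quantity to a geometric-unit length is a single multiplication (the numbers below are my own arithmetic with standard CGS values of c and G, not figures from the original article):

```python
# Convert conventional CGS quantities to geometric-unit lengths by
# multiplying by the appropriate power of c and G (CGS values).
c = 2.99792458e10   # speed of light, cm/s
G = 6.674e-8        # Newton's gravitational constant, cm^3 g^-1 s^-2

def time_to_cm(seconds):
    return seconds * c           # multiply a time by c

def mass_to_cm(grams):
    return grams * G / c**2      # multiply a mass by G/c^2

print(time_to_cm(3600.0))        # one hour is about 1.08e14 cm
print(mass_to_cm(1.989e33))      # one solar mass is about 1.48e5 cm (~1.5 km)
```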

## The Gravitational Effective-Potential

The gravitational effective-potential for a test particle orbiting in a Schwarzschild geometry is:

$$\tilde{V}^2(r)=\left(1-\frac{2M}{r}\right)\left(1+\frac{\tilde{L}^2}{r^2}\right)$$

where $$\tilde{L}$$ is the angular momentum per unit rest mass expressed in geometric units, M is the mass of the gravitating body, and r is the radius of the test particle from the centre of the body.

The radius of a particle from the centre of attraction evolves in proper time τ (time measured by a clock moving along with the particle) according to:

$$\left(\frac{dr}{d\tau}\right)^2=\tilde{E}^2-\tilde{V}^2(r)$$

where $$\tilde{E}$$ is the energy of the test mass at infinity per unit rest mass.

Angular motion about the centre of attraction is then:

$$\frac{d\phi}{d\tau}=\frac{\tilde{L}}{r^2}$$

while time, as measured by a distant observer, advances according to:

$$\frac{dt}{d\tau}=\frac{\tilde{E}}{1-\dfrac{2M}{r}}$$

and can be seen to slow down as the event horizon at the gravitational radius is approached. At the gravitational radius of 2M, time as measured from far away stops entirely, so the particle never seems to reach the event horizon. Proper time on the particle continues to advance unabated; an observer on board sails through the event horizon without a bump (or maybe not) and continues toward the doom which awaits at the central singularity.
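The simulation boils down to integrating these equations of motion. Here is a crude sketch (geometric units with M = 1, an arbitrary angular momentum, and my own leapfrog integrator, not the code driving the animation); differentiating the radial energy equation gives the acceleration used below:

```python
import math

# Minimal sketch of the geodesic integration behind the simulation
# (geometric units, M = 1). Differentiating
#   (dr/dtau)^2 = E^2 - (1 - 2M/r)(1 + L^2/r^2)
# gives the radial acceleration
#   d^2 r / dtau^2 = -M/r^2 + L^2/r^3 - 3*M*L^2/r^4
# together with dphi/dtau = L/r^2.
M, L = 1.0, 4.0
r, rdot, phi = 20.0, 0.0, 0.0    # released from rest at apastron
dtau = 0.01
r_min, r_max = r, r

def accel(r):
    return -M/r**2 + L**2/r**3 - 3.0*M*L**2/r**4

for _ in range(200000):          # leapfrog (kick-drift-kick) steps
    rdot += 0.5 * dtau * accel(r)
    r += dtau * rdot
    phi += dtau * L / r**2
    rdot += 0.5 * dtau * accel(r)
    if r <= 2.0 * M:             # crossed the event horizon
        break
    r_min, r_max = min(r_min, r), max(r_max, r)

print(f"r in [{r_min:.2f}, {r_max:.2f}] after phi = {phi:.1f} radians")
```

Released at r = 20, the test mass oscillates between apastron and periastron while φ winds up steadily, precessing far more per orbit than any Newtonian ellipse would.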

### Circular Orbits

Circular orbits are possible at maxima and minima of the effective-potential. Orbits at minima are stable, since a small displacement increases the energy and thus creates a restoring force in the opposite direction. Orbits at maxima are unstable; the slightest displacement causes the particle to either be sucked into the black hole or enter a highly elliptical orbit around it.

To find the radii of possible circular orbits, differentiate the gravitational effective-potential with respect to the radius r:

$$\frac{d\tilde{V}^2}{dr}=\frac{2\left(Mr^2-\tilde{L}^2r+3M\tilde{L}^2\right)}{r^4}$$

The minima and maxima of a function are at the zero crossings of its derivative, so a little algebra gives the radii of possible circular orbits as:

$$r=\frac{\tilde{L}^2}{2M}\left(1\pm\sqrt{1-\frac{12M^2}{\tilde{L}^2}}\right)$$

The larger of these solutions is the radius of the stable circular orbit at the minimum of the effective-potential, while the smaller is the unstable circular orbit at the maximum. For a black hole, the stable orbit’s radius will be outside the gravitational radius at 2M, while for any other object it will be less than the radius of the body, indicating no such orbit exists. If the angular momentum $$\tilde{L}^2$$ is less than 12M², no circular orbit exists; the object will impact the surface or, in the case of a black hole, fall past the event horizon and be swallowed.
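A short sketch of that algebra (geometric units, M = 1):

```python
import math

# Radii of circular orbits in Schwarzschild geometry, from the zeros
# of the derivative of the effective-potential:
#   r = (L^2 / 2M) * (1 +/- sqrt(1 - 12*M^2/L^2))
def circular_orbit_radii(L, M=1.0):
    disc = 1.0 - 12.0 * M**2 / L**2
    if disc < 0.0:
        return None              # below the critical angular momentum
    root = math.sqrt(disc)
    r_unstable = (L**2 / (2.0 * M)) * (1.0 - root)
    r_stable = (L**2 / (2.0 * M)) * (1.0 + root)
    return r_unstable, r_stable

print(circular_orbit_radii(4.0))   # (4.0, 12.0): unstable maximum, stable minimum
print(circular_orbit_radii(3.0))   # None: the particle is doomed to be swallowed
```

At the critical angular momentum $$2\sqrt 3\,M$$ the two radii merge at r = 6M, the innermost stable circular orbit.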

## References

Gallmeier, Jonathan, Mark Loewe, and Donald W. Olson. “Precession and the Pulsar.” Sky & Telescope (September 1995): 86–88.
A BASIC program which plots orbital paths in Schwarzschild geometry. The program uses different parameters to describe the orbit than those used here, and the program does not simulate orbits which result in capture or escape. This program can be downloaded from the Sky & Telescope Web site.
Misner, Charles W., Kip S. Thorne, and John Archibald Wheeler. Gravitation. San Francisco: W. H. Freeman, 1973. ISBN 978-0-7167-0334-1.
Chapter 25 thoroughly covers all aspects of motion in Schwarzschild geometry, both for test particles with mass and massless particles such as photons.
Wheeler, John Archibald. A Journey into Gravity and Spacetime. New York: W. H. Freeman, 1990. ISBN 978-0-7167-5016-1.
This book, part of the Scientific American Library series (but available separately), devotes chapter 10 to a less technical discussion of orbits in Schwarzschild spacetime. The “energy hill” on page 173 and the orbits plotted on page 176 provided the inspiration for this page.

Here is a short video about orbiting a black hole:

This is a 45-minute lecture on black holes and the effects they produce.
