Expanding Earth — Clarity first

Two things about Pangea that had never occurred to me before:

  1. Pangea is normally shown with the continents fitting together only across the Atlantic Ocean. The video below claims they can be seen to fit across the Pacific Ocean as well. See what you think.
  2. Mountain ranges could be explained by trying to fit a continent, formed on a smaller globe, onto a larger Earth diameter. Think of an orange peel and imagine wrapping a large swath of it around a cantaloupe. The periphery of that peel would have to expand, causing large rifts, and the center would bunch up, causing mountain ranges. (A toy calculation below this list makes the geometry concrete.)

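If you want to see the geometry of that orange-peel analogy in numbers, here is a toy spherical-cap calculation of my own (the radii and cap size are invented, and this says nothing about whether the geology itself holds up). A "continent" cut from a smaller globe is laid, area for area, onto a larger one; its edge comes up short while its middle has material to spare.

    from math import pi, cos, sin, acos

    # Toy model: a "continent" is a spherical cap cut from a smaller globe
    # and then laid, area for area, onto a larger globe.  All numbers are
    # made up purely for illustration.
    R_small = 0.8     # radius of the pre-expansion globe (arbitrary units)
    R_large = 1.0     # radius of the expanded globe
    theta1 = pi / 3   # angular radius of the cap on the small globe (60 degrees)

    area = 2 * pi * R_small**2 * (1 - cos(theta1))     # cap area, which is preserved
    theta2 = acos(1 - area / (2 * pi * R_large**2))    # angular radius of the same area on the large globe

    edge_have = 2 * pi * R_small * sin(theta1)   # perimeter the peel actually has
    edge_need = 2 * pi * R_large * sin(theta2)   # perimeter the larger globe demands
    radial_have = R_small * theta1               # centre-to-edge length of the peel
    radial_need = R_large * theta2               # centre-to-edge length available on the large globe

    print(f"edge must stretch by {100 * (edge_need / edge_have - 1):.1f}% -> rifting at the periphery")
    print(f"radial excess of {100 * (radial_have / radial_need - 1):.1f}% -> bunching (mountains) in the interior")

With these made-up numbers the edge comes up about 6% short while the interior carries about 2% too much length, which is the sense in which the analogy predicts rifted margins and buckled interiors.
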
This theory has some history behind it, too. Evidently, the plate tectonics theory won out.

I like that someone went to the trouble of making a 3D simulation to explore this concept; it helps with the clarity that any new explanation needs. Here’s what you get with this idea:

  1. Fauna and flora that developed on the pre-expansion globe would be similar even across an ocean if the splitting continents divided a habitat. So western Africa and eastern South America would show some of the same species, for example.
  2. One thing that has always puzzled me is how Antarctica could have evidence of temperate climate forests on it at one time. See here and here. The video gives an interesting explanation. And remember that being warm enough for these trees is not enough — the light cycle of the poles will not work for most flora. Much too dark for half of the year.

Moments of Pure Joy

Here are kids and adults hearing for the first time thanks to cochlear implants. I hope you enjoy their happiness as much as I did.

TOTD 2018-06-08: An End to Downs Syndrome?

I understand the concern about the increase in abortion of Downs Syndrome fetuses. That clearly has all kinds of ethical problems, and opens a giant can of worms – what genetic abnormalities get the axe?  That’s not what I am talking about here.  This is about something different. Continue reading “TOTD 2018-06-08: An End to Downs Syndrome?”


The Genome Politics – Let the Laymen Talk (Experts are quite welcome, too)

@drlorentz put up a post called Leftism as Religion that I highly endorse. I started a branching thread within that about how I see Darwinism as being driven by similar (and very natural, of course) thinking.

Here’s what I said:

This matches the left’s penchant for Neo-Darwinism, too. At least, the version with all the happy talk about how things always just randomly, accidentally just get better and smarter and more complex and more ordered.

Followed by the doc’s:

I’m not familiar with such a version. I don’t think any biologist would agree with that characterization. Don’t confuse biological fitness with improvement. Cockroaches are neither especially smart nor complex but very fit biologically. The evidence is that they are c. 100 Myrs old and essentially unchanged in that time. Ferns are even older and simpler. Neither species seemed to require improvement.

You’re Cathy-Newman-ing.

Me again:

You’re the first person I’ve heard who says things aren’t evolving – just staying the same. Odd.

It seems that cockroaches had to come from something, right? Was evolution not involved in the process that got us cockroaches?

And then this completely ridiculous comment in reply to my perfectly reasonable statement. (Hey, it’s my post.)

More Cathy Newman. Never said nothing is evolving. Some things are not evolving. See the difference?  Cockroaches have not changed significantly in a long time. I quote myself:

drlorentz:
The evidence is that they are c. 100 Myrs old and essentially unchanged in that time.

That doesn’t mean evolution was not involved before then. Emphasis on “in that time.” The Earth is about 5E9 years old: 50 times longer.

“So you’re saying…”

Edit: Please note the context of the original comment. It was in response to the assertion that biologists claim that “…things always just randomly, accidentally just get better and smarter and more complex and more ordered.” This assertion is manifestly false. Counterexamples were provided.

So, now we are up to date.

I think where we are presently differing is on the issue of what I meant by “always” (see immediately above). Always, to me, means that there is always pressure on the genome to change. For example, biologists tell us that cosmic radiation can cause changes to the DNA at the base pair level. This just means that it happens on a single rung of the DNA helix.

What drlorentz has noted above is that even if this is going on, the species isn’t being changed. True, but that’s because there is a Spell Checker. This is my understanding of why some parts of the genome are very stable over a long time. Either way (from the above link): “The difference is not in the number of new mutations but in the mechanism that keeps these mutations under control.” Cockroaches and sharks have evidently locked their genomes down.
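
To make the “constant pressure plus a spell checker” picture concrete, here is a toy simulation of my own (not anyone’s published model); the mutation and repair rates are invented for the example. Random point mutations hit single “rungs” every generation, and a proofreading step reverts most of them.

    import random

    random.seed(1)
    BASES = "ACGT"

    def changed_sites(n_gen, repair_rate, mutation_rate=0.01, length=1000):
        """How many sites differ from the original sequence after n_gen generations."""
        original = [random.choice(BASES) for _ in range(length)]
        genome = original[:]
        for _ in range(n_gen):
            for i in range(length):
                if random.random() < mutation_rate:    # a hit on one rung of the helix
                    if random.random() < repair_rate:  # the "spell checker" reverts it
                        continue
                    genome[i] = random.choice(BASES)
        return sum(a != b for a, b in zip(original, genome))

    print("no repair:    ", changed_sites(200, repair_rate=0.0), "of 1000 sites changed")
    print("strong repair:", changed_sites(200, repair_rate=0.999), "of 1000 sites changed")

The mutational pressure is identical in both runs; only the aggressiveness of the correction machinery differs, which is enough to turn a drifting sequence into an essentially frozen one.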

I would like to stop there and let the iterations begin. There’s no reason to go further on this until we are all on the same page.

[Background: drlorentz and I have met — it was at the Reagan Library Meetup with Peter Robinson and Pat Sajak. I consider him a good friend. If it seems that we are angry let me assure you all that this isn’t true. I’m completely comfortable with him and I’m quite sure that he and I will keep the heat to the medium level.

Also, @johnwalker and I have had many run-ins on scientific issues over the years and yet he is always cordial and gentlemanly to me personally.]


Saturday Night Science: Project Cyclops

[Image: Project Cyclops full array]

There are few questions in science as simple to state and profound in their implications as “are we alone?”—are humans the only species with a technological civilisation in the galaxy, or in the universe?  This has been a matter of speculation by philosophers, theologians, authors of fiction, and innumerable people gazing at the stars since antiquity, but it was only in the years after World War II, which had seen the development of high-power microwave transmitters and low-noise receivers for radar, that it dawned upon a few visionaries that this had now become a question which could be scientifically investigated.

The propagation of radio waves through the atmosphere and the interstellar medium is governed by basic laws of physics, and the advent of radio astronomy demonstrated that many objects in the sky, some very distant, could be detected in the microwave spectrum.  But if we were able to detect these natural sources, suppose we connected a powerful transmitter to our radio telescope and sent a signal to a nearby star?  It was easy to calculate that, given the technology of the time (around 1960), existing microwave transmitters and radio telescopes could transmit messages across interstellar distances.

But, it’s one thing to calculate that intelligent aliens with access to microwave communication technology equal or better than our own could communicate over the void between the stars, and entirely another to listen for those communications.  The problems are simple to understand but forbidding to face: where do you point your antenna, and where do you tune your dial?  There are on the order of a hundred billion stars in our galaxy.  We now know, as early researchers suspected without evidence, that most of these stars have planets, some of which may have conditions suitable for the evolution of intelligent life.  Suppose aliens on one of these planets reach a level of technological development where they decide to join the “Galactic Club” and transmit a beacon which simply says “Yo!  Anybody out there?”  (The beacon would probably announce a signal with more information which would be easy to detect once you knew where to look.)  But for the beacon to work, it would have to be aimed at candidate stars where others might be listening (a beacon which broadcasted in all directions—an “omnidirectional beacon”—would require so much energy or be limited to such a short range as to be impractical for civilisations with technology comparable to our own).

Then there’s the question of how many technological communicating civilisations there are in the galaxy.  Note that it isn’t enough that a civilisation have the technology which enables it to establish a beacon: it has to do so.  And it is a sobering thought that more than six decades after we had the ability to send such a signal, we haven’t yet done so.  The galaxy may be full of civilisations with our level of technology and above which have the same funding priorities we do and choose to spend their research budget on intersectional autoethnography of transgender marine frobdobs rather than communicating with nerdy pocket-protector types around other stars who tediously ask Big Questions.

And suppose a civilisation decides it can find the spare change to set up and operate a beacon, inviting others to contact it.  How long will it continue to transmit, especially since it’s unlikely, given the finite speed of light and the vast distances between the stars, there will be a response in the near term?  Before long, scruffy professors will be marching in the streets wearing frobdob hats and rainbow tentacle capes, and funding will be called into question.  This is termed the “lifetime” of a communicating civilisation, or L, which is how long that civilisation transmits and listens to establish contact with others.  If you make plausible assumptions for the other parameters in the Drake equation (which estimates how many communicating civilisations there are in the galaxy), a numerical coincidence results in the estimate of the number of communicating civilisations in the galaxy being roughly equal to their communicating life in years, L.  So, if a typical civilisation is open to communication for, say, 10,000 years before it gives up and diverts its funds to frobdob research, there will be around 10,000 such civilisations in the galaxy.  With 100 billion stars (and around as many planets which may be hosts to life), that’s a 0.00001% chance that any given star where you point your antenna may be transmitting, and that has to be multiplied by the same probability they are transmitting their beacon in your direction while you happen to be listening.  It gets worse.  The galaxy is huge—around 150,000 light years in diameter, and our technology can only communicate with comparable civilisations out to a tiny fraction of this, say 1000 light years for high-power omnidirectional beacons, maybe ten to a hundred times that for directed beacons, but then you have the constraint that you have to be listening in their direction when they happen to be sending.
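
Here is a back-of-the-envelope sketch of where the “N is roughly L” coincidence and the tiny per-star odds come from. Every parameter value below is an illustrative guess standing in for the “plausible assumptions” mentioned above, not a measurement.

    # Drake equation: N = R* · fp · ne · fl · fi · fc · L
    R_star = 1.0     # star formation rate in the galaxy, stars per year (guess)
    f_p    = 1.0     # fraction of stars with planets (guess)
    n_e    = 1.0     # habitable planets per such system (guess)
    f_l    = 1.0     # fraction of those on which life arises (guess)
    f_i    = 1.0     # fraction of those developing intelligence (guess)
    f_c    = 1.0     # fraction of those that build beacons (guess)
    L      = 10_000  # years a civilisation keeps transmitting

    N = R_star * f_p * n_e * f_l * f_i * f_c * L
    print(f"communicating civilisations in the galaxy: ~{N:,.0f}")  # N ≈ L when the other factors multiply to ~1 per year

    stars = 100e9
    print(f"chance a randomly chosen star hosts one: {N / stars:.5%}")  # ~0.00001%

With these guesses the chance for any particular target star is one in ten million, before even asking whether its beacon happens to be pointed at us while we are listening.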

It seems hopeless.  It may be.  But the 1960s were a time very different from our constrained age.  Back then, if you had a problem, like going to the Moon in eight years, you said, “Wow!  That’s a really big nail.  How big a hammer do I need to get the job done?”  Toward the end of that era when everything seemed possible, NASA convened a summer seminar at Stanford University to investigate what it would take to seriously investigate the question of whether we are alone.  The result was Project Cyclops: A Design Study of a System for Detecting Extraterrestrial Intelligent Life, prepared in 1971 and issued as a NASA report (no Library of Congress catalogue number or ISBN was assigned) in 1973; the link will take you to a NASA PDF scan of the original document, which is in the public domain.  The project assembled leading experts in all aspects of the technologies involved: antennas, receivers, signal processing and analysis, transmission and control, and system design and costing.

[Image: Project Cyclops ground-level view]

They approached the problem from what might be called the “Apollo perspective”: what will it cost, given the technology we have in hand right now, to address this question and get an answer within a reasonable time?  What they came up with was breathtaking, although no more so than Apollo.  If you want to listen for beacons from communicating civilisations as distant as 1000 light years and incidental transmissions (“leakage”, like our own television and radar emissions) within 100 light years, you’re going to need a really big bucket to collect the signal, so they settled on 1000 dishes, each 100 metres in diameter.  Putting this into perspective, 100 metres is about the largest steerable dish anybody envisioned at the time, and they wanted to build a thousand of them, densely packed.
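
As a quick sanity check on how big a bucket that is (my own arithmetic, not figures taken from the report):

    from math import pi, sqrt

    n_dishes = 1000
    dish_diameter = 100.0                     # metres
    one_dish = pi * (dish_diameter / 2) ** 2  # collecting area of one dish, ~7,854 m^2
    total = n_dishes * one_dish               # ~7.85 million m^2

    # Diameter of a hypothetical single dish with the same collecting area
    equivalent = 2 * sqrt(total / pi)

    print(f"total collecting area : {total / 1e6:.2f} million m^2")
    print(f"equivalent single dish: {equivalent / 1000:.2f} km across")

In other words, the full array would collect like a single dish more than three kilometres across, which is the kind of aperture it takes to even contemplate hearing mere leakage at 100 light years.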

But wait, there’s more.  These 1000 dishes were not just a huge bucket for radio waves, but a phased array, where signals from all of the dishes (or a subset, used to observe multiple targets) were combined to provide the angular resolution of a single dish the size of the entire array.  This required breathtaking precision of electronic design at the time which is commonplace today (although an array of 1000 dishes spread over 16 km would still give most designers pause).  The signals that might be received would not be fixed in frequency, but would drift due to Doppler shifts resulting from relative motion of the transmitter and receiver.  With today’s computing hardware, digging such a signal out of the raw data is something you can do on a laptop or mobile phone, but in 1971 the best solution was an optical data processor involving exposing, developing, and scanning film.  It was exquisitely clever, although obsolete only a few years later, but recall the team had agreed to use only technologies which existed at the time of their design.  Even more amazing (and today, almost bizarre) was the scheme to use the array as an imaging telescope.  Again, with modern computers, this is a simple matter of programming, but in 1971 the designers envisioned a vast hall in which the signals from the antennas would be re-emitted by radio transmitters which would interfere in free space and produce an intensity image on an image surface where it would be measured by an array of receiver antennæ.
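
Two of the numbers lurking in that paragraph can be roughed out in a few lines. These are my own order-of-magnitude estimates, not values from the report: the angular resolution of a 16 km array observing at the 21 cm hydrogen line, and the Doppler drift that Earth’s rotation alone imposes on a narrowband signal near 1.4 GHz.

    from math import pi

    c = 3.0e8               # speed of light, m/s
    freq = 1.42e9           # ~21 cm hydrogen line, Hz
    wavelength = c / freq

    # Diffraction-limited angular resolution ~ lambda / D for the full 16 km aperture
    D = 16_000.0            # metres
    resolution = wavelength / D
    print(f"angular resolution: ~{resolution * 206265:.1f} arcseconds")

    # Doppler drift from Earth's rotation: df/dt ≈ f · a / c, with a the
    # centripetal acceleration of a receiver on the equator (an upper bound)
    omega = 2 * pi / 86164  # sidereal rotation rate, rad/s
    R_earth = 6.371e6       # metres
    a = omega ** 2 * R_earth
    print(f"worst-case drift:   ~{freq * a / c:.2f} Hz per second")

Digging a narrow line that drifts a few tenths of a hertz per second out of the raw data is trivial for modern software, but it is exactly the kind of processing the 1971 design had to do optically.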

What would all of this cost?  Lots—depending upon the assumptions used in the design (the cost was mostly driven by the antenna specifications, where extending the search to shorter wavelengths could double the cost, since antennas had to be built to greater precision) total system capital cost was estimated as between 6 and 10 billion dollars (1971).  Converting this cost into 2018 dollars gives a cost between 37 and 61 billion dollars.  (By comparison, the Apollo project cost around 110 billion 2018 dollars.)  But since the search for a signal may “almost certainly take years, perhaps decades and possibly centuries”, that initial investment must be backed by a long-term funding commitment to continue the search, maintain the capital equipment, and upgrade it as technology matures.  Given governments’ record in sustaining long-term efforts in projects which do not line politicians’ or donors’ pockets with taxpayer funds, such perseverance is not the way to bet.  Perhaps participants in the study should have pondered how to incorporate sufficient opportunities for graft into the project, but even the early 1970s were still an idealistic time when we didn’t yet think that way.

This study is the founding document of much of the work in the Search for Extraterrestrial Intelligence (SETI) conducted in subsequent decades.  Many researchers first realised that answering this question, “Are we alone?”, was within our technological grasp when chewing through this difficult but inspiring document.  (If you have an equation or chart phobia, it’s not for you; they figure on the majority of pages.)  The study has held up very well over the decades.  There are a number of assumptions we might wish to revise today (for example, higher frequencies may be better for interstellar communication than were assumed at the time, and spread spectrum transmissions may be more energy efficient than the extreme narrowband beacons assumed in the Cyclops study).

Despite disposing of wealth, technological capability, and computing power of which authors of the Project Cyclops report never dreamed, we only make little plans today.  Most readers of this post, in their lifetimes, have experienced the expansion of their access to knowledge in the transition from being isolated to gaining connectivity to a global, high-bandwidth network.  Imagine what it means to make the step from being confined to our single planet of origin to being plugged in to the Galactic Web, exchanging what we’ve learned with a multitude of others looking at things from entirely different perspectives.  Heck, you could retire the entire capital and operating cost of Project Cyclops in the first three years just from advertising revenue on frobdob videos!  (Did I mention they have very large eyes which are almost all pupil?  Never mind the tentacles.)

Oliver, Bernard M., John Billingham, et al.  Project Cyclops [PDF].  Stanford, CA: Stanford/NASA Ames Research Center, 1971.  NASA-CR-114445 N73-18822.

This document has been subjected to intense scrutiny over the years.  The SETI League maintains a comprehensive errata list for the publication.

Here is a recent conversation among SETI researchers on the state of the art and future prospects for SETI with ground-based telescopes.

This is a two part lecture on the philosophy of the existence and search for extraterrestrial beings from antiquity to the present day.


This Week’s Book Review – Ignition!: An Informal History of Liquid Propellants

I write a weekly book review for the Daily News of Galveston County. (It is not the biggest daily newspaper in Texas, but it is the oldest.) My review normally appears Wednesdays. When it appears, I post the review here on the following Sunday.

Seawriter

Book Review

‘Ignition!’ explores the ‘golden age’ of rocketry

By MARK LARDAS

May 22, 2018

“Ignition!: An Informal History of Liquid Propellants,” by John D. Clark, Rutgers University Press Classics, 2018, 302 pages, $24.95

Today, rocket science commonly refers to anything dealing with space. Originally, it meant rocket design, especially fuel development.

“Ignition!: An Informal History of Liquid Propellants,” by John D. Clark, harks back to those days. While informal, it is a comprehensive account of rocket fuel development.

In “Ignition!” Clark reveals what went on behind the scenes in the early days of rocketry. He was the perfect man to do so. A pioneering rocket scientist and an active chemist from the early 1930s, he was one of the leading developers of liquid rocket fuels between 1949 and 1970. A talented writer (he was publishing science fiction in 1930), he knew all the players, inside and outside the United States.

Clark shows that what makes rocket science challenging is not intellectual difficulty. It is that rocket fuels are very finicky. Do anything wrong and the rocket does not go whoosh. It goes boom.

Clark shows all the ways they go boom. He explains what makes a good rocket fuel, shows readers what works, and shows what does not work and why. He starts with Tsiolkovsky in the late 1800s and ends with the Saturn V and the moon missions in the late 1960s.

His focus is on the golden age of rocket fuel development, from 1946 through 1961. Those years saw development of the liquid fuels still used in rockets today, along with a lot of dead ends. Clark spends chapters on the dead ends, such as peroxide fuels and monopropellants. Frequently those chapters are the book’s most entertaining.

There is chemistry involved, including formidable chemical equations. Readers unfamiliar with chemistry should skip them; they are for the chemistry geeks reading the book. Between the equations is what makes the book entertaining: the technician attacked by bats after a fuel test, the propellant developer who took a year off to develop hula hoops, and many similar stories.

“Ignition!,” originally written in 1972, is back in print after a long hiatus. A classic book, it tells a rollicking story of an era when space was the frontier. An informative history, it reads like an adventure story.

Mark Lardas, an engineer, freelance writer, amateur historian, and model-maker, lives in League City. His website is marklardas.com.


Where Good Ideas Don’t Come From

Following up on Civil Westman’s excellent review of Simon Winchester’s book, here’s my Amazon review:

This is a great book. Simon Winchester has a wonderful writing style and is a skillful storyteller. His story is an important one because of the role precision instruments play in our society. That said, there are a fair number of small errors and one more significant one in the book.

He states that, “Jefferson, while U.S. minister to France…told his superiors in Washington”. Jefferson was minister in France from 1785 to 1789. The act creating a capital district along the Potomac River was signed in July 1790. There was no Washington, D.C. when Jefferson was minister in France.

There are lots of small problems in chapter 8, which discusses GPS. I’m the coauthor of “GPS Declassified: From Smart Bombs to Smartphones,” which is included in his bibliography. https://www.amazon.com/GPS-Declassified-Smart-Bombs-Smartphones/dp/1612344089

He states that Roger Easton, my dad, came up with the idea of using clocks in satellites for a passive ranging navigation system in 1973. In reality, the idea came from a conversation with Dr. Arnold Shostak, father of SETI researcher Seth Shostak, in 1964. He states, “Roger Easton, who at the time worked for the U.S. Navy’s then-named Space Applications Branch in the Rio Grande Valley of South Texas.” Dad worked his whole Naval Research Lab career at its main office in Washington, D.C. The South Texas fence was a separate radar fence that was intended to be an adjunct to the primary Space fence so that an object’s orbit could be calculated on a single penetration of the two fences. Dad was there in September 1964 trying to synchronize the clocks in the two stations in this fence. He realized that a clock in a satellite could do this and later saw that it could also be used for navigation (following up on his April 1964 conversation with Dr. Shostak referenced above).

Winchester then describes the car experiment which showed that passive ranging with clocks would work, except he places it in Texas whereas it was in D.C.: “The other he kept at the naval station in which he was working in South Texas. While the observers were watching the oscilloscope screens he had hooked up in the lab, he ordered Maloof to drive the car as far and fast as possible down a road, Texas Route 295, which was unfinished at the time and thus empty.”

He’s describing the experiment which occurred on October 16, 1964. See page 9 from the “NRL GPS Bibliography – An Annotated Bibliography of the Origin and Development of the Global Position System at the Naval Research Laboratory” which states, “Easton’s passive ranging concept is demonstrated using a side-tone ranging receiver, modified from the South Texas experiment and placed at NRL, and a transmitter in Matt Maloof’s convertible as he drives it down the I-295 interstate. The road is finished but not yet opened to the public. Two Bureau of Naval Weapons representatives, John Yob and Chester Kleczek, observe the experiment.”

I-295 is the highway next to NRL’s headquarters in Washington, D.C. It’s not a state road in Texas. There is a reference to a South Texas experiment in the above account, but the test with Maloof’s convertible was in the D.C. area. In a 1996 interview with my Dad, they refer to the Wilson bridge across the Potomac in relation to the experiment. Wikipedia states that, “The first 7.8 miles (12.6 km) of the route opened on August 7, 1964 when the connecting segment of the Capital Beltway opened.” This fits in with an October 16th test.

The most significant error is his assertion that Reagan opened GPS to civilian uses after the shooting down of KAL 007 in 1983. This mistake is common in the literature. However, GPS was a dual-use military-civilian system from day 1. The NAVSTAR Global Positioning System Program Management Plan of 15 July 1974 can be found under resources on my website (gpsdeclassified). On page 2-9, it states that “The C/A Signal will serve as an aid to the acquisition of the P Signal, and will also provide a navigation signal in the clear to both the military and civil user.” In 1981, Texas Instruments was already making the TI-4100 NAVSTAR Navigator GPS Receiver for commercial users. Thus, civilian use was built into GPS in 1974 and a civilian receiver was being sold in 1981. I highly recommend this book in spite of these minor errors.

Note that Amazon does not allow external links.  I could provide them if anyone’s interested.  One of the problems with Winchester’s account of GPS is that he uses Steven Johnson’s book Where Good Ideas Come From as a source.  Johnson has a poor grasp of the history of GPS.  Here’s a terrible TED talk he gave on the subject:

Based on my review, what are some of the mistakes he makes in this talk?


The Pleasure of Finding Things Out

That is the title of an anthology of short works by Richard Feynman. However, this is not a book review; it’s an oblique reference to a recent Scott Adams Periscope (starting at about 11:00 at this link) in which he explained how people in engineering and technical fields* differ from those in other fields, specifically journalists. His jumping-off point for the discussion is a small study of journalists that, as he points out, needs to be replicated before it’s taken seriously. But since it confirms his bias and gratifies my ego, we’re both happy to accept the result as valid.

The claim is that people in technical fields have two reactions to being told they are wrong:

  1. Annoyed to be wrong.
  2. Excited to be wrong.

Everyone shares the first response. No one likes to be shown that he’s wrong. According to Adams, journalists respond with anger when this happens. The second response is not common to all. It’s exciting to be wrong because it is a learning opportunity. If you were wrong about something and someone sets you right, you know more than you did before. Adams says he’s seen engineers “change their minds in real time” when presented with contrary evidence, which he finds extraordinary.

It’s hard for those who don’t frequently experience being wrong to understand the thrill. Nature tells me I’m wrong on a daily basis. If not nature, then colleagues often straighten me out. Even though the following incident happened over ten years ago, the memory is still vivid. A colleague and I were discussing the causes of turbulence in the lower atmosphere outside the office of a third. This third person had plenty of experience with turbulence from years of hang gliding. Of course, he was right and we were wrong. The conversation went something like this:

him: If you guys are done BS-ing about this, do you want to hear the real reasons?

us: Umm, ok.

him: [detailed discussion of topographic and thermal effects]

us: Oh…

I simultaneously hate and love being wrong.


*Adams includes physicians and lawyers in this group: people who use rational thinking in their work.

Book Review: Real Magic

[Image: “Real Magic” by Dean Radin]

From its beginnings in the 19th century as “psychical research”, there has always been something dodgy and disreputable about parapsychology: the scientific study of phenomena, frequently reported across all human cultures and history, such as clairvoyance, precognition, telepathy, communication with the dead or non-material beings, and psychokinesis (mental influence on physical processes). All of these disparate phenomena have in common that there is no known physical theory which can explain how they might work. In the 19th century, science was much more willing to proceed from observations and evidence, then try to study them under controlled conditions, and finally propose and test theories about how they might work. Today, many scientists are inclined to put theory first, rejecting any evidence of phenomena for which no theory exists to explain it.

In such an intellectual environment, those who study such things, now called parapsychologists, have been, for the most part, very modest in their claims, careful to distinguish their laboratory investigations, mostly involving ordinary subjects, from extravagant reports of shamans and psychics, whether contemporary or historical, and scrupulous in the design and statistical analysis of their experiments. One leader in the field is Dean Radin, author of the present book, and four times president of the Parapsychological Association, a professional society which is an affiliate of the American Association for the Advancement of Science. Dr. Radin is chief scientist at the Institute of Noetic Sciences in Petaluma, California, where he pursues laboratory research in parapsychology. In his previous books, including Entangled Minds, he presents the evidence for various forms of human perception which seem to defy conventional explanation. He refrains from suggesting mechanisms or concluding whether what is measured is causation or correlation. Rather, he argues that the body of accumulated evidence from his work and that of others, in recent experiments conducted under the strictest protocols to eliminate possible fraud, post-selection of data, and with blinding and statistical rigour which often exceed those of clinical trials of pharmaceuticals, provides evidence that “something is going on” which we don’t understand that would be considered discovery of a new phenomenon if it originated in a “hard science” field such as particle physics.

Here, Radin argues that the accumulated evidence for the phenomena parapsychologists have been studying in the laboratory for decades is so persuasive to all except sceptics whom no amount of evidence would persuade, that it is time for parapsychologists and those interested in their work to admit that what they’re really studying is magic. “Not the fictional magic of Harry Potter, the feigned magic of Harry Houdini, or the fraudulent magic of con artists. Not blue lightning bolts springing from the fingertips, aerial combat on broomsticks, sleight-of-hand tricks, or any of the other elaborations of artistic license and special effects.” Instead, real magic, as understood for millennia, which he divides into three main categories:

  • Force of will: mental influence on the physical world, traditionally associated with spell-casting and other forms of “mind over matter”.
  • Divination: perceiving objects or events distant in time and space, traditionally involving such practices as reading the Tarot or projecting consciousness to other places.
  • Theurgy: communicating with non-material consciousness: mediums channelling spirits or communicating with the dead, summoning demons.

As Radin describes, it was only after years of work in parapsychology that he finally figured out why it is that, while according to a 2005 Gallup poll, 75% of people in the United States believe in one or more phenomena considered “paranormal”, only around 0.001% of scientists are engaged in studying these experiences. What’s so frightening, distasteful, or disreputable about them? It’s because they all involve some kind of direct interaction between human consciousness and the objective, material world or, in other words, magic. Scientists are uncomfortable enough with consciousness as it is: they don’t have any idea how it emerges from what, in their reductionist models, is a computer made of meat, to the extent that some scientists deny the existence of consciousness entirely and dismiss it as a delusion. (Indeed, studying the origin of consciousness is almost as disreputable in academia as parapsychology.)

But if we must admit the existence of this mysterious thing called consciousness, along with other messy concepts such as free will, at least we must keep it confined within the skull: not roaming around and directly perceiving things far away or in the future, affecting physical events, or existing independent of brains. That would just be too weird.

And yet most religions, from those of traditional societies to the most widely practiced today, include descriptions of events and incorporate practices which are explicitly magical according to Radin’s definition. Paragraphs 2115–2117 of the Catechism of the Roman Catholic Church begin by stating that “God can reveal the future to his prophets or to other saints.” and then go on to prohibit “Consulting horoscopes, astrology, palm reading, interpretation of omens and lots, the phenomena of clairvoyance, and recourse to mediums…”. But if these things did not exist, or did not work, then why would there be a need to forbid them? Perhaps it’s because, despite religion’s incorporating magic into its belief system and practices, it also wishes to enforce a monopoly on the use of magic among its believers—in Radin’s words, “no magic for you!”

In fact, as stated at the beginning of chapter 4, “Magic is to religion as technology is to science.” Just as science provides an understanding of the material world which technology applies in order to accomplish goals, religion provides a model of the spiritual world which magic provides the means to employ. From antiquity to the present day, religion and magic have been closely associated with one another, and many religions have restricted knowledge of their magical components and practices to insiders and banned others knowing or employing them. Radin surveys this long history and provides a look at contemporary, non-religious, practice of the three categories of real magic.

He then turns to what is, in my estimation, the most interesting and important part of the book: the scientific evidence for the existence of real magic. A variety of laboratory experiments, many very recent and with careful design and controls, illustrate the three categories and explore subtle aspects of their behaviour. For example, when people precognitively sense events in the future, do they sense a certain event which is sure to happen, or the most probable event whose occurrence might be averted through the action of free will? How on Earth would you design an experiment to test that? It’s extremely clever, and the results are interesting and have deep implications.

If ordinary people can demonstrate these seemingly magical powers in the laboratory (albeit with small, yet statistically highly significant effect sizes), are there some people whose powers are much greater? That is the case for most human talents, whether athletic, artistic, or intellectual; one suspects it might be so here. Historical and contemporary evidence for “Merlin-class magicians” is reviewed, not as proof for the existence of real magic, but as what might be expected if it did exist.
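
For readers wondering how a “small, yet statistically highly significant” effect can be both things at once, the answer is sample size. A hedged illustration (the 51% hit rate and the trial counts are invented for the example, not Radin’s data): a forced-choice guessing task whose chance rate is 50%.

    from math import erfc, sqrt

    def one_sided_p(hit_rate, n_trials, chance=0.5):
        """Normal approximation to a one-sided binomial test."""
        se = sqrt(chance * (1 - chance) / n_trials)  # standard error of the observed hit rate
        z = (hit_rate - chance) / se
        return z, 0.5 * erfc(z / sqrt(2))

    for n in (100, 10_000, 1_000_000):
        z, p = one_sided_p(0.51, n)                  # the same 1% edge over chance every time
        print(f"n = {n:>9,}:  z = {z:5.2f},  p ≈ {p:.1e}")

The effect size never changes; only the number of trials does, and with enough of them even a one-percent edge becomes overwhelmingly unlikely to be a chance fluctuation.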

What is science to make of all of this? Mainstream science, if it mentions consciousness at all, usually considers it an emergent phenomenon at the tip of a pyramid of more fundamental sciences such as biology, chemistry, and physics. But what if we’ve got it wrong, and consciousness is not at the top but the bottom: ultimately everything emerges from a universal consciousness of which our individual consciousness is but a part, and of which all parts are interconnected? These are precisely the tenets of a multitude of esoteric traditions developed independently by cultures all around the world and over millennia, all of whom incorporated some form of magic into their belief systems. Maybe, as evidence for real magic emerges from the laboratory, we’ll conclude they were on to something.

This is an excellent look at the deep connections between traditional beliefs in magic and modern experiments which suggest those beliefs, however much they appear to contradict dogma, may be grounded in reality. Readers who are unacquainted with modern parapsychological research and the evidence it has produced probably shouldn’t start here, but rather with the author’s earlier Entangled Minds, as it provides detailed information about the experiments, results, and responses to criticism of them which are largely assumed as the foundation for the arguments here.

Radin, Dean. Real Magic. New York: Harmony Books, 2018. ISBN 978-1-5247-5882-0.

Here is a one hour interview with the author about the book and the topics discussed therein.


Wikipedia GPS entry

I resolved to take a hiatus from GPS, lest I bore all of you to death, but the Winchester book and John W’s post about Einstein (yes, blame it on others) have caused me to break my resolution.  John referenced the Wikipedia entry on GPS.  It states:

The GPS project was launched by the U.S. Department of Defense in 1973 for use by the United States military and became fully operational in 1995. It was allowed for civilian use in the 1980s.

This is incorrect.  GPS was dual military-civilian use from day 1.  I don’t think I’ve discussed this on Ratburger but my memory’s not what it used to be.  Forgive me if I’m repeating myself.  On my website we have the NAVSTAR GPS Program Management Plan, July 15, 1974.  Look at 2-9 (pg 30 of pdf):

The C/A Signal will serve as an aid to the acquisition of the P Signal, and will also provide a navigation signal in the clear to both the military and civil user.

Contrary to a myth that’s constantly repeated by journalists, GPS was a dual military-civilian use system from day 1.  I am working on an article about this which I hope to get published in the next couple of months.  And that’s the way it is.
