Catholic Distress

Since the Church of Rome is America’s largest faith group, everyone ought to have a little understanding about the important parts of the scandals that are rocking the Catholic Church. This is a big deal that will affect most of the culture wars and will spill over into politics. Of course, Big Media won’t cover any part of this except the parts that affect politics, and they can be relied on to bury the parts that embarrass the Left. So, here is a long post by a schismatic Lutheran to explain some of the distress in the Catholic Church.

The U.S. Conference of Catholic Bishops scheduled a conference last November, intended to get serious about setting up a monitoring and checks-and-balances system to address the sexual scandals in their own ranks. The lay Catholics, with the whole world watching, had been catching on that there were priests who had committed sexual sins and who had disappeared with some vague words about penance. Then it started coming out that bad priests had simply been moved around, in most cases with their new dioceses unaware of their sexually sinful past. It turned out that this had happened in an astonishingly large number of cases, which became clear when the Pennsylvania Attorney General released a long and damning report on an investigation into sexual sins by priests. What really frosted the lay Catholics was that bishops who had preached about the need for openness and clarity and penance and oversight and confession and such turned out to be the men who had deliberately hidden the bad cases, covered for them, and in some cases put sexual predator priests into positions where they could repeat their bad behaviors.

If you are a Catholic who has been paying attention to this stuff, then scroll down to the heading called GetReligion. Most of the next couple of pages is background info.

American bishops put on hold

The USCCB assembly met two months ago with the expectation that they were going to vote on two action items: standards of accountability for bishops and a special commission for receiving complaints against bishops. These were part of several initiatives intended to work out a process for improved monitoring and oversight on matters of ecclesial discipline, to make sure that penances were commensurate with the infraction, that real crimes were promptly reported to police, that violators who were likely to repeat their offense did not get placed into circumstances with future temptations, and that priests so placed would be subject to follow-up counseling and monitoring.

But an odd thing happened when the conference opened. The first order of business was a surprise letter from Pope Francis, read aloud to the assembly, telling them to take no action on the topic. Pope Francis, you see, is planning a global conference in February to take up such issues with all the world’s Catholic bishops. It would not do for the Americans to get out in front.

A letter that had gone from the Vatican to the USCCB President, Cardinal Daniel DiNardo, was recently leaked to the Associated Press. It reveals that DiNardo had been slow to provide information to the Vatican, and this made the Vatican position easier to justify. It makes sense that the Vatican would want more time to consider what the Americans were up to, since it could have ripple effects for ecclesial canon law throughout the Catholic world.

Scandal of 2002 – 2005

In the late 1980s there were a couple of scandals involving sexual sins by predator priests. A couple more were made public in the early 1990s, generating some media buzz. It was a great excuse for some Catholic bashing and some Christian bashing. These scandals got mashed up with other scandals, such as revelations about poor treatment of indigenous peoples by Catholic missions, and in 2001 Pope John Paul II made a general apology for past bad behaviors covering several of them.

Then a blockbuster scandal became a media sensation following an exposé by the Boston Globe in 2002. They produced a long special report on over a hundred victims of one bad priest, plus other victims of sexual misconduct by other Boston area priests. They followed that up with a survey of sexual scandals involving Catholic priests from all over the world. It sparked a media feeding frenzy that kept the Catholic Church in the spotlight for three years.

After three years of solid media attention, the issue went away. It just vanished. It was only a few of us highly-engaged culture warriors who figured out why. Stay tuned.

Pope Francis, friend of homosexuals

Pope Benedict XVI abdicated his position in early 2013. Cardinal Bergoglio of Argentina was elevated to the “Throne of St. Peter” that year. He has given conservatives indigestion ever since. Of particular interest to this story is his friendly and accommodating posture towards homosexuality. You just knew we were going to get around to homosexual priests, didn’t you?

Pope Francis is famously squishy when it comes to traditional doctrines on all sorts of social matters. This has won him favorable treatment by mass media (cover of Rolling Stone, Time “Person of the Year 2013,” etc.). One of the first big moments of his papacy involved a throw-away line delivered to a clutch of reporters on his plane as he returned to the Vatican from World Catholic Youth Day 2013. Pope Francis said (speaking about priests) “If someone is gay and seeks the Lord and has good will, who am I to judge that person?” The media exploded with awful distortions.

There were some really unfortunate media accounts predicting that Pope Francis would reverse thousands of years of doctrinal positions on homosexuality and other sexual sins. Pope Francis coyly rebuked some of the excess. Subsequently he has done some things that made some observers call him a “homophile.” For example, in 2017 he appointed Archbishop Vincenzo Paglia as President of the Pontifical Pope John Paul II Institute for Studies on Marriage and Family. That is not so much a scandal as an opportunity for more whispering.

Scandals of 2018

In April and October last year, Pope Francis did some damage control involving sex scandals in Chile. He de-frocked (“laicized”) four priests. He spoke with a homosexual man who had been a victim of a predator priest. Pope Francis said some happy-talk things to soothe the man, and this got reported and caused a little dust-up within the ranks of conservative Catholic bloggers. Since that is not America, you probably never heard about it. I only bring it up to note that the Catholic Church has problems with homosexual priests all over the world.

In America, two separate sex scandals rocked the Catholic summer.

Cardinal Theodore McCarrick, age 87, was removed from priestly office last June by Pope Francis because of charges of sexual misconduct. A priest who was on trial for sexual misconduct had defended himself by saying that he had been introduced to sex and to sexually predatory ways by McCarrick back in the early 1970s, when McCarrick was serving as secretary to Cardinal Cooke and the victim was a boy of 17. Then a couple of priests said that other people had accused McCarrick of sexual misconduct, and that they had been kept silent with settlement awards.

Then it turned out that McCarrick had long been known to be a sexual predator. He had been pressing seminarians for sexual favors for decades. Then it came out that Pope Benedict XVI had found out about McCarrick and had instructed him to withdraw from priestly duties, from involvement with seminaries, and from public appearances. But McCarrick had ignored those instructions and was later “rehabilitated” by Pope Francis.

This really blew up in mid-July when the New York Times dug into McCarrick:

In 2000, Pope John Paul II promoted Archbishop McCarrick to lead the Archdiocese of Washington D.C., one of the most prestigious posts in the Catholic Church in America. He was elevated to cardinal three months later.

At least one priest warned the Vatican against the appointment. The Rev. Boniface Ramsey said that when he was on the faculty at the Immaculate Conception Seminary at Seton Hall University in New Jersey from 1986 to 1996, he was told by seminarians about Archbishop McCarrick’s sexual abuse at the beach house. When Archbishop McCarrick was appointed to Washington, Father Ramsey spoke by phone with the pope’s representative in the nation’s capital, Archbishop Gabriel Montalvo, the papal nuncio, and at his encouragement sent a letter to the Vatican about Archbishop McCarrick’s history.

By August, most Catholic-watching attention had been diverted to a different scandal, which erupted in late July.

The Attorney General of Pennsylvania released a report on an investigation into sexual crimes by Catholic priests. The numbers were staggering. Initially it sounded like sexual mayhem by a majority of priests. Then things quieted down when it turned out that the report summarized prosecutions, settlements, and “credible allegations” over a period of decades. The press release from the Attorney General gave this summary of findings from the grand jury:

  • 301 Catholic priests identified as predator priests who sexually abused children while serving in active ministry in the church.

  • Detailed accounts of over 1,000 children victimized sexually by predator priests, with the grand jury noting it believed the real number of victims was in the “thousands.”

  • Senior church officials, including bishops, monsignors and others, knew about the abuse committed by priests, but routinely covered it up to avoid scandal, criminal charges against priests, and monetary damages to the dioceses.

  • Priests committed acts of sexual abuse upon children, and were routinely shuttled to other parishes – while parishioners were left unaware of sexual predators in their midst.

This truly ignited a media feeding frenzy. It also prompted other states to launch investigations of their own. People came out from all over with their own stories of abuse. Then, just as quickly as it had erupted and claimed all the media air, this scandal dropped out of the public conversation, almost as abruptly as the previous “pedophile priests” scandal of 2002-2005.

Enemy of the People

First, it turned out that only about 64 of the three hundred priests are still alive. Most of these cases are really old. Then it came out that the overwhelming majority of the victims, all but a few dozen, were males ages 13 to 18 at the time of the abuse.

Yes, this is similar to the previous scandal. When they learned it was not “pedophilia,” but homosexuals preying on minor teen boys, Leftist mass media lost interest.

Leftist mass media cried “pedophile priests! Pedophile priests!” until traditionalist Catholics started to get some traction with their push-back, and then media dropped the story before they had to run any corrections.

It was just like the Cardinal McCarrick story. Since the predatory misconduct was all homosexual, it did not help Leftist narratives to report on the scandal. After some histrionic Catholic bashing, the story quickly dropped out of sight.

Of course, by late August, the allegations against Brett Kavanaugh started taking all the oxygen out of the journalistic world. Between Kavanaugh and the mid-term election campaigns, it was easy for news media to drop the Catholic scandals.

Bad Boy Bishops

The really big scandal in all of this was the way Catholic bishops had kept it all under wraps. They knew about homosexual predator priests and covered up for them, hid them, hired lawyers to quickly bring aggrieved victims into settlements that featured non-disclosure agreements, moved bad priests around, and hid the records. They were afraid such matters would damage the church, but in the end what they did was far more damaging.

Pope Francis apologizes

From Wikipedia:

On 20 August 2018, Pope Francis apologized in a 2,000 word letter [addressing the Pennsylvania] grand jury report confirming that over 1,000 children were sexually abused by “predator priests” in Pennsylvania for decades, often covered up by the Church.[333]

“With shame and repentance, we acknowledge as an ecclesial community that we were not where we should have been, that we did not act in a timely manner, realizing the magnitude and the gravity of the damage done to so many lives … We showed no care for the little ones; we abandoned them … The heart-wrenching pain of these victims, which cries out to heaven, was long ignored, kept quiet or silenced.”

The Pope said the church was developing a “zero tolerance” policy on abuse (which he called “crimes”) and cover-ups. Vatican spokesman Greg Burke emphasized that the letter was not about incidents in a specific geographic area but relevant worldwide.[334]

 

Archbishop Carlo Maria Viganò

Archbishop Viganò had served for ten years as the head of personnel for the Vatican when he was elevated to Secretary General of Vatican City. He made a name for himself by cleaning up finances and installing better procedures for checking and auditing. In a really mysterious episode, a letter he had written about apparent corruption was leaked to the press, prompting a public shaming in which three higher-ups in the Curia declared him embarrassingly wrong. Vatican watchers took different sides, and some said it was all over personality clashes. Then the higher-ups prevailed and Pope Benedict XVI assigned Archbishop Viganò as the Apostolic Nuncio to the United States in 2011.

Viganò retired in 2016 at the age of 75 and returned to Italy.

On August 25, a damning letter from Archbishop Viganò appeared. It broke in Italy. It also hit the English-speaking Catholic blogosphere, because Archbishop Viganò had copied his letter to a little pro-life blog and aggregator in Toronto. I posted about LifeSiteNews back in the fall, mostly about how they were being harmed by Facebook and other Silicon Valley tech giants. Here is a part of Viganò’s letter:

To dispel suspicions insinuated in several recent articles, I will immediately say that the Apostolic Nuncios in the United States, Gabriel Montalvo and Pietro Sambi, both prematurely deceased, did not fail to inform the Holy See immediately, as soon as they learned of Archbishop McCarrick’s gravely immoral behavior with seminarians and priests. Indeed, according to what Nuncio Pietro Sambi wrote, Father Boniface Ramsey, O.P.’s letter, dated November 22, 2000, was written at the request of the late Nuncio Montalvo. In the letter, Father Ramsey, who had been a professor at the diocesan seminary in Newark from the end of the ’80s until 1996, affirms that there was a recurring rumor in the seminary that the Archbishop “shared his bed with seminarians,” inviting five at a time to spend the weekend with him at his beach house. And he added that he knew a certain number of seminarians, some of whom were later ordained priests for the Archdiocese of Newark, who had been invited to this beach house and had shared a bed with the Archbishop.

The office that I held at the time was not informed of any measure taken by the Holy See after those charges were brought by Nuncio Montalvo at the end of 2000, when Cardinal Angelo Sodano was Secretary of State.

Likewise, Nuncio Sambi transmitted to the Cardinal Secretary of State, Tarcisio Bertone, an Indictment Memorandum against McCarrick by the priest Gregory Littleton of the diocese of Charlotte, who was reduced to the lay state for a violation of minors, together with two documents from the same Littleton, in which he recounted his tragic story of sexual abuse by the then-Archbishop of Newark and several other priests and seminarians. The Nuncio added that Littleton had already forwarded his Memorandum to about twenty people, including civil and ecclesiastical judicial authorities, police and lawyers, in June 2006, and that it was therefore very likely that the news would soon be made public. He therefore called for a prompt intervention by the Holy See.

In writing up a memo[1] on these documents that were entrusted to me, as Delegate for Pontifical Representations, on December 6, 2006, I wrote to my superiors, Cardinal Tarcisio Bertone and the Substitute Leonardo Sandri, that the facts attributed to McCarrick by Littleton were of such gravity and vileness as to provoke bewilderment, a sense of disgust, deep sorrow and bitterness in the reader, and that they constituted the crimes of seducing, requesting depraved acts of seminarians and priests, repeatedly and simultaneously with several people, derision of a young seminarian who tried to resist the Archbishop’s seductions in the presence of two other priests, absolution of the accomplices in these depraved acts, sacrilegious celebration of the Eucharist with the same priests after committing such acts.

Leftist Catholic defenders of Pope Francis and the Lavender Mafia went into full character assassination mode. Archbishop Viganò went into hiding.

Viganò has since released two additional letters, from his hiding place, defending himself and describing correspondence that he says vindicates him. The letters he references have not been produced.

GetReligion

In the case of the current Catholic disgrace, Leftist mass media is all on one side. They want to trash the Catholic Church as home to “pedophile priests” while hiding the fact that the scandal is actually a homosexual scandal. Mass media love Pope Francis and act to trash anyone who says anything that makes Pope Francis look bad.

On the other side are intrepid traditionalist Catholic bloggers. They are outgunned, outmanned, overwhelmed, demoted, deplatformed, censored, slandered, and libeled. It takes real determination to cut through the noise with their message.

I have found that the best way to follow any major story that centers on religion is to monitor the media critics at GetReligion.org. These are mostly “pro-life Democrats,” and they are Christian journalists who have focused on religion stories for many years. Their focus has been to document the journalistic failures in mass media news reporting that can be attributed to journalists simply not having the background, or not understanding the jargon, or not caring about whether they get the details right, when it comes to religious issues. At GetReligion.org I frequently see not only where and how the journalists failed, but I also get correct information and links to the best-quality reporting on any religious issue.

Back in the spring when the Cardinal McCarrick story first came out, there was a really interesting post at GetReligion by Julia Duin. She had wanted to write about McCarrick the sexual abuser for many years, but could not get any sources willing to go on record:

I ran into similar blockages everywhere. There were priests and laity alike for whom McCarrick’s predilections were an open secret, but no one wanted to go after him. I heard about various settlements but couldn’t confirm the details. No newspaper can publish such explosive accusations with only anonymous sources and no court documents to back it up.

Various Catholic friends advised me to let it go. “What difference does it make now?” they’d say. “McCarrick is retired.” The archdiocese was represented by a powerful law firm. Did I want to take that on?

After I was laid off in 2010, I sent copies of my files to another reporter on the East Coast so he could have a go at cracking this story. He too ran into the same barriers: People who refused to go on the record and there was always the threat of a lawsuit should he get one detail wrong.

One thing I learned from GetReligion.org was that Theodore McCarrick had a golden rolodex, and had been a very large fundraiser for all sorts of Catholic projects in a variety of locations involving lots of wealthy Catholic movers and shakers and touching on over a dozen major Catholic missions/charities.

The lead guy at GetReligion is Terry Mattingly, who writes a weekly newspaper column for the Universal Uclick Syndicate. He also does a podcast called “Crossroads.” He had a really interesting discussion about Pope Francis’s upcoming big international assembly of bishops. Mattingly makes a lot of sense most of the time.

What to expect

In the podcast Mattingly discusses the recent resignation of top Vatican spokesman Greg Burke with Todd Wilkins, the “Crossroads” M.C. (They concluded that Burke comes from a news background rather than marketing, and that he does not want to be unable to return to journalism, which he probably would be if he stayed in his current position through the upcoming assembly.)

Mattingly said to expect that the assembly is very likely to focus on the grave sin of sexual abuse of children. There are a few dozen actual cases of children under age 12 who were abused, and there are a few cases in which girls were abused. Expect the whole assembly to focus on those cases. As much as possible, the entire proceedings will be engineered to avoid the word “homosexual.” The sex of victims will seldom be mentioned. There will be lots of room for journalists to cover the event without ever noting that the core of these scandals is homosexuality in the priesthood. Unchaste acts between consenting adults will not be mentioned. And you won’t find them saying out loud that the age of majority in Catholic canon law is sixteen, not eighteen.

In short, expect a whitewashing. Surprised? Probably not; we have all grown quite cynical, haven’t we?


You shall have no other gods

The world takes offense at God.   The Pagans hated the idea that their gods have no real power.   The Muslims and the Mormons are counterfeits of God.   Buddhists say that “God” is not a real person, but a spiritual amalgam.   Atheists are angry at the God they don’t believe in.

Communists, being Atheists, are angry at God.   So the latest outrage from China comes as no surprise.   The First Commandment is outlawed.   It started recently in just one church that we know of, but the Party statements indicate that this is a nationwide China policy:

One of the officials explained, however, that Chinese President Xi Jinping “opposes the statement,” referring to the first commandment.

“Who dares not to cooperate? If anyone doesn’t agree, they are fighting against the country,” the official warned. “This is a national policy. You should have a clear understanding of the situation. Don’t go against the government.”

The church was forced to take down the Ten Commandments sign that day.


“Tele-Monster” update. (the saga continues….)

It almost makes me want to terminate my land line.

There is one persistent “tele-monster” that has called 15 times in the past two weeks. Last night I had enough. Granted, NoMoRobo does detect these scam calls and terminates them as soon as it can get the caller ID. Unfortunately, the caller ID is transmitted after the first ring, so I must endure many single rings from my land line phones. Tired of this, I set up my phone system through my provider so that when another call came in (and I knew it would), it would forward that call, and only that call, back to its point of origin.



Apple’s Tim Cook on Deplatforming

Listen to the NPCs in the audience clap like trained seals as the message of censorship is delivered.  “Because it’s the right thing to do. … our values drive our curation decisions.”



Defense Distributed Declaration

In the ongoing litigation between Defense Distributed and state attorneys general over the distribution of three-dimensional models of firearms and components thereof over the Internet (which has been approved by all federal regulatory agencies), I was asked to submit an affidavit in support of the Defense Distributed case.  I have previously described this case here in my post “Code is Speech”.

Here is what I drafted which, after consultation with others whose efforts are much appreciated but will remain unnamed, will be submitted into the public record.  This is exactly what was submitted, less my signature: why make it easy for identity thieves?  This was submitted, as is done, in gnarly monospaced text with no mark-up.  If it shows up in your browser with awkward line breaks, try making the browser window wider and it should get better.   If you’re on a tablet or mobile phone, try it when you get back to the desktop.

The opening and closing paragraphs are as prescribed in 28 U.S.C. § 1746 for an “Unsworn declaration under penalty of perjury” by a non-U.S. person.  This is also called a “self-authenticating affidavit”.

This may seem lukewarm to those accustomed to my usual firebrand rhetoric.  In this declaration, I only wanted to state things which I knew or believed based upon my own personal experience.  Consequently, I eschewed discussing the state of the art in additive manufacturing (I have never actually seen nor used an additive manufacturing machine) or the limitations of present-day machines (all of that may, and probably will, change in a few years).

Attorneys for Defense Distributed expect to lose in the original district court litigation and the Ninth Circuit, but the purpose of this declaration is to be used in higher court appeals, where cases receive less ideological and more fact-based scrutiny.

Although I really had better things to do this week, I was glad to take the time to support the Defense Distributed case.  Even if you don’t care about guns, the attorneys general in this case argue that computer-mediated speech, the transmission of files from computer to computer, is not speech protected by the First Amendment.  This is arguably the greatest assault on free speech since the adoption of that amendment.

I am privileged to have the opportunity to oppose it.

(This declaration is a public document which will become part of the record of the trial and eventual appeals.  I am disclosing nothing here which will not be available to those following the litigation.)

                DECLARATION OF JOHN WALKER

I, John Walker, pursuant to 28 U.S.C. § 1746 hereby declare and
say as follows:

    1.  I was a co-founder of Autodesk, Inc. (ADSK:NASDAQ),
        developer of the AutoCAD® computer-aided design
        software.  I was president, chairman, and chief
        executive officer from the incorporation of the company
        in April 1982 until November 1986, more than a year
        after its initial public stock offering in June 1985. I
        continued to serve as chairman of the board of directors
        until April 1988, after which I concentrated on software
        development.

    2.  Autodesk is the developer of the AutoCAD® software, one
        of the most widely-used computer-aided design and
        drafting software packages in the world.  AutoCAD allows
        creation of two- and three-dimensional models of designs
        and, with third-party products, their analysis and
        fabrication.

    3.  During the start-up phase of Autodesk, I was one of the
        three principal software developers of AutoCAD and wrote
        around one third of the source code of the initial
        release of the program.

    4.  Subsequently, I contributed to the development of
        three-dimensional extensions of the original AutoCAD
        drafting system, was lead developer on AutoShade[tm],
        which produced realistic renderings of three-dimensional
        models, and developed the prototype of integration of
        constructive solid geometry into AutoCAD, which was
        subsequently marketed as the AutoCAD Advanced Modeling
        Extension (AME).

    5.  I retired from Autodesk in 1994 and since have had no
        connection with the company other than as a shareholder
        with less than 5% ownership of the company's common
        stock.

    Design Versus Fabrication

    6.  From my experience at Autodesk, I became aware of the
        distinction between the design of an object and the
        fabrication of that object from the design.  For
        example, the patent drawings and written description in
        firearms patents provide sufficient information "as to
        enable any person skilled in the art to which it
        pertains, or with which it is most nearly connected, to
        make and use the same, and shall set forth the best mode
        contemplated by the inventor or joint inventor of
        carrying out the invention" [35 U.S.C. § 112 (a)].  But
        this is in no way a mechanical process.  One must
        interpret the design, choose materials suitable for each
        component, and then decide which manufacturing process
        (milling, stamping, turning, casting, etc.) is best to
        produce it, including steps such as heat-treating and
        the application of coatings.  This process is called
        "production planning", and it is a human skill that is
        required to turn a design, published in a patent
        description or elsewhere, into a physical realisation of
        the object described by that design.

    7.  A three-dimensional model of an object specifies its
        geometry but does not specify the materials from which
        it is fabricated, how the fabrication is done, or any
        special steps required (for example, annealing or other
        heat treating, coatings, etc.) before the component is
        assembled into the design.

    8.  Three-dimensional models of physical objects have many
        other applications than computer-aided manufacturing.
        Three-dimensional models are built to permit analysis of
        designs including structural strength and heat flow via
        the finite element method.  Models permit rendering of
        realistic graphic images for product visualisation,
        illustration, and the production of training and service
        documentation.  Models can be used in simulations to
        study the properties and operation of designs prior to
        physically manufacturing them. Models for finite element
        analysis have been built since the 1960s, decades before
        the first additive manufacturing machines were
        demonstrated in the 1980s.

    9.  Some three-dimensional models contain information which
        goes well beyond a geometric description of an object
        for manufacturing.  For example, it is common to produce
        "parametric" models which describe a family of objects
        which can be generated by varying a set of inputs
        ("parameters").  For example, a three-dimensional model
        of a shoe could be parameterised to generate left and
        right shoes of various sizes and widths, with
        information within the model automatically adjusting the
        dimensions of the components of the shoe accordingly.
        The model is thus not the rote expression of a
        particular manufactured object but rather a description
        of a potentially unlimited number of objects where the
        intent of the human designer, in setting the parameters,
        determines the precise geometry of an object built from
        the model.

   10.  A three-dimensional model often expresses relationships
        among components of the model which facilitate analysis
        and parametric design.  Such a model can be thought of
        like a spreadsheet, in which the value of cells are
        determined by their mathematical relationships to other
        cells, as opposed to a static table of numbers printed
        on paper.

    Additive Manufacturing ("3D Printing")

   11.  Additive manufacturing (often called, confusingly, "3D
        [for three-dimensional] printing") is a technology by
        which objects are built to the specifications of a
        three-dimensional computer model by a device which
        fabricates the object by adding material according to
        the design.  Most existing additive manufacturing
        devices can only use a single material in a production
        run, which limits the complexity of objects they can
        fabricate.

   12.  Additive manufacturing, thus, builds up a part by adding
        material, while subtractive manufacturing (for example,
        milling, turning, and drilling) starts with a block of
        solid material and cuts away until the desired part is
        left.  Many machine shops have tools of both kinds, and
        these tools may be computer controlled.

   13.  Additive manufacturing is an alternative to traditional
        kinds of manufacturing such as milling, turning, and
        cutting.  With few exceptions, any object which can be
        produced by additive manufacturing can be produced, from
        paper drawings or their electronic equivalent, with
        machine tools that date from the 19th century.  Additive
        manufacturing is simply another machine tool, and the
        choice of whether to use it or other tools is a matter
        of economics and the properties of the part being
        manufactured.

   14.  Over time, machine tools have become easier to use.  The
        introduction of computer numerical control (CNC) machine
        tools has dramatically reduced the manual labour
        required to manufacture parts from a design.  The
        computer-aided design industry, of which Autodesk is a
        part, has, over the last half-century, reduced the cost
        of going from concept to manufactured part, increasing
        the productivity and competitiveness of firms which
        adopt it and decreasing the cost of products they make.
        Additive manufacturing is one of a variety of CNC
        machine tools in use today.

   15.  It is in no sense true that additive manufacturing
        allows the production of functional objects such as
        firearms from design files without human intervention.
        Just as a human trying to fabricate a firearm from its
        description in a patent filing (available in electronic
        form, like the additive manufacturing model), one must
        choose the proper material, its treatment, and how it is
        assembled into the completed product.  Thus, an additive
        manufacturing file describing the geometry of a
        component of a firearm is no more an actual firearm than
        a patent drawing of a firearm (published worldwide in
        electronic form by the U.S. Patent and Trademark Office)
        is a firearm.

    Computer Code and Speech

   16.  Computer programs and data files are indistinguishable
        from speech.  A computer file, including a
        three-dimensional model for additive manufacturing, can
        be expressed as text which one can print in a newspaper
        or pamphlet, declaim from a soapbox, or distribute via
        other media.  It may be boring to those unacquainted
        with its idioms, but it is speech nonetheless.  There is
        no basis on which to claim that computer code is not
        subject to the same protections as verbal speech or
        printed material.

   17.  For example, the following is the definition of a unit
        cube in the STL language used to express models for
        many additive manufacturing devices.

            solid cube_corner
              facet normal 0.0 -1.0 0.0
                outer loop
                  vertex 0.0 0.0 0.0
                  vertex 1.0 0.0 0.0
                  vertex 0.0 0.0 1.0
                endloop
              endfacet
            endsolid

        This text can be written, read, and understood by a
        human familiar with the technology as well as by a
        computer.  It is entirely equivalent to a description of
        a unit cube written in English or another human
        language.  When read by a computer, it can be used for
        structural analysis, image rendering, simulation, and
        other applications as well as additive manufacturing.
        The fact that the STL language can be read by a computer
        in no way changes the fact that it is text, and thus,
        speech.

   18.  As an additional example, the following is an AutoCAD
        DXF[tm] file describing a two-dimensional line between
        the points (0, 0) and (1, 1), placed on layer 0 of a
        model.

            0
            SECTION
              2
            ENTITIES
              0
            LINE
              8
            0
             10
            0.0
            20
            0.0
            11
            1.0
            21
            1.0
              0
            ENDSEC
              0
            EOF

        Again, while perhaps not as easy to read as the STL file
        until a human has learned the structure of the file,
        this is clearly text, and thus speech.

   19.  It is common in computer programming and computer-aided
        design to consider computer code and data files written
        in textual form as simultaneously communicating to
        humans and computers.  Donald E. Knuth, professor
        emeritus of computer science at Stanford University and
        author of "The Art of Computer Programming", advised
        programmers:
            "Instead of imagining that our main task is to
            instruct a computer what to do, let us concentrate
            rather on explaining to human beings what we want a
            computer to do."[Knuth 1992]
        A design file, such as those illustrated above in
        paragraphs 17 and 18 is, similarly, a description of a
        design to a human as well as to a computer.  If it is a
        description of a physical object, a human machinist
        could use it to manufacture the object just as the
        object could be fabricated from the verbal description
        and drawings in a patent.

   20.  Computer code has long been considered text
        indistinguishable from any other form of speech in
        written form.  Many books, consisting in substantial
        part of computer code, have been published and are
        treated for the purpose of copyright and other
        intellectual property law like any other literary work.
        For example the "Numerical Recipes"[Press] series of
        books presents computer code in a variety of programming
        languages which implements fundamental algorithms for
        numerical computation.

    Conclusions

   21.  There is a clear distinction between the design of an
        artefact, whether expressed in paper drawings, a written
        description, or a digital geometric model, and an object
        manufactured from that design.

   22.  Manufacturing an artefact from a design, however
        expressed, is a process involving human judgement in
        selecting materials and the tools used to fabricate
        parts from it.

   23.  Additive manufacturing ("3D printing") is one of a
        variety of tools which can be used to fabricate parts.
        It is in no way qualitatively different from alternative
        tools such as milling machines, lathes, drills, saws,
        etc., all of which can be computer controlled.

   24.  A digital geometric model of an object is one form of
        description which can guide its fabrication.  As such,
        it is entirely equivalent to, for example, a dimensioned
        drawing (blueprint) from which a machinist works.

   25.  Digital geometric models of objects can be expressed
        as text which can be printed on paper or read aloud
        as well as stored and transmitted electronically.
        Thus they are speech.

    References
        [Knuth 1992]   Knuth, Donald E.  Literate Programming.
                       Stanford, CA: Center for the Study of
                       Language and Information, 1992.
                       ISBN: 978-0-937073-80-3.

        [Press]        Press, William H. et al.  Numerical Recipes.
                       Cambridge (UK): Cambridge University Press,
                       (various dates).
                       Programming language editions:
                           C++     978-0-521-88068-8
                           C       978-0-521-43108-8
                           Fortran 978-0-521-43064-7
                           Pascal  978-0-521-37516-0

I declare under penalty of perjury under the laws of the United
States of America that the foregoing is true and correct.

Executed on November 22, 2018

                                            (Signature)
                                 _______________________________
                                           John Walker
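
To make paragraphs 16 and 17 concrete, here is a little Python sketch of my own, not part of the declaration, which reads the STL fragment quoted in paragraph 17 as ordinary text and pulls out its vertices, just as a human reader would:

    # Read the STL fragment from paragraph 17 as plain text and list its vertices.
    STL_TEXT = """\
    solid cube_corner
      facet normal 0.0 -1.0 0.0
        outer loop
          vertex 0.0 0.0 0.0
          vertex 1.0 0.0 0.0
          vertex 0.0 0.0 1.0
        endloop
      endfacet
    endsolid
    """

    def vertices(stl_text):
        """Yield (x, y, z) tuples for every vertex line in an ASCII STL model."""
        for line in stl_text.splitlines():
            words = line.split()
            if words and words[0] == "vertex":
                yield tuple(float(w) for w in words[1:4])

    for v in vertices(STL_TEXT):
        print(v)

The program neither knows nor cares whether the text arrived in a book, a newspaper column, or a network transmission; it is the same speech in every case.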

Michael Rectenwald on Postmodernism, Social Justice, and Academic Conformity

Professor Michael Rectenwald of New York University used to describe himself as a “libertarian communist” and spent many years embedded in the leftist milieu of the academy.  He then underwent an awakening to the madness of political correctness, the social justice agenda, and the absurdity of postmodern intersectional critical studies of dozens of genders and began to speak out on Twitter, eventually publishing Springtime for Snowflakes, a book about his experiences and what he learned.

Here is an hour and a half interview of Prof. Rectenwald by Glenn Beck on the latter’s podcast.

This is long, but it provides an in-depth look at the history, intellectual roots, and fundamental errors of the disease which has infected the campuses and is spreading into the larger society.  Say what you want about Glenn Beck, he is a superb interviewer who gets out of the way and lets the guest speak directly to the audience.


Saturday Night Science: Life After Google

“Life after Google” by George Gilder

In his 1990 book Life after Television, George Gilder predicted that the personal computer, then mostly boxes that sat on desktops and worked in isolation from one another, would become more personal and mobile, and would be used more to communicate than to compute. In the 1994 revised edition of the book, he wrote, “The most common personal computer of the next decade will be a digital cellular phone with an IP address … connecting to thousands of databases of all kinds.” In contemporary speeches he expanded on the idea, saying, “it will be as portable as your watch and as personal as your wallet; it will recognize speech and navigate streets; it will collect your mail, your news, and your paycheck.” In 2000, he published Telecosm, where he forecast that the building out of a fibre optic communication infrastructure and the development of successive generations of spread spectrum digital mobile communication technologies would effectively cause the cost of communication bandwidth (the quantity of data which can be transmitted in a given time) to asymptotically approach zero, just as the ability to pack more and more transistors on microprocessor and memory chips was doing for computing.

Clearly, when George Gilder forecasts the future of computing, communication, and the industries and social phenomena that spring from them, it’s wise to pay attention. He’s not infallible: in 1990 he predicted that “in the world of networked computers, no one would have to see an advertisement he didn’t want to see”. Oh, well. The very difference between that happy vision and the advertisement-cluttered world we inhabit today, rife with bots, malware, scams, and serial large-scale security breaches which compromise the personal data of millions of people and expose them to identity theft and other forms of fraud, is the subject of this book: how we got here, and how technology is opening a path to move on to a better place.

The Internet was born with decentralisation as a central concept. Its U.S. government-funded precursor, ARPANET, was intended to research and demonstrate the technology of packet switching, in which dedicated communication lines from point to point (as in the telephone network) were replaced by switching packets, which can represent all kinds of data—text, voice, video, mail, cat pictures—from source to destination over shared high-speed data links. If the network had multiple paths from source to destination, failure of one data link would simply cause the network to reroute traffic onto a working path, and communication protocols would cause any packets lost in the failure to be automatically re-sent, preventing loss of data. The network might degrade and deliver data more slowly if links or switching hubs went down, but everything would still get through.
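
As a toy illustration of that re-routing behaviour (my own sketch, nothing from the book), model a small mesh as a Python dictionary of links, knock one link out, and a path is still found:

    from collections import deque

    # A toy mesh network: each node lists its directly connected neighbours.
    links = {
        "A": {"B", "C"},
        "B": {"A", "D"},
        "C": {"A", "D"},
        "D": {"B", "C"},
    }

    def find_path(net, src, dst):
        """Breadth-first search for any path from src to dst, or None."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in net[path[-1]] - seen:
                seen.add(nxt)
                queue.append(path + [nxt])
        return None

    print(find_path(links, "A", "D"))     # e.g. ['A', 'B', 'D']

    # The A-B link fails: traffic simply re-routes through C.
    links["A"].discard("B"); links["B"].discard("A")
    print(find_path(links, "A", "D"))     # ['A', 'C', 'D']

Real routing protocols are, of course, distributed and incremental rather than a single global search, but the principle is the same: as long as some path exists, the packets get through.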

This was very attractive to military planners in the Cold War, who worried about a nuclear attack decapitating their command and control network by striking one or a few locations through which their communications funnelled. A distributed network, of which ARPANET was the prototype, would be immune to this kind of top-down attack because there was no top: it was made up of peers, spread all over the landscape, all able to switch data among themselves through a mesh of interconnecting links.

As the ARPANET grew into the Internet and expanded from a small community of military, government, university, and large company users into a mass audience in the 1990s, this fundamental architecture was preserved, but in practice the network bifurcated into a two tier structure. The top tier consisted of the original ARPANET-like users, plus “Internet Service Providers” (ISPs), who had top-tier (“backbone”) connectivity, and then resold Internet access to their customers, who mostly initially connected via dial-up modems. Over time, these customers obtained higher bandwidth via cable television connections, satellite dishes, digital subscriber lines (DSL) over the wired telephone network, and, more recently, mobile devices such as cellular telephones and tablets.

The architecture of the Internet remained the same, but this evolution resulted in a weakening of its peer-to-peer structure. The approaching exhaustion of 32 bit Internet addresses (IPv4) and the slow deployment of its successor (IPv6) meant most small-scale Internet users did not have a permanent address where others could contact them. In an attempt to shield users from the flawed security model and implementation of the software they ran, their Internet connections were increasingly placed behind firewalls and subjected to Network Address Translation (NAT), which made it impossible to establish peer to peer connections without a third party intermediary (which, of course, subverts the design goal of decentralisation). While on the ARPANET and the original Internet every site was a peer of every other (subject only to the speed of their network connections and computer power available to handle network traffic), the network population now became increasingly divided into producers or publishers (who made information available), and consumers (who used the network to access the publishers’ sites but did not publish themselves).

While in the mid-1990s it was easy (or as easy as anything was in that era) to set up your own Web server and publish anything you wished, now most small-scale users were forced to employ hosting services operated by the publishers to make their content available. Services such as AOL, Myspace, Blogger, Facebook, and YouTube were widely used by individuals and companies to host their content, while those wishing their own apparently independent Web presence moved to hosting providers who supplied, for a fee, the servers, storage, and Internet access used by the site.

All of this led to a centralisation of data on the Web, which was accelerated by the emergence of the high speed fibre optic links and massive computing power upon which Gilder had based his 1990 and 2000 forecasts. Both of these came with great economies of scale: it cost a company like Google or Amazon much less per unit of computing power or network bandwidth to build a large, industrial-scale data centre located where electrical power and cooling were inexpensive and linked to the Internet backbone by multiple fibre optic channels, than it cost an individual Internet user or small company with their own server on premises and a modest speed link to an ISP. Thus it became practical for these Goliaths of the Internet to suck up everybody’s data and resell their computing power and access at attractive prices.

As an example of the magnitude of the economies of scale we’re talking about, when I migrated the hosting of my Fourmilab.ch site from my own on-site servers and Internet connection to an Amazon Web Services data centre, my monthly bill for hosting the site dropped by a factor of fifty—not fifty percent, one fiftieth the cost, and you can bet Amazon’s making money on the deal.

This tremendous centralisation is the antithesis of the concept of ARPANET. Instead of a worldwide grid of redundant data links and data distributed everywhere, we have a modest number of huge data centres linked by fibre optic cables carrying traffic for millions of individuals and enterprises. A couple of submarines full of Trident D5s would probably suffice to reset the world, computer network-wise, to 1970.

As this concentration was occurring, the same companies who were building the data centres were offering more and more services to users of the Internet: search engines; hosting of blogs, images, audio, and video; E-mail services; social networks of all kinds; storage and collaborative working tools; high-resolution maps and imagery of the world; archives of data and research material; and a host of others. How was all of this to be paid for? Those giant data centres, after all, represent a capital investment of tens of billions of dollars, and their electricity bills are comparable to those of an aluminium smelter. Due to the architecture of the Internet or, more precisely, missing pieces of the puzzle, a fateful choice was made in the early days of the build-out of these services which now pervade our lives, and we’re all paying the price for it. So far, it has allowed the few companies in this data oligopoly to join the ranks of the largest, most profitable, and most highly valued enterprises in human history, but they may be built on a flawed business model and foundation vulnerable to disruption by software and hardware technologies presently emerging.

The basic business model of what we might call the “consumer Internet” (as opposed to businesses who pay to host their Web presence, on-line stores, etc.) has, with few exceptions, evolved to be what the author calls the “Google model” (although it predates Google): give the product away and make money by afflicting its users with advertisements (which are increasingly targeted to them through information collected from the user’s behaviour on the network through intrusive tracking mechanisms). The fundamental flaws of this are apparent to anybody who uses the Internet: the constant clutter of advertisements, with pop-ups, pop-overs, auto-play video and audio, flashing banners, incessant requests to allow tracking “cookies” or irritating notifications, and the consequent arms race between ad blockers and means to circumvent them, with browser developers (at least those not employed by those paid by the advertisers, directly or indirectly) caught in the middle. There are even absurd Web sites which charge a subscription fee for “membership” and then bombard these paying customers with advertisements that insult their intelligence. But there is a fundamental problem with “free”—it destroys the most important channel of communication between the vendor of a product or service and the customer: the price the customer is willing to pay. Deprived of this information, the vendor is in the same position as a factory manager in a centrally planned economy who has no idea how many of each item to make because his orders are handed down by a planning bureau equally clueless about what is needed in the absence of a price signal. In the end, you have freight cars of typewriter ribbons lined up on sidings while customers wait in line for hours in the hope of buying a new pair of shoes. Further, when the user is not the customer (the one who pays), and especially when a “free” service verges on monopoly status like Google search, Gmail, Facebook, and Twitter, there is little incentive for providers to improve the user experience or be responsive to user requests and needs. Users are subjected to the endless torment of buggy “beta” releases, capricious change for the sake of change, and compromises in the user experience on behalf of the real customers—the advertisers. Once again, this mirrors the experience of centrally-planned economies where the market feedback from price is absent: to appreciate this, you need only compare consumer products from the 1970s and 1980s manufactured in the Soviet Union with those from Japan.

The fundamental flaw in Karl Marx’s economics was his belief that the industrial revolution of his time would produce such abundance of goods that the problem would shift from “production amid scarcity” to “redistribution of abundance”. In the author’s view, the neo-Marxists of Silicon Valley see the exponentially growing technologies of computing and communication providing such abundance that they can give away its fruits in return for collecting and monetising information collected about their users (note, not “customers”: customers are those who pay for the information so collected). Once you grasp this, it’s easier to understand the politics of the barons of Silicon Valley.

The centralisation of data and information flow in these vast data silos creates another threat to which a distributed system is immune: censorship or manipulation of information flow, whether by a coercive government or ideologically-motivated management of the companies who provide these “free” services. We may never know who first said “The Internet treats censorship as damage and routes around it” (the quote has been attributed to numerous people, including two personal friends, so I’m not going there), but it’s profound: the original decentralised structure of the ARPANET/Internet is as robust against censorship as it is in the face of nuclear war. If one or more nodes on the network start to censor information or refuse to forward it on communication links it controls, the network routing protocols simply assume that node is down and send data around it through other nodes and paths which do not censor it. On a network with a multitude of nodes and paths among them, owned by a large and diverse population of operators, it is extraordinarily difficult to shut down the flow of information from a given source or viewpoint; there will almost always be an alternative route that gets it there. (Cryptographic protocols and secure and verified identities can similarly avoid the alteration of information in transit or forging information and attributing it to a different originator; I’ll discuss that later.) As with physical damage, top-down censorship does not work because there’s no top.
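
A hash over the content is the simplest form of that idea. Here is a minimal sketch, using only the Python standard library, of detecting alteration in transit; real protocols layer signatures and key management on top of this:

    import hashlib

    def digest(message: bytes) -> str:
        """SHA-256 digest of a message, published alongside it."""
        return hashlib.sha256(message).hexdigest()

    original = b"The Internet treats censorship as damage and routes around it."
    published_digest = digest(original)

    # Somebody (a censor, or just a noisy link) alters the message en route...
    tampered = original.replace(b"routes around", b"welcomes")

    # ...and any recipient holding the published digest detects the change.
    print(digest(original) == published_digest)   # True
    print(digest(tampered) == published_digest)   # False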

But with the current centralised Internet, the owners and operators of these data silos have enormous power to put their thumbs on the scale, tilting opinion in their favour and blocking speech they oppose. Google can push down the page rank of information sources of which they disapprove, so few users will find them. YouTube can “demonetise” videos because they dislike their content, cutting off their creators’ revenue stream overnight with no means of appeal, or they can outright ban creators from the platform and remove their existing content. Twitter routinely “shadow-bans” those with whom they disagree, causing their tweets to disappear into the void, and outright banishes those more vocal. Internet payment processors and crowd funding sites enforce explicit ideological litmus tests on their users, and revoke long-standing commercial relationships over legal speech. One might restate the original observation about the Internet as “The centralised Internet treats censorship as an opportunity and says, ‘Isn’t it great!’ ” Today there’s a top, and those on top control the speech of everything that flows through their data silos.

This pernicious centralisation and “free” funding by advertisement (which is fundamentally plundering users’ most precious possessions: their time and attention) were in large part the consequence of the Internet’s lacking three fundamental architectural layers: security, trust, and transactions. Let’s explore them.

Security. Essential to any useful communication system, security simply means that communications between parties on the network cannot be intercepted by third parties, modified en route, or otherwise manipulated (for example, by changing the order in which messages are received). The Internet's layered protocol stack (often described in terms of the OSI reference model) has no explicit security layer; security was expected to be implemented outside the model, on top of the basic protocols. On today's Internet it has been bolted on, largely through the Transport Layer Security (TLS) protocols (which, for historical reasons, go by several other names and are most often encountered in the "https:" URLs by which users access Web sites). But because TLS was bolted on rather than designed in from the bottom up, and because it "just grew", it has been the locus of numerous security flaws which put software that employs it at risk. Further, TLS is a tool which application designers must use with extreme care in order to deliver security to their users: even if TLS were completely flawless, it is very easy to misuse it in an application and compromise users' security.
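To make that last point concrete, here is a minimal sketch using only Python's standard-library ssl module (the host name is a placeholder). The default context verifies the server's certificate and checks the host name; the commented-out lines show how a single careless "fix" in application code can silently defeat both.

    import socket
    import ssl

    hostname = "www.example.com"                    # placeholder
    context = ssl.create_default_context()          # verifies certificates against the system CA store
    # context.check_hostname = False                # a common, dangerous application-level "fix"
    # context.verify_mode = ssl.CERT_NONE           # disables certificate verification entirely

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print("Negotiated", tls.version(), "with cipher", tls.cipher()[0])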

Trust. As indispensable as security is knowing to whom you’re talking. For example, when you connect to your bank’s Web site, how do you know you’re actually talking to their server and not some criminal whose computer has spoofed your computer’s domain name system server to intercept your communications and who, the moment you enter your password, will be off and running to empty your bank accounts and make your life a living Hell? Once again, trust has been bolted on to the existing Internet through a rickety system of “certificates” issued mostly by large companies for outrageous fees. And, as with anything centralised, it’s vulnerable: in 2016, one of the top-line certificate vendors was compromised, requiring myriad Web sites (including this one) to re-issue their security certificates.
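One mitigation applications sometimes adopt is certificate "pinning": instead of trusting whatever the certificate-authority system vouches for, the client compares the certificate the server presents against a fingerprint recorded earlier over a channel it trusts. A minimal sketch with Python's standard library (the host name and the expected digest are placeholders):

    import hashlib
    import socket
    import ssl

    HOST = "www.example.com"                      # placeholder host
    EXPECTED_SHA256 = "hex-digest-recorded-earlier"   # placeholder: SHA-256 of the known-good certificate

    context = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            der_cert = tls.getpeercert(binary_form=True)     # certificate as raw DER bytes
            fingerprint = hashlib.sha256(der_cert).hexdigest()
            if fingerprint != EXPECTED_SHA256:
                raise ssl.SSLError("certificate fingerprint mismatch: possible spoofing or CA compromise")

Pinning trades flexibility for assurance: if the server legitimately rotates its certificate, the recorded fingerprint must be updated as well.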

Transactions. Business is all about transactions; if you aren’t doing transactions, you aren’t in business or, as Gilder puts it, “In business, the ability to conduct transactions is not optional. It is the way all economic learning and growth occur. If your product is ‘free,’ it is not a product, and you are not in business, even if you can extort money from so-called advertisers to fund it.” The present-day Internet has no transaction layer, even bolted on. Instead, we have more silos and bags hanging off the side of the Internet called PayPal, credit card processing companies, and the like, which try to put a Band-Aid over the suppurating wound which is the absence of a way to send money over the Internet in a secure, trusted, quick, efficient, and low-overhead manner. The need for this was perceived long before ARPANET. In Project Xanadu, founded by Ted Nelson in 1960, rule 9 of the “original 17 rules” was, “Every document can contain a royalty mechanism at any desired degree of granularity to ensure payment on any portion accessed, including virtual copies (‘transclusions’) of all or part of the document.” While defined in terms of documents and quoting, this implied the existence of a micropayment system which would allow compensating authors and publishers for copies and quotations of their work with a granularity as small as one character, and could easily be extended to cover payments for products and services. A micropayment system must be able to handle very small payments without crushing overhead, extremely quickly, and transparently (without the Japanese tea ceremony that buying something on-line involves today). As originally envisioned by Ted Nelson, as you read documents, their authors and publishers would be automatically paid for their content, including payments to the originators of material from others embedded within them. As long as the total price for the document was less than what I termed the user’s “threshold of paying”, this would be completely transparent (a user would set the threshold in the browser: if zero, they’d have to approve all payments). There would be no need for advertisements to support publication on a public hypertext network (although publishers would, of course, be free to adopt that model if they wished). If implemented in a decentralised way, like the ARPANET, there would be no central strangle point where censorship could be applied by cutting off the ability to receive payments.
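The "threshold of paying" idea reduces to a few lines of control flow. The sketch below is purely hypothetical (there is no such browser setting or payment API today; all the names are invented): per-item charges for a page are summed, totals at or below the reader's threshold are paid transparently, and a threshold of zero forces a prompt for every payment.

    THRESHOLD_USD = 0.05     # hypothetical user setting in the browser

    def settle_document(charges, ask_user):
        """charges: list of (payee, amount) pairs for the content just viewed."""
        total = sum(amount for _, amount in charges)
        if total == 0:
            return True                 # nothing owed
        if total <= THRESHOLD_USD:
            return True                 # at or below threshold: pay transparently, no interruption
        return ask_user(total)          # over threshold (or a threshold of zero): prompt the reader

    # Example: an article plus an embedded (transcluded) photograph.
    approved = settle_document(
        [("author", 0.002), ("photographer", 0.0005)],
        ask_user=lambda total: input(f"Pay ${total:.4f} for this page? [y/n] ") == "y",
    )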

So, is it possible to remake the Internet, building in security, trust, and transactions as the foundation, and replace what the author calls the “Google system of the world” with one in which the data silos are seen as obsolete, control of users’ personal data and work returns to their hands, privacy is respected and the panopticon snooping of today is seen as a dark time we’ve put behind us, and the pervasive and growing censorship by plutocrat ideologues and slaver governments becomes impotent and obsolete? George Gilder responds “yes”, and in this book identifies technologies already existing and being deployed which can bring about this transformation.

At the heart of many of these technologies is the concept of a blockchain, an open, distributed ledger which records transactions or any other form of information in a permanent, public, and verifiable manner. Originally conceived as the transaction ledger for the Bitcoin cryptocurrency, it provided the first means of solving the double-spending problem (how do you keep people from spending a unit of electronic currency twice) without the need for a central server or trusted authority, and hence without a potential choke-point or vulnerability to attack or failure. Since the launch of Bitcoin in 2009, blockchain technology has become a major area of research, with banks and other large financial institutions, companies such as IBM, and major university research groups exploring applications with the goals of drastically reducing transaction costs, improving security, and hardening systems against single-point failure risks.
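Stripped of mining, consensus rules, and signatures, the core data structure can be sketched in a few lines of Python; this is a toy illustration of the ledger idea only, not Bitcoin's actual implementation. Each block commits to the hash of its predecessor, so altering any historical entry invalidates every block that follows.

    import hashlib
    import json

    def block_hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_block(chain, data):
        prev = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"prev_hash": prev, "data": data})

    def verify(chain):
        """Check that every block still matches the hash of its predecessor."""
        return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

    ledger = []
    add_block(ledger, {"from": "alice", "to": "bob", "amount": 5})
    add_block(ledger, {"from": "bob", "to": "carol", "amount": 2})
    print(verify(ledger))                  # True
    ledger[0]["data"]["amount"] = 500      # tamper with history
    print(verify(ledger))                  # False: the later block no longer matches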

Applied to the Internet, blockchain technology can provide security and trust (through the permanent publication of public keys which identify actors on the network), and a transaction layer able to efficiently and quickly execute micropayments without the overhead, clutter, friction, and security risks of existing payment systems. By necessity, present-day blockchain implementations are add-ons to the existing Internet, but as the technology matures and is verified and tested, it can move into the foundations of a successor system, based on the same lower-level protocols (and hence compatible with the installed base), but eventually supplanting the patched-together architecture of the Domain Name System, certificate authorities, and payment processors, all of which represent vulnerabilities of the present-day Internet and points at which censorship and control can be imposed. Technologies to watch in these areas are:
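The identity role can be illustrated with a short sketch using the third-party Python cryptography package (the key pair, message, and scenario are all hypothetical): whoever holds the published public key can check that a message really was signed by its owner, with no certificate authority in the loop.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    private_key = ed25519.Ed25519PrivateKey.generate()   # kept secret by the user
    public_key = private_key.public_key()                # published, e.g. recorded on a ledger

    message = b"I authorise payment of 100 satoshis to example.org"
    signature = private_key.sign(message)

    try:
        public_key.verify(signature, message)            # raises on forgery or alteration
        print("Signature valid: the message came from the key's owner.")
    except InvalidSignature:
        print("Signature invalid: forged or altered in transit.")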

As the bandwidth available to users on the edge of the network increases through the deployment of fibre to the home and enterprise and via 5G mobile technology, the data transfer economy of scale of the great data silos will begin to erode. Early in the Roaring Twenties, the aggregate computing power and communication bandwidth on the edge of the network will equal and eventually dwarf that of the legacy data smelters of Google, Facebook, Twitter, and the rest. There will no longer be any need for users to entrust their data to these overbearing anachronisms and consent to multi-dozen page “terms of service” or endure advertising just to see their own content or share it with others. You will be in possession of your own data, on your own server or on space for which you freely contract with others, with backup and other services contracted with any other provider on the network. If your server has extra capacity, you can turn it into money by joining the market for computing and storage capacity, just as you take advantage of these resources when required. All of this will be built on the new secure foundation, so you will retain complete control over who can see your data, no longer trusting weasel-worded promises made by amorphous entities with whom you have no real contract to guard your privacy and intellectual property rights. If you wish, you can be paid for your content, with remittances made automatically as people access it. More and more, you’ll make tiny payments for content which is no longer obstructed by advertising and chopped up to accommodate more clutter. And when outrage mobs of pink hairs and soybeards (each with their own pronoun) come howling to ban you from the Internet, they’ll find nobody to shriek at and the kill switch rusting away in a derelict data centre: your data will be in your own hands with access through myriad routes. Technologies moving in this direction include:

This book provides a breezy look at the present state of the Internet, how we got here (versus where we thought we were going in the 1990s), and how we might transcend the present-day mess into something better if not blocked by the heavy hand of government regulation (the risk of freezing the present-day architecture in place by unleashing agencies like the U.S. Federal Communications Commission, which stifled innovation in broadcasting for six decades, to do the same to the Internet is discussed in detail). Although it’s way too early to see which of the many contending technologies will win out (and recall that the technically superior technology doesn’t always prevail), a survey of work in progress provides a sense for what they have in common and what the eventual result might look like.

There are many things to quibble about here. Gilder goes on at some length about how he believes artificial intelligence is all nonsense, that computers can never truly think or be conscious, and that creativity (new information in the Shannon sense) can only come from the human mind, with a lot of confused arguments from Gödel incompleteness, the Turing halting problem, and even the uncertainty principle of quantum mechanics. He really seems to believe in vitalism, that there is an élan vital which somehow infuses the biological substrate which no machine can embody. This strikes me as superstitious nonsense: a human brain is a structure composed of quarks and electrons arranged in a certain way which processes information, interacts with its environment, and is able to observe its own operation as well as external phenomena (which is all consciousness is about). Now, it may be that somehow quantum mechanics is involved in all of this, and that our existing computers, which are entirely deterministic and classical in their operation, cannot replicate this functionality, but if that’s so it simply means we’ll have to wait until quantum computing, which is already working in a rudimentary form in the laboratory, and is just a different way of arranging the quarks and electrons in a system, develops further.

He argues that while Bitcoin can be an efficient and secure means of processing transactions, it is unsuitable as a replacement for volatile fiat money because, unlike gold, Bitcoin's quantity has an absolute limit: once it is reached, the supply is capped forever. I don't get it. It seems to me that this is a feature, not a bug. The supply of gold increases slowly as new gold is mined, and by pure coincidence the rate of increase in its supply has happened to approximate that of global economic growth. But still, the existing inventory of gold dwarfs new supply, so there isn't much difference between a very slowly increasing supply and a static one. If you're on a pure gold standard and economic growth is faster than the increase in the supply of gold, there will be gradual deflation because a given quantity of gold will buy more in the future. But so what? In a deflationary environment, interest rates will be low and it will be easy to fund new investment, since investors will receive money back which will be more valuable. With Bitcoin, once the entire supply is mined, supply will be static (actually, very slowly shrinking, as private keys are eventually lost, which is precisely like gold being consumed by industrial uses from which it is not reclaimed), but Bitcoin can be divided without limit (with minor and upward-compatible changes to the existing protocol). So, it really doesn't matter if, in the greater solar system economy of the year 8537, a single Bitcoin is sufficient to buy Jupiter: transactions will simply be done in yocto-satoshis or whatever. In fact, Bitcoin is better in this regard than gold, which cannot be subdivided below the unit of one atom.
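The arithmetic bears this out. Using the protocol's well-known constants (a cap of 21 million coins, 100 million satoshis per coin), and treating the further subdivision into "yocto-satoshis" as the hypothetical extension the text imagines:

    BTC_CAP = 21_000_000                  # maximum number of bitcoins that will ever exist
    SATOSHI_PER_BTC = 100_000_000         # current smallest unit: 1e-8 BTC

    total_satoshis = BTC_CAP * SATOSHI_PER_BTC
    print(f"{total_satoshis:,} satoshis at the cap")            # 2,100,000,000,000,000

    # A hypothetical protocol extension adding 24 more decimal places
    # ("yocto-satoshis") would subdivide each bitcoin into 1e32 units:
    print(f"{SATOSHI_PER_BTC * 10**24:.0e} yocto-satoshis per bitcoin")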

Gilder further argues, as he did in The Scandal of Money, that the proper dimensional unit for money is time, since that is the measure of what is required to create true wealth (as opposed to funny money created by governments or fantasy money “earned” in zero-sum speculation such as currency trading), and that existing cryptocurrencies do not meet this definition. I’ll take his word on the latter point; it’s his definition, after all, but his time theory of money is way too close to the Marxist labour theory of value to persuade me. That theory is trivially falsified by its prediction that more value is created in labour-intensive production of the same goods than by producing them in a more efficient manner. In fact, value, measured as profit, dramatically increases as the labour input to production is reduced. Over forty centuries of human history, the one thing in common among almost everything used for money (at least until our post-reality era) is scarcity: the supply is limited and it is difficult to increase it. The genius of Bitcoin and its underlying blockchain technology is that it solved the problem of how to make a digital good, which can be copied at zero cost, scarce, without requiring a central authority. That seems to meet the essential requirement to serve as money, regardless of how you define that term.

Gilder’s books have a good record for sketching the future of technology and identifying the trends which are contributing to it. He has been less successful picking winners and losers; I wouldn’t make investment decisions based on his evaluation of products and companies, but rather wait until the market sorts out those which will endure.

Gilder, George. Life after Google. Washington: Regnery Publishing, 2018. ISBN 978-1-62157-576-4.

Here is a talk by the author at the Blockstack Berlin 2018 conference which summarises the essentials of his thesis in just eleven minutes and ends with an exhortation to designers and builders of the new Internet to “tear down these walls” around the data centres which imprison our personal information.

This Uncommon Knowledge interview provides, in 48 minutes, a calmer and more in-depth exploration of why the Google world system must fail and what may replace it.

LifeSiteNewsDotCom

LifeSiteNews is a small anti-abortion activist group, pro-life journalism outlet, and news aggregator. It was launched in 1997 as a spinoff of Campaign Life Coalition. Both are based in Toronto. Unless you are a traditionalist Catholic or a pro-life culture warrior, you have probably never heard of them.

They have had a lot of excitement lately. For the past year they have been fighting for their life as an organization. They had become very dependent on their Facebook page as their primary way to communicate with their network of donors, most of whom are Catholic families making small-time contributions. Facebook has been waging war against them.

Facebook ghetto

In addition to filtering them out of searches and giving them the “shadow ban” treatment, Facebook has refused to run their ads:

One response that our team received as the reason for Facebook’s disapproval of our ads is equally concerning. The ad pertaining to this response simply showed an image of a pregnant mother holding a photo of her baby’s ultrasound…

I do see that the ad has a fetus and while it involves your ad text and topic, it may be viewed too strong for Facebook to allow to show.

Such viewpoint discrimination is a direct attack on our shared life and family values, and is greatly affecting our efforts to fundraise and spread our news.

Yes, a pregnant woman showing off the ultrasound picture of her baby is “too strong” for Facebook. That is a transparent excuse that says Facebook does not like advocacy for babies. Facebook is enforcing the Culture of Death.

They do this by decreeing that accurately reporting on the abortion industry and Planned Parenthood is “fake news.” Truth is irrelevant; what matters is the narrative.

Facebook recently admitted to combating “fake news” by developing a system that ranks users’ trustworthiness on a scale from 1 to 10. This is determined by users’ opinions rather than objective investigations!

This means that aggressively pro-choice and anti-family Facebook users can rank LifeSiteNews as “untrustworthy” with the simple click of a button – just because they dislike the facts that we publish.

Facebook has therefore made it ridiculously easy for our highly organized, well-financed (George Soros, etc.) and hateful opponents to have LifeSiteNews wrongly categorized as "fake news" and our traffic suppressed according to Facebook's "terms of agreement." Truth does not matter according to this mob-mentality-serving process.

Sex scandals

If you are wondering where you recently saw their name, it was because they landed the biggest Catholic scoop of August. In the middle of the Catholic summer of distress over new sex scandals, Archbishop Viganò released a letter alleging that Pope Francis and the rest of the Vatican were aware of Cardinal McCarrick's habit of pressing young seminarians for sex, and also that he had covered for homosexual priests who preyed on teenage boys. Pope Francis had rehabilitated McCarrick in spite of this knowledge.

Archbishop Viganò gave his letter to two conservative Italian journalists whom he trusts. He also sent it to LifeSiteNews, evidently the only English-language outlet he trusts.

Since then, other traditionalist Catholics have gone directly to LifeSiteNews with background and new developments on these scandals.

Search and you will not find

Facebook is not the only Internet service hostile to pro-life advocates. Several news aggregators have a habit of demoting LifeSiteNews along with other conservative outlets. For the past several weeks, searches have turned up dozens of articles and editorials citing LifeSiteNews, but unless you type "lifesitenews" into the search box, their original reporting does not appear in the first four pages of results.

Allies

I am not a Catholic. The Church of Rome teaches that, as a Lutheran, I am condemned to hell as a schismatic. Nevertheless I have several Catholic friends, and I find that traditionalist Catholics are my most trustworthy allies in the culture wars. I need strong Catholics to help rescue Western civilization from the assaults of Satan.

Please consider giving a little support to LifeSiteNews, either with a few bucks, or by sharing their plight with your Catholic and pro-life friends.

Secret Star Chamber Holds Controversial Patent Applications in Limbo

An amazing abuse of government power has been uncovered at the U.S. Patent and Trademark Office. Apparently, a secret program existed until recently to flag potentially controversial patent applications and hold them in limbo, refusing to issue patents even when the applications met all the usual requirements.

“The patent office has delayed and delayed. I’m finally hoping to get to the board of appeals and to the courts to stop the delays and get my patents issued,” Hyatt said in our interview.

He alleges that the SAWS program, which was started in 1994 not long after Hyatt’s case generated huge publicity, disproportionately targeted individual inventors or small businesses. The SAWS program came to light in late 2014, and Hyatt and his attorneys allege that patent office officials used the program to secretly exercise powers with which they were never vested by law, allowing them to choose winners and losers in controversial patent cases.

A patent with the SAWS designation could not be issued without approval from higher-ups, and Hyatt found that his applications had been so designated. Patent examiners were instructed not to tell applicants that their applications carried the SAWS flag. After being granted 75 patents, Hyatt has not received another since the 1990s, and for a long time he did not know why.

Continue reading “Secret Star Chamber Holds Controversial Patent Applications in Limbo”

Give This a Read

http://thefederalist.com/2018/08/17/screenshots-show-google-shadowbans-conservative-pro-trump-content/

In the article, a video was deemed hate speech because the crawler found the phrase "witch hunt" in it. I can hardly believe it, but then I couldn't believe that "Never Trumper" was considered forbidden either. Why are people treated as children incapable of handling any negativity? Are we in an age of needing "padded rooms" so that no one gets hurt on the Internet?

People need to know what they are seeing is what someone wants them to see. The filter is in. The fight is fixed. It reminds me of this.

“We control the …”
