evil Google — curator of your information flow

I put up a couple of recent posts observing a surge of obituaries for the news business.  But all y'all internet denizens are still reading news.  You just don't take a daily newspaper anymore, and only a very tiny share of y'all subscribe to any kind of news provider.  That is the way most of us operate these days.  But information flow, especially the flow of recent-events information, aka 'news,' is now screened for most news consumers by evil Google.

Yeah, I know; some Ratburghers are boycotting Google by using DuckDuckGo or another alternative search engine.  But the problem remains.  Over half of all news articles accessed on the internet are reached through an evil-Google search, which means that evil Google gets a shot at screening the news for over half of all internet news consumption.  This finding comes from a Northwestern University study recently presented at the 2019 Conference on Human Factors in Computing Systems in Glasgow.

The Threat of China

China has been attacking the U.S.A. ever since the days of Richard Nixon, in ways both subtle and unsubtle.  But their attacks have grown more devious, more corrupting, and are preparing the way for assaults on America that will be devastating when unleashed.

Yes, they have been spying, stealing technical secrets, and violating copyrights, trademarks, and the plain language of contracts for decades.  But the current state of affairs calls for a confrontation, and I am glad to see President Trump bring one that is clever and likely to succeed.

I am not prepared to debate the trade issues in the tariffs dispute.   What has me concerned at the moment is the leverage China is gaining over our internet.   It appears that evil Google is preparing to act as an agent of China to destroy America.

I think that if things keep going the way they are, China will be positioned to kill American internet and cellphone communications, while disabling large portions of basic utilities such as electric power transmission and landline phone service.

I will put links in a comment.   The first item is testimony this week by FCC Chair Ajit Pai, regarding the threat posed by Huawei if they could get embedded into our cellphone services:

“What I will say,” Pai told [Sen. James] Lankford, “is I believe that certain Chinese suppliers, such as Huawei, do indeed present a threat to the United States, either on their own or because of Chinese domestic law. For example, China’s national intelligence law explicitly requires any individual or entity subject to that law to comply with requests to intelligence services.”    He said that poses a problem for 5G networks deployed in one country that could be managed by software that is resident in another country.

The second item is a column at American Greatness by Brandon J. Weichert:

“A greater synthesis between the national security sector, the business community, academia, and the political leadership of the United States is needed if we truly and effectively want to prevent American tech firms from building the weapons of tomorrow for China to use against us today.”

Saving Journalism

Fake News, you say?  Indeed: this post discusses the turmoil in the field of journalism, which is both a cause and a consequence of the Leftist tilt of the entire field.  Journalism is in crisis, you see, and Leftist media watchers are looking for scapegoats.  President Trump figures high on their enemies list, with his "fake news!", his "Enemy of the People," and Sarah Huckabee Sanders.  See my previous post on this topic, in which I reacted to a journalist who blamed the end of professionalism in journalism on President Trump.  In this post I will discuss the reasons for the collapse of journalism as we knew it.

I am happy to see the recent obituaries for Big Journalism.  But before we discuss the real problems with journalism, please consider what the crisis looks like to the journalists.  There has been a rash of articles and editorials from journalists expressing fear and frustration.  This is an excerpt from an article that appeared in the New Yorker back in January:

Conglomeration can be good for business, but it has generally been bad for journalism. Media companies that want to get bigger tend to swallow up other media companies, suppressing competition and taking on debt, which makes publishers cowards.  …  Craigslist went online in the Bay Area in 1996 and spread across the continent like a weed, choking off local newspapers’ most reliable source of revenue: classified ads.  …  By 2000, only three hundred and fifty of the fifteen hundred daily newspapers left in the United States were independently owned.  …  Then came the fall, when papers all over the country, shackled to mammoth corporations and a lumbering, century-old business model, found themselves unable to compete with the upstarts—online news aggregators like the Huffington Post (est. 2005) and  Breitbart News (est. 2007), which were, to readers, free. News aggregators also drew display advertisers away from print; Facebook and Google swallowed advertising accounts whole. Big papers found ways to adapt; smaller papers mainly folded.

(When researching for this post, I saw an article from 2016 that said local newspapers had shed 60 percent of their workforce over the previous 26 years.)

In January of this year the industry had a particularly tough day, with 1,000 journalism jobs chopped at once.

Now, I have been part of several Ratburgher discussions in which we generally agreed that mass media journalism is the Enemy of the People, so I don’t expect to hear a lot of sympathy for the journalists here.   But there is a problem that I want to address.

Where does news come from?

Yes, there are some intrepid conservative organizations that do great investigative journalism.  But they are few in number and concentrate on political matters.  When your local paper dies, how do you get local news about the ordinary life of your community?  You would have to join a dozen local blog sites to keep up with the shenanigans at City Hall, the hoo-rah at the School Board, the embezzler in the suburbs, the police blotter, area high school sports, or any of a number of local matters.  You might not be very interested in any of those matters, but it used to be that you could stay generally well-informed about the community you live in just by skimming the headlines in the local paper on a regular basis.

Those days are gone.  My local Memphis paper is now owned by the USA Today Network, which is part of Gannett.  The people who lay out the paper work in a rival city in another state (Louisville).  Shortly before I canceled my subscription last year, they ran an article in the "Local News" section about an industrial park.  That industrial park is in my state, but it is a seven-hour drive from my city.  So much for "local news."  The article was a fine fit for two other papers owned by the USA Today Network, so it was just too easy to pretend that it belonged in our paper, too.  Their "customer support" is in the Philippines, Sales is in Phoenix, and the payment processing center is in Cincinnati.

So, what now?  There are the local TV stations, but they just pretend to do news.  Their "reporters" are mere transcribers.  They look into stories after being alerted by citizens who call, or mostly they just pass along the police blotter and whatever comes to them in press releases.  After they learn that something is going on, they scramble a camera guy (no longer a camera crew) to race out and act as if they had covered the event for hours.  We also have a couple of local blog sites attempting to make a name for themselves as the go-to place for local news.  But they are run by the same old Leftist journalists who recently lost their jobs to downsizing at the newspaper, so their political coverage is the same old Leftist bilge through and through.

Killed by the Internet

Local papers were killed by the internet.  On the internet, "information wants to be free."  Local stories got picked up by aggregator services, and it became really easy to check Google News for local news.  Facebook tried providing local news links for a while, but the way it promoted Leftist news and suppressed conservative news caused such a backlash that it dropped the effort.

Craigslist, which is where all the classified ads went, gets blamed a lot for killing local papers.  But the real culprits are Google and Facebook, which now have all the ads from the big chain retailers.

But if there is no local paper, then Google cannot steal its news anymore.  Nor can Facebook or any conservative alternative aggregator.

Follow the Money

About 129 billion dollars was spent on digital advertising in America last year.  Google slurped up about half.  Facebook took in about 25 percent.  YouTube, Instagram, Microsoft, Verizon, and Amazon combined for about 22 percent.  All newspapers combined brought in about one percent; all magazines combined, about one percent; and Craigslist, about one percent.
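
To put those percentages in dollars, here is a quick back-of-envelope computation (my own illustration, using only the round figures quoted above):

    # Implied dollar figures from the advertising shares quoted above.
    total = 129e9  # approximate U.S. digital ad spend last year, per this post

    shares = {
        "Google": 0.50,
        "Facebook": 0.25,
        "YouTube/Instagram/Microsoft/Verizon/Amazon": 0.22,
        "All newspapers": 0.01,
        "All magazines": 0.01,
        "Craigslist": 0.01,
    }

    for outlet, share in shares.items():
        print(f"{outlet}: about ${share * total / 1e9:.1f} billion")
    # Google alone: about $64.5 billion.  Every newspaper in the country
    # combined: about $1.3 billion.  That is the shape of the problem.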

Facebook and Google to the rescue?

So I was sort of amused to see that both Facebook and Google have new initiatives to muscle in on the local news business.   Now that they have killed off the newspapers, they want to take over.   The trend going forward looks like our people becoming even more dependent on Google and Facebook.   This is not good.

Slow News

There have been several recent articles advocating “slow news.”   They come from journalists who are observing that the field of journalism has been overtaken by a rush to clickbait.   The Editor of NewYorker.com quoted Pablo Boczkowski, a professor of communications at Northwestern University:

“If you’re an average site, you have five to seven seconds to tell your story.”

The solution preferred by journalism ‘leading lights’ is the digital subscription model.   Only a handful of outlets are likely to survive via that model.   Journalists are hungry for readers who will read a full slate of news articles at one site, the way we used to read the morning newspaper over breakfast.   But, as Professor Boczkowski observed, contemporary consumers of news learn the news one click at a time from dozens of sources, mostly those that are shared on social media by their circle of Facebook friends or the people they follow on Twitter.

News Desert

A "news desert" is a place that has no source of local news.  Much of America is heading into news-desert status.

As happy as I am to see the obituaries for Big Journalism, we still need news.   How do we get real information about our community and our state?   Conservative and Christian niche media seem to me to do somewhat well on the national scene.   But I really hate the thought of being dependent on evil Google for information from my state capitol.

No Solutions

I don’t have any answers.   I suppose we will have to hope for a cadre of citizen journalists to blog the news of the day.   The problem is finding them amidst all the competing noise on the internet.   And, if they also blog with conservative opinions, then their posts will be suppressed when you try to search for them.

Perhaps all you Ratburghers could start posting local news here.   Ratburger.org could become a rival for Google and Facebook, right up until Google or Facebook noticed us and took us out.

Anybody out there have any bright ideas?

propaganda

Google categorized the new movie "Unplanned" as a "Drama/Propaganda" film.  They have not categorized other films as propaganda, even films that are famous as examples of propaganda.

After the Christian blogosphere circulated this on Thursday, it got noticed in conservative niche media yesterday.   Today Google quietly changed the category of “Unplanned” back to “Drama.”   Shame on Google.

I expect that all y’all know that “Unplanned” is an anti-abortion film that tells the story of Abby Johnson, who famously flipped from being a successful abortion clinic manager for Planned Parenthood to becoming an anti-abortion activist.   Perhaps you also know that the MPAA slapped it with an R rating, even though it is devoid of sex, violence or bad language.   Maybe you also know that Facebook and Google have refused the publicist’s ads.   And that Google has been downgrading search results related to the movie.

This is just another day in the culture war.   Another state passed yet another anti-abortion law today, adding fuel to the fire that will ensure that Roe v Wade gets a return engagement at the Supreme Court.

Maybe someday America can stop murdering helpless babies.

LORD,  have mercy.

Google “responsible development”

I am not suggesting that you perform a Google search for "responsible development."  I just want to call attention to the demise of the "Responsible Development of AI Advisory Council" at Google.  Just last week, Google announced the formation of the Advisory Council, intended to debate potential policy related to Artificial Intelligence, and began announcing the persons who would serve on it.  One of the persons named was Kay Coles James.

The histrionics from Google staffers were immediate and intense.   Tantrums were thrown and a clamor of angry rhetoric consumed much energy and attention for a couple of days.   Google promptly caved, and today they announced the dissolution of the Advisory Council.   Evidently placating the SJW staffers at Google was much more important than any effort to get ahead of the plethora of ethical pitfalls that beset the development of artificial intelligence.

What prompted the outrage?  Well, Ms. Kay Coles James is an African-American grandmother who has held a variety of jobs in government and education.  She currently serves on an advisory council for NASA.  And, by the way, she is also the current president of the Heritage Foundation.

A petition with more than 2,000 signatories from within the company was published on Medium on Monday under the title "Googlers Against Transphobia and Hate."

…Meredith Whittaker, who leads Google’s Open Research Group, posted on a private Google listserv that, “I would disagree that their views are important to consider, when those views include erasing trans people, targeting immigrants, and denying climate change.”

Other ringleader employees at Google vociferously trashed the Heritage Foundation on a variety of charges: "anti-LGBTQ," "anti-immigrant," climate denial, and so on.  When some employees said that the hoo-rah sounded intolerant, they were attacked with messages saying that there is no need to listen to such haters.

Both Daily Caller and Breitbart have the story.   Links are in the comments.

Big Tech censorship

Donald Trump Jr. has an editorial at The Hill about censorship on the internet.  He runs through a bill of particulars concerning matters that we have talked about at Ratburger.org.  The following is the middle third of his editorial, which amounts to good old-fashioned journalism about something he saw at CPAC.

Silicon Valley lobbyists have splashed millions of dollars all over the Washington swamp to play on conservatives’ innate faith in the free-market system and respect for private property. Even as Big Tech companies work to exclude us from the town square of the 21st century, they’ve been able to rely on misguided conservatives to carry water for them with irrelevant pedantry about whether the First Amendment applies in cases of social media censorship.

Sen. Josh Hawley (R-Mo.) has been making a name for himself as a Republican prepared to stand up to Big Tech malfeasance since his time as Missouri’s attorney general. He delivered a tour de force interview with The Wall Street Journal’s Kimberly Strassel in front of the CPAC crowd, one that provided a clear-eyed assessment of the ongoing affront to the freedoms of conservative speech and expression.

Hawley demolished the absurd notion that “conservative principles” preclude taking action to ensure free debate online simply because Big Tech firms — the most powerful corporations in the world — are private companies.

Hawley pointed out that Big Tech companies already enjoy “sweetheart deals” under current regulations that make their malfeasance a matter of public concern. Section 230 of the Communications Decency Act, for instance, allows them to avoid liability for the content that users post to their platforms. To address this problem, Hawley proposed adding a viewpoint neutrality requirement for platforms that benefit from Section 230’s protections, which were originally enacted to protect the internet as “a forum for a true diversity of political discourse.”

“Google and Facebook should not be a law unto themselves,” Hawley declared. “They should not be able to discriminate against conservatives. They should not be able to tell us we need to sit down and shut up!”

It’s high time other conservative politicians started heeding Hawley’s warnings….

I looked at Senator Hawley's website but did not see anything there on this topic.  I hope he will bring forward a good initiative.

Google Made a Booboo

It turns out that some Nest products have built-in microphones, a fact that was only recently disclosed to users. The possibilities for abuse are endless. It looks like Bruce Schneier's predictions, expressed in his book Click Here to Kill Everybody, are coming true. From the book's blurb:

From driverless cars to smart thermostats, from autonomous stock-trading systems to drones equipped with their own behavioral algorithms, the internet now has direct effects on the physical world. [emphasis added]

Don’t worry, though. Google admits that not disclosing the microphone “…was an error on our part.” Rest assured they are very sorry. You’ve had a hidden microphone in your house but don’t worry; nobody was listening. Move along; nothing to see (or hear).

It Isn’t the Cloud: It’s Somebody Else’s Computer

Google+ was launched in June 2011 as Google's response to the rapid growth of Facebook and other social networks.  Just two weeks after its launch, 10 million users had joined.  By October 2013, 540 million users accessed one or more Google+ features.  People created text, uploaded images and media, and interacted on the network.  All of these data were stored on Google's servers.

On October 8, 2018, Google announced that Google+ would be terminated in August 2019.  Subsequently, the shut-down date was moved up to April 2019.  This was due in part to data breaches discovered in 2018, the larger of which disclosed the personal data of 52.5 million users; an earlier breach had been covered up by Google "due to fears of increased regulatory scrutiny".  According to the October 2018 announcement, 90% of user sessions on Google+ lasted less than five seconds.

Here is the announcement of the shutdown sent to Google’s G Suite customers (which include mail for ratburger.org).  This will not affect ratburger.org’s mail, as we are a paying enterprise customer, not a user of the “consumer” product which is being terminated.

All data uploaded by users of Google+ will be deleted starting as early as April 2, 2019.  Users who do not export their data prior to this deletion will permanently lose anything they've uploaded there.

There is no “cloud”.  When you hear “cloud”, think “somebody else’s computer”.  When “somebody else” decides storing your data is no longer worth doing, it’s gone.  It’s only your data if it’s in your own personal physical possession, ideally with multiple backup copies on archival media with long-term retention.
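
On that note, a minimal sketch of the "multiple backup copies" discipline (my illustration, not part of the original post; the paths are hypothetical): after exporting your data, record a checksum manifest so every archival copy can be verified bit-for-bit later.

    import hashlib
    import sys
    from pathlib import Path

    def sha256(path, chunk=1 << 20):
        # Hash a file in 1 MiB chunks so large exports don't exhaust memory.
        h = hashlib.sha256()
        with path.open("rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    if __name__ == "__main__":
        # Usage: python manifest.py /path/to/exported_data > MANIFEST.sha256
        root = Path(sys.argv[1])
        for p in sorted(root.rglob("*")):
            if p.is_file():
                print(f"{sha256(p)}  {p.relative_to(root)}")

Store the manifest alongside each copy; re-running the script against a copy and diffing the output tells you whether the media have silently decayed.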

“The Creepy Line,” a Documentary Worth Watching…

…and a subject worth reviling and fearing – i.e., the power of Google and Facebook to shape society in the image they, completely unaccountably, deem best. The title, an understatement – "creepy" is much too mild a descriptor – comes from a statement by Eric Schmidt, who in 2010 told an interviewer that Google's policy is to "get right up to the creepy line and not cross it." While that makes for a catchy title, it in no way captures the nefarious things being done by a company whose motto is (or was) supposedly "Don't be evil." Facebook does similar things as well.

The film explains how Google began as a search engine but became something very different. As a non-technical individual, I cannot do the topic justice. Suffice it to say, the stories told by psychologist Robert Epstein and Jordan Peterson (both of whose email and YouTube accounts were suddenly shut down without explanation and without recourse) are very frightening.

Epstein recounts scientific studies showing that the mere order in which search results are listed (and whether or not even a single one of them contains any negatives regarding a candidate) can easily sway the opinions of a randomly selected group of undecided people. This alone should give us great pause about how we view these companies.

In addition, the tension between acting as a neutral forum and acting as a publisher is explained and fleshed out. Today we have the intolerable situation in which Google and Facebook regularly, if sometimes surreptitiously, act as unregulated publishers by editing much of what they offer online. Even while doing so, they claim to be mere neutral conduits, not responsible for what they show in their links (or for what they intentionally suppress!). The situation as it now exists, this documentary makes clear, must not continue.

After hearing the tales of how their email accounts were suddenly gone because they said things Google didn't like, I have decided it is time to migrate off of Gmail (I stopped using Facebook years ago after giving it a try and finding it "creepy"). The risk of losing all my mail as a result of political speech disliked by Google – which, in its arrogance, scans every word, including discarded drafts! – I find to be intolerable. I also find it intolerable to support a company (as the product that I am, not a customer) which has incorporated evil into the very heart of its business model. If you think I exaggerate, please watch the film, available for free on Amazon Prime.

Saturday Night Science: Life After Google

In his 1990 book Life after Television, George Gilder predicted that the personal computer, then mostly boxes that sat on desktops and worked in isolation from one another, would become more personal and mobile, and would be used more to communicate than to compute.  In the 1994 revised edition of the book, he wrote, "The most common personal computer of the next decade will be a digital cellular phone with an IP address … connecting to thousands of databases of all kinds."  In contemporary speeches he expanded on the idea, saying, "it will be as portable as your watch and as personal as your wallet; it will recognize speech and navigate streets; it will collect your mail, your news, and your paycheck."  In 2000, he published Telecosm, where he forecast that the building out of a fibre optic communication infrastructure and the development of successive generations of spread spectrum digital mobile communication technologies would effectively cause the cost of communication bandwidth (the quantity of data which can be transmitted in a given time) to asymptotically approach zero, just as the ability to pack more and more transistors on microprocessor and memory chips was doing for computing.

Clearly, when George Gilder forecasts the future of computing, communication, and the industries and social phenomena that spring from them, it’s wise to pay attention. He’s not infallible: in 1990 he predicted that “in the world of networked computers, no one would have to see an advertisement he didn’t want to see”. Oh, well. The very difference between that happy vision and the advertisement-cluttered world we inhabit today, rife with bots, malware, scams, and serial large-scale security breaches which compromise the personal data of millions of people and expose them to identity theft and other forms of fraud is the subject of this book: how we got here, and how technology is opening a path to move on to a better place.

The Internet was born with decentralisation as a central concept. Its U.S. government-funded precursor, ARPANET, was intended to research and demonstrate the technology of packet switching, in which dedicated communication lines from point to point (as in the telephone network) were replaced by switching packets, which can represent all kinds of data—text, voice, video, mail, cat pictures—from source to destination over shared high-speed data links. If the network had multiple paths from source to destination, failure of one data link would simply cause the network to reroute traffic onto a working path, and communication protocols would cause any packets lost in the failure to be automatically re-sent, preventing loss of data. The network might degrade and deliver data more slowly if links or switching hubs went down, but everything would still get through.
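
To make the rerouting idea concrete, here is a toy sketch (mine, not the book's): a tiny mesh of links survives the loss of one of them because path-finding simply goes around the failure.

    from collections import deque

    def find_route(links, src, dst):
        # Breadth-first search over an undirected mesh of links.
        adj = {}
        for a, b in links:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
        parent, queue = {src: None}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:  # walk back through parents to recover the path
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            for nxt in sorted(adj.get(node, ())):
                if nxt not in parent:
                    parent[nxt] = node
                    queue.append(nxt)
        return None  # network partitioned: no route survives

    mesh = {("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")}
    print(find_route(mesh, "A", "D"))                 # ['A', 'B', 'D']
    print(find_route(mesh - {("B", "D")}, "A", "D"))  # reroutes: ['A', 'C', 'D']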

This was very attractive to military planners in the Cold War, who worried about a nuclear attack decapitating their command and control network by striking one or a few locations through which their communications funnelled. A distributed network, of which ARPANET was the prototype, would be immune to this kind of top-down attack because there was no top: it was made up of peers, spread all over the landscape, all able to switch data among themselves through a mesh of interconnecting links.

As the ARPANET grew into the Internet and expanded from a small community of military, government, university, and large company users into a mass audience in the 1990s, this fundamental architecture was preserved, but in practice the network bifurcated into a two tier structure. The top tier consisted of the original ARPANET-like users, plus “Internet Service Providers” (ISPs), who had top-tier (“backbone”) connectivity, and then resold Internet access to their customers, who mostly initially connected via dial-up modems. Over time, these customers obtained higher bandwidth via cable television connections, satellite dishes, digital subscriber lines (DSL) over the wired telephone network, and, more recently, mobile devices such as cellular telephones and tablets.

The architecture of the Internet remained the same, but this evolution resulted in a weakening of its peer-to-peer structure. The approaching exhaustion of 32 bit Internet addresses (IPv4) and the slow deployment of its successor (IPv6) meant most small-scale Internet users did not have a permanent address where others could contact them. In an attempt to shield users from the flawed security model and implementation of the software they ran, their Internet connections were increasingly placed behind firewalls and subjected to Network Address Translation (NAT), which made it impossible to establish peer to peer connections without a third party intermediary (which, of course, subverts the design goal of decentralisation). While on the ARPANET and the original Internet every site was a peer of every other (subject only to the speed of their network connections and computer power available to handle network traffic), the network population now became increasingly divided into producers or publishers (who made information available), and consumers (who used the network to access the publishers’ sites but did not publish themselves).

While in the mid-1990s it was easy (or as easy as anything was in that era) to set up your own Web server and publish anything you wished, now most small-scale users were forced to employ hosting services operated by the publishers to make their content available. Services such as AOL, Myspace, Blogger, Facebook, and YouTube were widely used by individuals and companies to host their content, while those wishing their own apparently independent Web presence moved to hosting providers who supplied, for a fee, the servers, storage, and Internet access used by the site.

All of this led to a centralisation of data on the Web, which was accelerated by the emergence of the high speed fibre optic links and massive computing power upon which Gilder had based his 1990 and 2000 forecasts. Both of these came with great economies of scale: it cost a company like Google or Amazon much less per unit of computing power or network bandwidth to build a large, industrial-scale data centre located where electrical power and cooling were inexpensive and linked to the Internet backbone by multiple fibre optic channels, than it cost an individual Internet user or small company with their own server on premises and a modest speed link to an ISP. Thus it became practical for these Goliaths of the Internet to suck up everybody’s data and resell their computing power and access at attractive prices.

As an example of the magnitude of the economies of scale we're talking about, when I migrated the hosting of my Fourmilab.ch site from my own on-site servers and Internet connection to an Amazon Web Services data centre, my monthly bill for hosting the site dropped by a factor of fifty—not fifty percent, one fiftieth the cost, and you can bet Amazon's making money on the deal.

This tremendous centralisation is the antithesis of the concept of ARPANET. Instead of a worldwide grid of redundant data links and data distributed everywhere, we have a modest number of huge data centres linked by fibre optic cables carrying traffic for millions of individuals and enterprises. A couple of submarines full of Trident D5s would probably suffice to reset the world, computer network-wise, to 1970.

As this concentration was occurring, the same companies who were building the data centres were offering more and more services to users of the Internet: search engines; hosting of blogs, images, audio, and video; E-mail services; social networks of all kinds; storage and collaborative working tools; high-resolution maps and imagery of the world; archives of data and research material; and a host of others. How was all of this to be paid for? Those giant data centres, after all, represent a capital investment of tens of billions of dollars, and their electricity bills are comparable to those of an aluminium smelter. Due to the architecture of the Internet or, more precisely, missing pieces of the puzzle, a fateful choice was made in the early days of the build-out of these services which now pervade our lives, and we’re all paying the price for it. So far, it has allowed the few companies in this data oligopoly to join the ranks of the largest, most profitable, and most highly valued enterprises in human history, but they may be built on a flawed business model and foundation vulnerable to disruption by software and hardware technologies presently emerging.

The basic business model of what we might call the “consumer Internet” (as opposed to businesses who pay to host their Web presence, on-line stores, etc.) has, with few exceptions, evolved to be what the author calls the “Google model” (although it predates Google): give the product away and make money by afflicting its users with advertisements (which are increasingly targeted to them through information collected from the user’s behaviour on the network through intrusive tracking mechanisms). The fundamental flaws of this are apparent to anybody who uses the Internet: the constant clutter of advertisements, with pop-ups, pop-overs, auto-play video and audio, flashing banners, incessant requests to allow tracking “cookies” or irritating notifications, and the consequent arms race between ad blockers and means to circumvent them, with browser developers (at least those not employed by those paid by the advertisers, directly or indirectly) caught in the middle. There are even absurd Web sites which charge a subscription fee for “membership” and then bombard these paying customers with advertisements that insult their intelligence. But there is a fundamental problem with “free”—it destroys the most important channel of communication between the vendor of a product or service and the customer: the price the customer is willing to pay. Deprived of this information, the vendor is in the same position as a factory manager in a centrally planned economy who has no idea how many of each item to make because his orders are handed down by a planning bureau equally clueless about what is needed in the absence of a price signal. In the end, you have freight cars of typewriter ribbons lined up on sidings while customers wait in line for hours in the hope of buying a new pair of shoes. Further, when the user is not the customer (the one who pays), and especially when a “free” service verges on monopoly status like Google search, Gmail, Facebook, and Twitter, there is little incentive for providers to improve the user experience or be responsive to user requests and needs. Users are subjected to the endless torment of buggy “beta” releases, capricious change for the sake of change, and compromises in the user experience on behalf of the real customers—the advertisers. Once again, this mirrors the experience of centrally-planned economies where the market feedback from price is absent: to appreciate this, you need only compare consumer products from the 1970s and 1980s manufactured in the Soviet Union with those from Japan.

The fundamental flaw in Karl Marx’s economics was his belief that the industrial revolution of his time would produce such abundance of goods that the problem would shift from “production amid scarcity” to “redistribution of abundance”. In the author’s view, the neo-Marxists of Silicon Valley see the exponentially growing technologies of computing and communication providing such abundance that they can give away its fruits in return for collecting and monetising information collected about their users (note, not “customers”: customers are those who pay for the information so collected). Once you grasp this, it’s easier to understand the politics of the barons of Silicon Valley.

The centralisation of data and information flow in these vast data silos creates another threat to which a distributed system is immune: censorship or manipulation of information flow, whether by a coercive government or ideologically-motivated management of the companies who provide these “free” services. We may never know who first said “The Internet treats censorship as damage and routes around it” (the quote has been attributed to numerous people, including two personal friends, so I’m not going there), but it’s profound: the original decentralised structure of the ARPANET/Internet is as robust against censorship as it is in the face of nuclear war. If one or more nodes on the network start to censor information or refuse to forward it on communication links it controls, the network routing protocols simply assume that node is down and send data around it through other nodes and paths which do not censor it. On a network with a multitude of nodes and paths among them, owned by a large and diverse population of operators, it is extraordinarily difficult to shut down the flow of information from a given source or viewpoint; there will almost always be an alternative route that gets it there. (Cryptographic protocols and secure and verified identities can similarly avoid the alteration of information in transit or forging information and attributing it to a different originator; I’ll discuss that later.) As with physical damage, top-down censorship does not work because there’s no top.

But with the current centralised Internet, the owners and operators of these data silos have enormous power to put their thumbs on the scale, tilting opinion in their favour and blocking speech they oppose. Google can push down the page rank of information sources of which they disapprove, so few users will find them. YouTube can “demonetise” videos because they dislike their content, cutting off their creators’ revenue stream overnight with no means of appeal, or they can outright ban creators from the platform and remove their existing content. Twitter routinely “shadow-bans” those with whom they disagree, causing their tweets to disappear into the void, and outright banishes those more vocal. Internet payment processors and crowd funding sites enforce explicit ideological litmus tests on their users, and revoke long-standing commercial relationships over legal speech. One might restate the original observation about the Internet as “The centralised Internet treats censorship as an opportunity and says, ‘Isn’t it great!’ ” Today there’s a top, and those on top control the speech of everything that flows through their data silos.

This pernicious centralisation and “free” funding by advertisement (which is fundamentally plundering users’ most precious possessions: their time and attention) were in large part the consequence of the Internet’s lacking three fundamental architectural layers: security, trust, and transactions. Let’s explore them.

Security. Essential to any useful communication system, security simply means that communications between parties on the network cannot be intercepted by third parties, modified en route, or otherwise manipulated (for example, by changing the order in which messages are received). The communication protocols of the Internet, based on the OSI model, had no explicit security layer; security was expected to be implemented outside the model, across the layers of protocol. On today's Internet, security has been bolted on, largely through the Transport Layer Security (TLS) protocols (which, due to history, have a number of other commonly used names, and are most often encountered in the "https:" URLs by which users access Web sites). But because TLS was bolted on rather than designed in from the bottom up, and because it "just grew," it has been the locus of numerous security flaws which put software that employs it at risk. Further, TLS is a tool which must be used by application designers with extreme care in order to deliver security to their users. Even if TLS were completely flawless, it is very easy to misuse it in an application and compromise users' security.
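
Here is what "bolted on" looks like in practice (a sketch of mine in Python, not an example from the book): the plain TCP socket knows nothing of security, and the application must remember to wrap it, name the host, and let the certificate be checked; omit any step and the security quietly evaporates.

    import socket
    import ssl

    hostname = "www.example.com"  # any HTTPS site; purely illustrative
    context = ssl.create_default_context()  # verifies certificates by default

    with socket.create_connection((hostname, 443)) as raw_tcp:
        # Security is layered on here, at the application's initiative,
        # rather than being a property of the network itself.
        with context.wrap_socket(raw_tcp, server_hostname=hostname) as tls:
            print(tls.version())                 # e.g. 'TLSv1.3'
            print(tls.getpeercert()["subject"])  # identity vouched for by a CA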

Trust. As indispensable as security is knowing to whom you’re talking. For example, when you connect to your bank’s Web site, how do you know you’re actually talking to their server and not some criminal whose computer has spoofed your computer’s domain name system server to intercept your communications and who, the moment you enter your password, will be off and running to empty your bank accounts and make your life a living Hell? Once again, trust has been bolted on to the existing Internet through a rickety system of “certificates” issued mostly by large companies for outrageous fees. And, as with anything centralised, it’s vulnerable: in 2016, one of the top-line certificate vendors was compromised, requiring myriad Web sites (including this one) to re-issue their security certificates.

Transactions. Business is all about transactions; if you aren’t doing transactions, you aren’t in business or, as Gilder puts it, “In business, the ability to conduct transactions is not optional. It is the way all economic learning and growth occur. If your product is ‘free,’ it is not a product, and you are not in business, even if you can extort money from so-called advertisers to fund it.” The present-day Internet has no transaction layer, even bolted on. Instead, we have more silos and bags hanging off the side of the Internet called PayPal, credit card processing companies, and the like, which try to put a Band-Aid over the suppurating wound which is the absence of a way to send money over the Internet in a secure, trusted, quick, efficient, and low-overhead manner. The need for this was perceived long before ARPANET. In Project Xanadu, founded by Ted Nelson in 1960, rule 9 of the “original 17 rules” was, “Every document can contain a royalty mechanism at any desired degree of granularity to ensure payment on any portion accessed, including virtual copies (‘transclusions’) of all or part of the document.” While defined in terms of documents and quoting, this implied the existence of a micropayment system which would allow compensating authors and publishers for copies and quotations of their work with a granularity as small as one character, and could easily be extended to cover payments for products and services. A micropayment system must be able to handle very small payments without crushing overhead, extremely quickly, and transparently (without the Japanese tea ceremony that buying something on-line involves today). As originally envisioned by Ted Nelson, as you read documents, their authors and publishers would be automatically paid for their content, including payments to the originators of material from others embedded within them. As long as the total price for the document was less than what I termed the user’s “threshold of paying”, this would be completely transparent (a user would set the threshold in the browser: if zero, they’d have to approve all payments). There would be no need for advertisements to support publication on a public hypertext network (although publishers would, of course, be free to adopt that model if they wished). If implemented in a decentralised way, like the ARPANET, there would be no central strangle point where censorship could be applied by cutting off the ability to receive payments.
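
A sketch of how Nelson's "threshold of paying" might look from the user's side (my illustration; all names here are hypothetical): payments below the threshold go through silently as you read, while anything larger requires explicit approval.

    class Wallet:
        def __init__(self, balance_cents, threshold_cents):
            self.balance = balance_cents
            # A threshold of zero means every payment must be approved by hand.
            self.threshold = threshold_cents

        def pay(self, payee, amount_cents):
            if amount_cents > self.threshold:
                if input(f"Pay {amount_cents} cents to {payee}? [y/N] ") != "y":
                    return False  # user declined; content stays locked
            self.balance -= amount_cents
            return True

    wallet = Wallet(balance_cents=500.0, threshold_cents=1.0)
    # Reading a document pays its author and the authors of anything
    # transcluded within it, transparently, at sub-cent granularity.
    for payee, cost in [("author", 0.04), ("quoted_source", 0.01)]:
        wallet.pay(payee, cost)  # below threshold: no ceremony at all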

So, is it possible to remake the Internet, building in security, trust, and transactions as the foundation, and replace what the author calls the “Google system of the world” with one in which the data silos are seen as obsolete, control of users’ personal data and work returns to their hands, privacy is respected and the panopticon snooping of today is seen as a dark time we’ve put behind us, and the pervasive and growing censorship by plutocrat ideologues and slaver governments becomes impotent and obsolete? George Gilder responds “yes”, and in this book identifies technologies already existing and being deployed which can bring about this transformation.

At the heart of many of these technologies is the concept of a blockchain, an open, distributed ledger which records transactions or any other form of information in a permanent, public, and verifiable manner. Originally conceived as the transaction ledger for the Bitcoin cryptocurrency, it provided the first means of solving the double-spending problem (how do you keep people from spending a unit of electronic currency twice) without the need for a central server or trusted authority, and hence without a potential choke-point or vulnerability to attack or failure. Since the launch of Bitcoin in 2009, blockchain technology has become a major area of research, with banks and other large financial institutions, companies such as IBM, and major university research groups exploring applications with the goals of drastically reducing transaction costs, improving security, and hardening systems against single-point failure risks.
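
The core data structure is small enough to sketch (a toy of mine, omitting the proof-of-work, signatures, and consensus protocol that make Bitcoin work): each block commits to its predecessor's hash, so altering any past entry breaks every link that follows and is immediately detectable.

    import hashlib
    import json

    def block_hash(block):
        # Hash everything except the stored hash itself, deterministically.
        body = {k: v for k, v in block.items() if k != "hash"}
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def add_block(chain, data):
        block = {"data": data, "prev": chain[-1]["hash"] if chain else "0" * 64}
        block["hash"] = block_hash(block)
        chain.append(block)

    def verify(chain):
        for i, block in enumerate(chain):
            if block["hash"] != block_hash(block):
                return False  # block contents were altered after the fact
            if i > 0 and block["prev"] != chain[i - 1]["hash"]:
                return False  # the chain of commitments is broken
        return True

    ledger = []
    add_block(ledger, "Alice pays Bob 5")
    add_block(ledger, "Bob pays Carol 2")
    print(verify(ledger))                      # True
    ledger[0]["data"] = "Alice pays Bob 5000"  # tamper with history...
    print(verify(ledger))                      # False: tampering is evident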

Applied to the Internet, blockchain technology can provide security and trust (through the permanent publication of public keys which identify actors on the network), and a transaction layer able to efficiently and quickly execute micropayments without the overhead, clutter, friction, and security risks of existing payment systems. By necessity, present-day blockchain implementations are add-ons to the existing Internet, but as the technology matures and is verified and tested, it can move into the foundations of a successor system, based on the same lower-level protocols (and hence compatible with the installed base), but eventually supplanting the patched-together architecture of the Domain Name System, certificate authorities, and payment processors, all of which represent vulnerabilities of the present-day Internet and points at which censorship and control can be imposed. Technologies to watch in these areas are:

As the bandwidth available to users on the edge of the network increases through the deployment of fibre to the home and enterprise and via 5G mobile technology, the data transfer economy of scale of the great data silos will begin to erode. Early in the Roaring Twenties, the aggregate computing power and communication bandwidth on the edge of the network will equal and eventually dwarf that of the legacy data smelters of Google, Facebook, Twitter, and the rest. There will no longer be any need for users to entrust their data to these overbearing anachronisms and consent to multi-dozen page “terms of service” or endure advertising just to see their own content or share it with others. You will be in possession of your own data, on your own server or on space for which you freely contract with others, with backup and other services contracted with any other provider on the network. If your server has extra capacity, you can turn it into money by joining the market for computing and storage capacity, just as you take advantage of these resources when required. All of this will be built on the new secure foundation, so you will retain complete control over who can see your data, no longer trusting weasel-worded promises made by amorphous entities with whom you have no real contract to guard your privacy and intellectual property rights. If you wish, you can be paid for your content, with remittances made automatically as people access it. More and more, you’ll make tiny payments for content which is no longer obstructed by advertising and chopped up to accommodate more clutter. And when outrage mobs of pink hairs and soybeards (each with their own pronoun) come howling to ban you from the Internet, they’ll find nobody to shriek at and the kill switch rusting away in a derelict data centre: your data will be in your own hands with access through myriad routes. Technologies moving in this direction include:

This book provides a breezy look at the present state of the Internet, how we got here (versus where we thought we were going in the 1990s), and how we might transcend the present-day mess into something better if not blocked by the heavy hand of government regulation (the risk of freezing the present-day architecture in place by unleashing agencies like the U.S. Federal Communications Commission, which stifled innovation in broadcasting for six decades, to do the same to the Internet is discussed in detail). Although it’s way too early to see which of the many contending technologies will win out (and recall that the technically superior technology doesn’t always prevail), a survey of work in progress provides a sense for what they have in common and what the eventual result might look like.

There are many things to quibble about here. Gilder goes on at some length about how he believes artificial intelligence is all nonsense, that computers can never truly think or be conscious, and that creativity (new information in the Shannon sense) can only come from the human mind, with a lot of confused arguments from Gödel incompleteness, the Turing halting problem, and even the uncertainty principle of quantum mechanics. He really seems to believe in vitalism, that there is an élan vital which somehow infuses the biological substrate which no machine can embody. This strikes me as superstitious nonsense: a human brain is a structure composed of quarks and electrons arranged in a certain way which processes information, interacts with its environment, and is able to observe its own operation as well as external phenomena (which is all consciousness is about). Now, it may be that somehow quantum mechanics is involved in all of this, and that our existing computers, which are entirely deterministic and classical in their operation, cannot replicate this functionality, but if that’s so it simply means we’ll have to wait until quantum computing, which is already working in a rudimentary form in the laboratory, and is just a different way of arranging the quarks and electrons in a system, develops further.

He argues that while Bitcoin can be an efficient and secure means of processing transactions, it is unsuitable as a replacement for volatile fiat money because, unlike gold, the quantity of Bitcoin has an absolute limit, after which the supply will be capped. I don’t get it. It seems to me that this is a feature, not a bug. The supply of gold increases slowly as new gold is mined, and by pure coincidence the rate of increase in its supply has happened to approximate that of global economic growth. But still, the existing inventory of gold dwarfs new supply, so there isn’t much difference between a very slowly increasing supply and a static one. If you’re on a pure gold standard and economic growth is faster than the increase in the supply of gold, there will be gradual deflation because a given quantity of gold will buy more in the future. But so what? In a deflationary environment, interest rates will be low and it will be easy to fund new investment, since investors will receive money back which will be more valuable. With Bitcoin, once the entire supply is mined, supply will be static (actually, very slowly shrinking, as private keys are eventually lost, which is precisely like gold being consumed by industrial uses from which it is not reclaimed), but Bitcoin can be divided without limit (with minor and upward-compatible changes to the existing protocol). So, it really doesn’t matter if, in the greater solar system economy of the year 8537, a single Bitcoin is sufficient to buy Jupiter: transactions will simply be done in yocto-satoshis or whatever. In fact, Bitcoin is better in this regard than gold, which cannot be subdivided below the unit of one atom.
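
For concreteness, the arithmetic behind that divisibility point, using the protocol's well-known constants:

    MAX_BTC = 21_000_000        # hard cap on the number of bitcoins
    SATOSHI_PER_BTC = 10**8     # smallest unit under the current protocol

    total_units = MAX_BTC * SATOSHI_PER_BTC
    print(f"{total_units:,}")   # 2,100,000,000,000,000 indivisible units

    # If that granularity ever proved too coarse, the unit could be
    # subdivided again by an upward-compatible protocol change -- gold,
    # by contrast, stops at one atom.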

Gilder further argues, as he did in The Scandal of Money, that the proper dimensional unit for money is time, since that is the measure of what is required to create true wealth (as opposed to funny money created by governments or fantasy money “earned” in zero-sum speculation such as currency trading), and that existing cryptocurrencies do not meet this definition. I’ll take his word on the latter point; it’s his definition, after all, but his time theory of money is way too close to the Marxist labour theory of value to persuade me. That theory is trivially falsified by its prediction that more value is created in labour-intensive production of the same goods than by producing them in a more efficient manner. In fact, value, measured as profit, dramatically increases as the labour input to production is reduced. Over forty centuries of human history, the one thing in common among almost everything used for money (at least until our post-reality era) is scarcity: the supply is limited and it is difficult to increase it. The genius of Bitcoin and its underlying blockchain technology is that it solved the problem of how to make a digital good, which can be copied at zero cost, scarce, without requiring a central authority. That seems to meet the essential requirement to serve as money, regardless of how you define that term.

Gilder’s books have a good record for sketching the future of technology and identifying the trends which are contributing to it. He has been less successful picking winners and losers; I wouldn’t make investment decisions based on his evaluation of products and companies, but rather wait until the market sorts out those which will endure.

Gilder, George. Life after Google. Washington: Regnery Publishing, 2018. ISBN 978-1-62157-576-4.

Here is a talk by the author at the Blockstack Berlin 2018 conference which summarises the essentials of his thesis in just eleven minutes and ends with an exhortation to designers and builders of the new Internet to “tear down these walls” around the data centres which imprison our personal information.

This Uncommon Knowledge interview provides, in 48 minutes, a calmer and more in-depth exploration of why the Google world system must fail and what may replace it.

Too Hot for YouTube

Pat Condell’s most recent video, “A Word to the Google Feminists”, was removed by YouTube two hours after it was posted.  It has since been re-posted on LiveLeak, BitChute, and PewTube.  None of these sites supports embedding video (or if they do, they’ve made it sufficiently obscure that I can’t find the links), but you can view the video by clicking the links above.
