In his 1990 book Life after Television, George Gilder predicted that the personal computer, then mostly boxes that sat on desktops and worked in isolation from one another, would become more personal, mobile, and be used more to communicate than to compute. In the 1994 revised edition of the book, he wrote: “The most common personal computer of the next decade will be a digital cellular phone with an IP address … connecting to thousands of databases of all kinds.” In contemporary speeches he expanded on the idea, saying, “it will be as portable as your watch and as personal as your wallet; it will recognize speech and navigate streets; it will collect your mail, your news, and your paycheck.” In 2000, he published Telecosm, where he forecast that the building out of a fibre optic communication infrastructure and the development of successive generations of spread spectrum digital mobile communication technologies would effectively cause the cost of communication bandwidth (the quantity of data which can be transmitted in a given time) to asymptotically approach zero, just as the ability to pack more and more transistors on microprocessor and memory chips was doing for computing.
Clearly, when George Gilder forecasts the future of computing, communication, and the industries and social phenomena that spring from them, it’s wise to pay attention. He’s not infallible: in 1990 he predicted that “in the world of networked computers, no one would have to see an advertisement he didn’t want to see”. Oh, well. The very difference between that happy vision and the advertisement-cluttered world we inhabit today, rife with bots, malware, scams, and serial large-scale security breaches which compromise the personal data of millions of people and expose them to identity theft and other forms of fraud, is the subject of this book: how we got here, and how technology is opening a path to move on to a better place.
The Internet was born with decentralisation as a central concept. Its U.S. government-funded precursor, ARPANET, was intended to research and demonstrate the technology of packet switching, in which dedicated communication lines from point to point (as in the telephone network) were replaced by switching packets (which can represent all kinds of data: text, voice, video, mail, cat pictures) from source to destination over shared high-speed data links. If the network had multiple paths from source to destination, failure of one data link would simply cause the network to reroute traffic onto a working path, and communication protocols would cause any packets lost in the failure to be automatically re-sent, preventing loss of data. The network might degrade and deliver data more slowly if links or switching hubs went down, but everything would still get through.
This was very attractive to military planners in the Cold War, who worried about a nuclear attack decapitating their command and control network by striking one or a few locations through which their communications funnelled. A distributed network, of which ARPANET was the prototype, would be immune to this kind of top-down attack because there was no top: it was made up of peers, spread all over the landscape, all able to switch data among themselves through a mesh of interconnecting links.
As the ARPANET grew into the Internet and expanded from a small community of military, government, university, and large company users into a mass audience in the 1990s, this fundamental architecture was preserved, but in practice the network bifurcated into a two-tier structure. The top tier consisted of the original ARPANET-like users, plus “Internet Service Providers” (ISPs), who had top-tier (“backbone”) connectivity and resold Internet access to their customers, most of whom initially connected via dial-up modems. Over time, these customers obtained higher bandwidth via cable television connections, satellite dishes, digital subscriber lines (DSL) over the wired telephone network, and, more recently, mobile devices such as cellular telephones and tablets.
The architecture of the Internet remained the same, but this evolution resulted in a weakening of its peer-to-peer structure. The approaching exhaustion of 32-bit Internet addresses (IPv4) and the slow deployment of its successor (IPv6) meant most small-scale Internet users did not have a permanent address where others could contact them. In an attempt to shield users from the flawed security model and implementation of the software they ran, their Internet connections were increasingly placed behind firewalls and subjected to Network Address Translation (NAT), which made it impossible to establish peer-to-peer connections without a third party intermediary (which, of course, subverts the design goal of decentralisation). While on the ARPANET and the original Internet every site was a peer of every other (subject only to the speed of their network connections and computer power available to handle network traffic), the network population now became increasingly divided into producers or publishers (who made information available), and consumers (who used the network to access the publishers’ sites but did not publish themselves).
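To see concretely why NAT breaks peer-to-peer connectivity: the translator only builds its mapping table from outbound traffic, so an unsolicited inbound packet matches no table entry and is simply dropped. Here is a minimal toy sketch in Python (the addresses and port range are, of course, hypothetical):

```python
class NAT:
    """Toy Network Address Translation table: outbound connections
    create mappings; unsolicited inbound packets are dropped."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}            # public_port -> (private_ip, private_port)
        self.next_port = 40000     # arbitrary starting port for this sketch

    def outbound(self, private_ip, private_port):
        """A host behind the NAT initiates a connection: allocate a
        public port and remember the mapping."""
        port = self.next_port
        self.next_port += 1
        self.table[port] = (private_ip, private_port)
        return (self.public_ip, port)

    def inbound(self, public_port):
        """A packet arrives from outside: deliverable only if some
        earlier outbound connection created a mapping."""
        return self.table.get(public_port)   # None means dropped

nat = NAT("203.0.113.7")
addr = nat.outbound("192.168.1.20", 5000)   # inside host calls out: fine
print(nat.inbound(addr[1]))   # ('192.168.1.20', 5000): reply gets through
print(nat.inbound(41234))     # None: nobody outside can call in first
```

The asymmetry in the last two lines is the whole problem: a host behind NAT can be a consumer but not, without an intermediary, a publisher or peer.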
While in the mid-1990s it was easy (or as easy as anything was in that era) to set up your own Web server and publish anything you wished, now most small-scale users were forced to employ hosting services operated by the publishers to make their content available. Services such as AOL, Myspace, Blogger, Facebook, and YouTube were widely used by individuals and companies to host their content, while those wishing their own apparently independent Web presence moved to hosting providers who supplied, for a fee, the servers, storage, and Internet access used by the site.
All of this led to a centralisation of data on the Web, which was accelerated by the emergence of the high speed fibre optic links and massive computing power upon which Gilder had based his 1990 and 2000 forecasts. Both of these came with great economies of scale: it cost a company like Google or Amazon much less per unit of computing power or network bandwidth to build a large, industrial-scale data centre located where electrical power and cooling were inexpensive and linked to the Internet backbone by multiple fibre optic channels, than it cost an individual Internet user or small company with their own server on premises and a modest speed link to an ISP. Thus it became practical for these Goliaths of the Internet to suck up everybody’s data and resell their computing power and access at attractive prices.
As an example of the magnitude of the economies of scale we’re talking about, when I migrated the hosting of my Fourmilab.ch site from my own on-site servers and Internet connection to an Amazon Web Services data centre, my monthly bill for hosting the site dropped by a factor of fifty—not fifty percent, one fiftieth the cost, and you can bet Amazon’s making money on the deal.
This tremendous centralisation is the antithesis of the concept of ARPANET. Instead of a worldwide grid of redundant data links and data distributed everywhere, we have a modest number of huge data centres linked by fibre optic cables carrying traffic for millions of individuals and enterprises. A couple of submarines full of Trident D5s would probably suffice to reset the world, computer network-wise, to 1970.
As this concentration was occurring, the same companies who were building the data centres were offering more and more services to users of the Internet: search engines; hosting of blogs, images, audio, and video; E-mail services; social networks of all kinds; storage and collaborative working tools; high-resolution maps and imagery of the world; archives of data and research material; and a host of others. How was all of this to be paid for? Those giant data centres, after all, represent a capital investment of tens of billions of dollars, and their electricity bills are comparable to those of an aluminium smelter. Due to the architecture of the Internet or, more precisely, missing pieces of the puzzle, a fateful choice was made in the early days of the build-out of these services which now pervade our lives, and we’re all paying the price for it. So far, it has allowed the few companies in this data oligopoly to join the ranks of the largest, most profitable, and most highly valued enterprises in human history, but they may be built on a flawed business model and foundation vulnerable to disruption by software and hardware technologies presently emerging.
The basic business model of what we might call the “consumer Internet” (as opposed to businesses who pay to host their Web presence, on-line stores, etc.) has, with few exceptions, evolved to be what the author calls the “Google model” (although it predates Google): give the product away and make money by afflicting its users with advertisements (which are increasingly targeted to them through information collected from the user’s behaviour on the network through intrusive tracking mechanisms). The fundamental flaws of this are apparent to anybody who uses the Internet: the constant clutter of advertisements, with pop-ups, pop-overs, auto-play video and audio, flashing banners, incessant requests to allow tracking “cookies” or irritating notifications, and the consequent arms race between ad blockers and means to circumvent them, with browser developers (at least those not paid, directly or indirectly, by the advertisers) caught in the middle. There are even absurd Web sites which charge a subscription fee for “membership” and then bombard these paying customers with advertisements that insult their intelligence. But there is a fundamental problem with “free”—it destroys the most important channel of communication between the vendor of a product or service and the customer: the price the customer is willing to pay. Deprived of this information, the vendor is in the same position as a factory manager in a centrally planned economy who has no idea how many of each item to make because his orders are handed down by a planning bureau equally clueless about what is needed in the absence of a price signal. In the end, you have freight cars of typewriter ribbons lined up on sidings while customers wait in line for hours in the hope of buying a new pair of shoes.
Further, when the user is not the customer (the one who pays), and especially when a “free” service verges on monopoly status like Google search, Gmail, Facebook, and Twitter, there is little incentive for providers to improve the user experience or be responsive to user requests and needs. Users are subjected to the endless torment of buggy “beta” releases, capricious change for the sake of change, and compromises in the user experience on behalf of the real customers—the advertisers. Once again, this mirrors the experience of centrally-planned economies where the market feedback from price is absent: to appreciate this, you need only compare consumer products from the 1970s and 1980s manufactured in the Soviet Union with those from Japan.
The fundamental flaw in Karl Marx’s economics was his belief that the industrial revolution of his time would produce such abundance of goods that the problem would shift from “production amid scarcity” to “redistribution of abundance”. In the author’s view, the neo-Marxists of Silicon Valley see the exponentially growing technologies of computing and communication providing such abundance that they can give away its fruits in return for collecting and monetising information collected about their users (note, not “customers”: customers are those who pay for the information so collected). Once you grasp this, it’s easier to understand the politics of the barons of Silicon Valley.
The centralisation of data and information flow in these vast data silos creates another threat to which a distributed system is immune: censorship or manipulation of information flow, whether by a coercive government or ideologically-motivated management of the companies who provide these “free” services. We may never know who first said “The Internet treats censorship as damage and routes around it” (the quote has been attributed to numerous people, including two personal friends, so I’m not going there), but it’s profound: the original decentralised structure of the ARPANET/Internet is as robust against censorship as it is in the face of nuclear war. If one or more nodes on the network start to censor information or refuse to forward it on communication links it controls, the network routing protocols simply assume that node is down and send data around it through other nodes and paths which do not censor it. On a network with a multitude of nodes and paths among them, owned by a large and diverse population of operators, it is extraordinarily difficult to shut down the flow of information from a given source or viewpoint; there will almost always be an alternative route that gets it there. (Cryptographic protocols and secure and verified identities can similarly avoid the alteration of information in transit or forging information and attributing it to a different originator; I’ll discuss that later.) As with physical damage, top-down censorship does not work because there’s no top.
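This routing-around behaviour is easy to illustrate: treat the network as a graph and search for any path that avoids failed (or censoring) nodes. A toy sketch, using a made-up five-node mesh:

```python
from collections import deque

def find_path(links, source, dest, blocked=frozenset()):
    """Breadth-first search for a route from source to dest,
    ignoring any nodes in `blocked` (failed or censoring)."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dest:
            return path
        for nxt in links.get(node, ()):
            if nxt not in visited and nxt not in blocked:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # no surviving route

# A toy mesh: each node lists its neighbours.
mesh = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

print(find_path(mesh, "A", "E"))                 # ['A', 'B', 'D', 'E']
print(find_path(mesh, "A", "E", blocked={"B"}))  # ['A', 'C', 'D', 'E']
```

Knock out node B and traffic flows through C instead; only by blocking every path (here, the single chokepoint D) can the flow be stopped, which is exactly why centralisation into a few silos re-creates the chokepoints the original design eliminated.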
But with the current centralised Internet, the owners and operators of these data silos have enormous power to put their thumbs on the scale, tilting opinion in their favour and blocking speech they oppose. Google can push down the page rank of information sources of which they disapprove, so few users will find them. YouTube can “demonetise” videos because they dislike their content, cutting off their creators’ revenue stream overnight with no means of appeal, or they can outright ban creators from the platform and remove their existing content. Twitter routinely “shadow-bans” those with whom they disagree, causing their tweets to disappear into the void, and outright banishes those more vocal. Internet payment processors and crowdfunding sites enforce explicit ideological litmus tests on their users, and revoke long-standing commercial relationships over legal speech. One might restate the original observation about the Internet as “The centralised Internet treats censorship as an opportunity and says, ‘Isn’t it great!’ ” Today there is a top, and those on top control all the speech that flows through their data silos.
This pernicious centralisation and “free” funding by advertisement (which is fundamentally plundering users’ most precious possessions: their time and attention) were in large part the consequence of the Internet’s lacking three fundamental architectural layers: security, trust, and transactions. Let’s explore them.
Security. Essential to any useful communication system, security simply means that communications between parties on the network cannot be intercepted by third parties, modified en route, or otherwise manipulated (for example, by changing the order in which messages are received). The core communication protocols of the Internet had no explicit security layer: security was expected to be implemented outside the protocol stack, by the applications at each end. On today’s Internet, security has been bolted on, largely through the Transport Layer Security (TLS) protocols (which, due to history, go by a number of other commonly used names, and are most often encountered in the “https:” URLs by which users access Web sites). But because TLS was bolted on rather than designed in from the bottom up (it “just grew”), it has been the locus of numerous security flaws which put software that employs it at risk. Further, TLS is a tool which application designers must use with extreme care in order to deliver security to their users: even if TLS were completely flawless, it is very easy to misuse it in an application and compromise users’ security.
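How easy it is to misuse can be seen in a few lines of code. In Python’s standard ssl module, for example, the secure defaults verify both the certificate chain and the hostname, and it takes exactly two statements to silently discard both protections, a mistake that turns up constantly in shipped applications:

```python
import socket
import ssl

def fetch_cert_subject(host, port=443):
    """Connect with certificate and hostname verification enabled
    (the secure default) and return the server certificate's subject."""
    context = ssl.create_default_context()  # verifies chain AND hostname
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["subject"]

# The classic misuse: two lines that leave the connection encrypted
# but trivially interceptable by anyone who can spoof DNS or routing.
insecure = ssl.create_default_context()
insecure.check_hostname = False          # stop checking who we talk to
insecure.verify_mode = ssl.CERT_NONE     # accept any certificate at all
```

The insecure variant still shows a padlock’s worth of encryption, which is precisely why bolted-on security is so treacherous: nothing visibly fails.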
Trust. As indispensable as security is knowing to whom you’re talking. For example, when you connect to your bank’s Web site, how do you know you’re actually talking to their server and not some criminal whose computer has spoofed your computer’s domain name system server to intercept your communications and who, the moment you enter your password, will be off and running to empty your bank accounts and make your life a living Hell? Once again, trust has been bolted on to the existing Internet through a rickety system of “certificates” issued mostly by large companies for outrageous fees. And, as with anything centralised, it’s vulnerable: in 2016, one of the top-line certificate vendors was compromised, requiring myriad Web sites (including this one) to re-issue their security certificates.
Transactions. Business is all about transactions; if you aren’t doing transactions, you aren’t in business or, as Gilder puts it, “In business, the ability to conduct transactions is not optional. It is the way all economic learning and growth occur. If your product is ‘free,’ it is not a product, and you are not in business, even if you can extort money from so-called advertisers to fund it.” The present-day Internet has no transaction layer, even bolted on. Instead, we have more silos and bags hanging off the side of the Internet called PayPal, credit card processing companies, and the like, which try to put a Band-Aid over the suppurating wound which is the absence of a way to send money over the Internet in a secure, trusted, quick, efficient, and low-overhead manner. The need for this was perceived long before ARPANET. In Project Xanadu, founded by Ted Nelson in 1960, rule 9 of the “original 17 rules” was, “Every document can contain a royalty mechanism at any desired degree of granularity to ensure payment on any portion accessed, including virtual copies (‘transclusions’) of all or part of the document.” While defined in terms of documents and quoting, this implied the existence of a micropayment system which would allow compensating authors and publishers for copies and quotations of their work with a granularity as small as one character, and could easily be extended to cover payments for products and services. A micropayment system must be able to handle very small payments without crushing overhead, extremely quickly, and transparently (without the Japanese tea ceremony that buying something on-line involves today). As originally envisioned by Ted Nelson, as you read documents, their authors and publishers would be automatically paid for their content, including payments to the originators of material from others embedded within them. 
As long as the total price for the document was less than what I termed the user’s “threshold of paying”, this would be completely transparent (a user would set the threshold in the browser: if zero, they’d have to approve all payments). There would be no need for advertisements to support publication on a public hypertext network (although publishers would, of course, be free to adopt that model if they wished). If implemented in a decentralised way, like the ARPANET, there would be no central strangle point where censorship could be applied by cutting off the ability to receive payments.
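The “threshold of paying” is straightforward to sketch. The toy wallet below (my own illustration, not anything from Xanadu’s actual design) pays a document’s royalties silently, split among its authors and transclusion sources, when the total is at or below the user’s threshold, and otherwise demands explicit approval; a threshold of zero means every payment must be approved:

```python
class Wallet:
    """Sketch of transparent micropayments with a user-set threshold.
    Amounts are in an arbitrary smallest currency unit."""

    def __init__(self, balance, threshold):
        self.balance = balance
        self.threshold = threshold   # 0 = approve every payment manually

    def read_document(self, royalties, approve=lambda total: False):
        """royalties maps each author or transcluded source to its
        share of the document's price. Returns True if paid."""
        total = sum(royalties.values())
        if total > self.threshold and not approve(total):
            return False             # user declined the payment
        if total > self.balance:
            return False             # insufficient funds
        self.balance -= total
        return True

w = Wallet(balance=10_000, threshold=50)
w.read_document({"author": 30, "quoted_source": 10})  # 40 <= 50: silent
print(w.balance)                                      # 9960
```

Payments under the threshold never interrupt reading, which is the whole point: the price signal flows to authors without the reader performing today’s checkout ceremony for every page.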
So, is it possible to remake the Internet, building in security, trust, and transactions as the foundation, and replace what the author calls the “Google system of the world” with one in which the data silos are seen as obsolete, control of users’ personal data and work returns to their hands, privacy is respected and the panopticon snooping of today is seen as a dark time we’ve put behind us, and the pervasive and growing censorship by plutocrat ideologues and slaver governments becomes impotent and obsolete? George Gilder responds “yes”, and in this book identifies technologies already existing and being deployed which can bring about this transformation.
At the heart of many of these technologies is the concept of a blockchain, an open, distributed ledger which records transactions or any other form of information in a permanent, public, and verifiable manner. Originally conceived as the transaction ledger for the Bitcoin cryptocurrency, it provided the first means of solving the double-spending problem (how do you keep people from spending a unit of electronic currency twice) without the need for a central server or trusted authority, and hence without a potential choke-point or vulnerability to attack or failure. Since the launch of Bitcoin in 2009, blockchain technology has become a major area of research, with banks and other large financial institutions, companies such as IBM, and major university research groups exploring applications with the goals of drastically reducing transaction costs, improving security, and hardening systems against single-point failure risks.
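At its core the data structure is just a hash chain: each block commits to the hash of its predecessor, so altering any historical entry invalidates every block after it. A minimal sketch (deliberately omitting proof-of-work, signatures, and the peer-to-peer consensus that let a real blockchain dispense with a central server):

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(entries):
    """Each block records its predecessor's hash, so altering any
    past entry changes every hash after it."""
    chain, prev = [], "0" * 64          # genesis predecessor
    for data in entries:
        block = {"data": data, "prev": prev}
        chain.append(block)
        prev = block_hash(block)
    return chain

def verify(chain):
    """Recompute the hash links; any tampering breaks a link."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain(["alice pays bob 5", "bob pays carol 2"])
print(verify(chain))                     # True
chain[0]["data"] = "alice pays bob 500"  # tamper with history
print(verify(chain))                     # False
```

What Bitcoin added to this old idea was a decentralised way to decide *which* chain is authoritative (proof-of-work plus the longest-chain rule), which is what eliminates the trusted central ledger-keeper.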
Applied to the Internet, blockchain technology can provide security and trust (through the permanent publication of public keys which identify actors on the network), and a transaction layer able to efficiently and quickly execute micropayments without the overhead, clutter, friction, and security risks of existing payment systems. By necessity, present-day blockchain implementations are add-ons to the existing Internet, but as the technology matures and is verified and tested, it can move into the foundations of a successor system, based on the same lower-level protocols (and hence compatible with the installed base), but eventually supplanting the patched-together architecture of the Domain Name System, certificate authorities, and payment processors, all of which represent vulnerabilities of the present-day Internet and points at which censorship and control can be imposed. Technologies to watch in these areas are:
As the bandwidth available to users on the edge of the network increases through the deployment of fibre to the home and enterprise and via 5G mobile technology, the data transfer economy of scale of the great data silos will begin to erode. Early in the Roaring Twenties, the aggregate computing power and communication bandwidth on the edge of the network will equal and eventually dwarf that of the legacy data smelters of Google, Facebook, Twitter, and the rest. There will no longer be any need for users to entrust their data to these overbearing anachronisms and consent to multi-dozen page “terms of service” or endure advertising just to see their own content or share it with others. You will be in possession of your own data, on your own server or on space for which you freely contract with others, with backup and other services contracted with any other provider on the network. If your server has extra capacity, you can turn it into money by joining the market for computing and storage capacity, just as you take advantage of these resources when required. All of this will be built on the new secure foundation, so you will retain complete control over who can see your data, no longer trusting weasel-worded promises made by amorphous entities with whom you have no real contract to guard your privacy and intellectual property rights. If you wish, you can be paid for your content, with remittances made automatically as people access it. More and more, you’ll make tiny payments for content which is no longer obstructed by advertising and chopped up to accommodate more clutter. And when outrage mobs of pink hairs and soybeards (each with their own pronoun) come howling to ban you from the Internet, they’ll find nobody to shriek at and the kill switch rusting away in a derelict data centre: your data will be in your own hands with access through myriad routes. Technologies moving in this direction include:
- Brave Web browser and Basic Attention Token
- Golem distributed supercomputer
- OTOY RNDR rendering network
This book provides a breezy look at the present state of the Internet, how we got here (versus where we thought we were going in the 1990s), and how we might transcend the present-day mess into something better if not blocked by the heavy hand of government regulation (the risk of freezing the present-day architecture in place by unleashing agencies like the U.S. Federal Communications Commission, which stifled innovation in broadcasting for six decades, to do the same to the Internet is discussed in detail). Although it’s way too early to see which of the many contending technologies will win out (and recall that the technically superior technology doesn’t always prevail), a survey of work in progress provides a sense of what they have in common and what the eventual result might look like.
There are many things to quibble about here. Gilder goes on at some length about how he believes artificial intelligence is all nonsense, that computers can never truly think or be conscious, and that creativity (new information in the Shannon sense) can only come from the human mind, with a lot of confused arguments from Gödel incompleteness, the Turing halting problem, and even the uncertainty principle of quantum mechanics. He really seems to believe in vitalism, that there is an élan vital which somehow infuses the biological substrate which no machine can embody. This strikes me as superstitious nonsense: a human brain is a structure composed of quarks and electrons arranged in a certain way which processes information, interacts with its environment, and is able to observe its own operation as well as external phenomena (which is all consciousness is about). Now, it may be that somehow quantum mechanics is involved in all of this, and that our existing computers, which are entirely deterministic and classical in their operation, cannot replicate this functionality, but if that’s so it simply means we’ll have to wait until quantum computing, which is already working in a rudimentary form in the laboratory, and is just a different way of arranging the quarks and electrons in a system, develops further.
He argues that while Bitcoin can be an efficient and secure means of processing transactions, it is unsuitable as a replacement for volatile fiat money because, unlike gold, the quantity of Bitcoin has an absolute limit, after which the supply will be capped. I don’t get it. It seems to me that this is a feature, not a bug. The supply of gold increases slowly as new gold is mined, and by pure coincidence the rate of increase in its supply has happened to approximate that of global economic growth. But still, the existing inventory of gold dwarfs new supply, so there isn’t much difference between a very slowly increasing supply and a static one. If you’re on a pure gold standard and economic growth is faster than the increase in the supply of gold, there will be gradual deflation because a given quantity of gold will buy more in the future. But so what? In a deflationary environment, interest rates will be low and it will be easy to fund new investment, since investors will receive money back which will be more valuable. With Bitcoin, once the entire supply is mined, supply will be static (actually, very slowly shrinking, as private keys are eventually lost, which is precisely like gold being consumed by industrial uses from which it is not reclaimed), but Bitcoin can be divided without limit (with minor and upward-compatible changes to the existing protocol). So, it really doesn’t matter if, in the greater solar system economy of the year 8537, a single Bitcoin is sufficient to buy Jupiter: transactions will simply be done in yocto-satoshis or whatever. In fact, Bitcoin is better in this regard than gold, which cannot be subdivided below the unit of one atom.
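The hard cap itself is a simple geometric sum: the block subsidy starts at 50 BTC, halves every 210,000 blocks, and is truncated to whole satoshis (1 BTC = 100,000,000 satoshis), so it eventually reaches zero. Summing the schedule gives the familiar just-under-21-million figure:

```python
def total_bitcoin_supply():
    """Sum the block-subsidy schedule: 50 BTC per block, halving
    every 210,000 blocks, with amounts kept in integer satoshis
    and truncated on each halving, as the protocol does."""
    subsidy = 50 * 100_000_000   # initial reward in satoshis
    total = 0
    while subsidy > 0:
        total += 210_000 * subsidy
        subsidy //= 2            # halving, with integer truncation
    return total

print(total_bitcoin_supply())    # 2099999997690000 satoshis
```

That works out to a hair under 21 million BTC, and the satoshi bookkeeping above also shows why the divisibility argument holds: the unit of account is already an integer convention, and nothing prevents a future protocol revision from accounting in still smaller units.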
Gilder further argues, as he did in The Scandal of Money, that the proper dimensional unit for money is time, since that is the measure of what is required to create true wealth (as opposed to funny money created by governments or fantasy money “earned” in zero-sum speculation such as currency trading), and that existing cryptocurrencies do not meet this definition. I’ll take his word on the latter point; it’s his definition, after all, but his time theory of money is way too close to the Marxist labour theory of value to persuade me. That theory is trivially falsified by its prediction that more value is created in labour-intensive production of the same goods than by producing them in a more efficient manner. In fact, value, measured as profit, dramatically increases as the labour input to production is reduced. Over forty centuries of human history, the one thing in common among almost everything used for money (at least until our post-reality era) is scarcity: the supply is limited and it is difficult to increase it. The genius of Bitcoin and its underlying blockchain technology is that it solved the problem of how to make a digital good, which can be copied at zero cost, scarce, without requiring a central authority. That seems to meet the essential requirement to serve as money, regardless of how you define that term.
Gilder’s books have a good record for sketching the future of technology and identifying the trends which are contributing to it. He has been less successful picking winners and losers; I wouldn’t make investment decisions based on his evaluation of products and companies, but rather wait until the market sorts out those which will endure.
Gilder, George. Life after Google. Washington: Regnery Publishing, 2018. ISBN 978-1-62157-576-4.
Here is a talk by the author at the Blockstack Berlin 2018 conference which summarises the essentials of his thesis in just eleven minutes and ends with an exhortation to designers and builders of the new Internet to “tear down these walls” around the data centres which imprison our personal information.
This Uncommon Knowledge interview provides, in 48 minutes, a calmer and more in-depth exploration of why the Google world system must fail and what may replace it.
Peter Thiel created the 1517 Fund, named after the year when Martin Luther nailed his 95 Theses to the door of All Saints’ Church in Wittenberg (this story may be apocryphal, but the theses were printed and widely distributed in 1517 and 1518). Thiel’s fund invests in ventures created by young entrepreneurs, most beneficiaries of Thiel Fellowships, who eschew or drop out of the higher education fraud to do something with their lives before their creativity is abraded away by the engine of conformity and mediocrity that present-day academia has become.
The goal of these ventures is to render impotent and obsolete the dysfunctional scam of higher education. There is no richer target to be disrupted by technology and honesty than the bloated academia-corporate-government credential shakedown racket that has turned what could have been a productive generation into indentured debt slaves, paying off worthless degrees.
On the (approximate) 500th anniversary of Luther’s 95 Theses, the 1517 Fund published “The New 95”: blows against the fraudulent academic empire. Here are a few choice excerpts.
15. Harvard could admit ten times as many students, but it doesn’t. It could open ten more campuses in different regions, but it doesn’t. Elite schools are afraid of diluting brand equity. They’re not in the education business. They’re in the luxury watch business.
19. In 1987, the year Stephen Trachtenberg became president of George Washington University, students paid $27,000 (in 2017 dollars) in tuition, room, and board. When he retired twenty years later, they paid more than double — close to $60,000. Trachtenberg made GW the most expensive school in the nation without improving education at all. The degree “serves as a trophy, a symbol,” he said. “I’m not embarrassed by what we did.” There are buildings on campus named after this guy.
28. The problem in schooling is not that we have invested too little, but that we get so little for so much.
36. There’s no iron law of economics that says tuition should go up — and only up — year after year. By many measures, universities are the same or worse at teaching students as they were in the early 1980s. But now, students are paying four times as much as they did then. Imagine paying more every year for tickets on an airline whose planes flew slower and crashed more frequently, but that spent its revenue on one hell of a nice terminal and lounge instead. Would you put that sticker on your car’s back window?
48. The people who give exams or evaluate essays and the people who teach should not be one and the same. Creating the best content for people to learn and creating a system to certify that people have achieved some level of mastery are two different problems. By fusing them into one, universities curtail freedom of thought and spark grade inflation. Critical thinking is currently mistaken for finding out what the professor wants to hear and saying it.
57. Professors should be better than snowmen. Snowstorms cancelling class tend to bring more joy to students than learning new ideas. What a strange service! Higher education, root canals, rectal exams, and schooling are the only services that consumers rejoice in having cancelled.
78. Every academic and scientific journal should be open and free to the public. It is much easier to check results for reproducibility with a billion eyes.
84. Too much of school is about proving that you can show up every day on time, work, and get along with the people around you.
This is a “for what it’s worth” posting…
I did a search and in just a few minutes I came up with a court document that stated that a judge in one part of the foreclosure proceedings was indeed Judge Martha G Kavanaugh.
A link will not take one directly to the document; instead, one is intercepted by the State of Maryland court site. Once there, just agree to the terms (after all, this is for informational purposes), then enter the case number 156006V.
Scroll down in the document tracking to number 9.
Is there a connection worthy of ruining a man’s life because of a ruling by one of his parents?
A further search on Google maps came up with an image of the property:
Now, since I’m not a lawyer, I cannot say what exactly transpired. If someone who reads this is learned in these terms, please enlighten me (and others who may need enlightening): what exactly does this mean?
Docket Date: 02/04/1997
Docket Number: 10
Docket Type: Ruling
Status: Granted
Ruling Judge: KAVANAUGH, MARTHA G
Reference Docket(s): Motion: 9
Docket Text: ORDER OF COURT (KAVANAUGH, J./RICE, M.) THAT THE VOLUNTARY MOTION TO DISMISS IS HEREBY GRANTED WITH PREJUDICE AND THAT THE BOND FILED BY HARRY J. KELLY AS TRUSTEE SHALL BE RELEASED AND RETURNED FILED. (COPIES MAILED (
Thank you, in advance, for your time in reading this and thank you, in advance, for your opinions on this subject and the information provided.
TKC 1101, you were right. You have been consistently correct on the American economy. I have enjoyed your anecdotal approach to the underlying economy, the bottom-up view that told us that President Trump was on the right track to put America back to work and make America great again. The gang of smart people at BallDiamondBall are similarly vindicated.
You are going to enjoy this article, even though you have to go to National Review to read it. Deroy Murdock has a great article up, titled “Obama Didn’t Build That.” Murdock uses a few good illustrations to completely dismantle Obama’s efforts to take credit for the Trump economy.
Murdock’s illustrations come from a White House press conference that was held on September 10. Sarah H. Sanders hosted while Kevin Hassett spoke. Hassett is the Chairman of the White House Council of Economic Advisers. I will provide links in the comments.
The Trump economy rocks.
I had a good laugh listening to NPR’s “Marketplace” coverage of the stock market highs. They tried to look past all those bright thick silver linings to find little patches of dark cloud that they could focus on. Other NPR shows have been very similar for many months. They try so earnestly to let us know that most of the news is bad if you just know where to look. They have to look really hard, and they can generally find some metrics that are not much different than they were under Obamanomics. I enjoy the transparency of their frustration. But “Marketplace” had me howling as they explained that other markets are down while ours is up, tried to brush it off, and then admitted that they are perplexed that, considering the international trade war, global investors are betting on Trump and America.
Here is an excerpt from Politico, which seems to be cheering for a “Blue Wave” in November:
Hassett denied his appearance was prompted by a Friday speech in which Obama said, “When you hear how great the economy’s doing right now, let’s just remember when this recovery started.” The current economic expansion began in mid-2009, six months into Obama’s first term.
Trump replied shortly afterward at an appearance in Fargo, N.D.: “He was trying to take credit for this incredible thing that’s happening. … It wasn’t him.”
The dispute goes to the heart of Trump’s arguments this fall. Facing ugly projections for a GOP rout, the president is trying to persuade voters to stick with Republicans by arguing they’ve delivered an economic turnaround. But many major gauges on economic growth and job growth were just as strong during parts of the Obama years, even without Trump’s deregulation and deficit-boosting tax cuts.
“One of the hypotheses that’s been floating around,” Hassett said in the briefing, “is that the strong economy that we’re seeing is just a continuation of recent trends.” But “economic historians will 100 percent accept the fact that there was an inflection at the election of Donald Trump, and that a whole bunch of data items started heading north.”
TKC 1101 has been bringing us an occasional cheery message from his experiences as a business consultant. He anticipated the trend, spotted early signs, and has kept us posted with updates. I really enjoy those posts. Thank you for the encouragement.
My own experience is in a sector that lags the general economy. The vibe has been positive for months, and it is beginning to show up in terms of real work.
Go, Trump, go !
Well, the saga continues. I posted (see BLUE text) on several “who called me” sites because another scammer got through NoMoRobo. Why did they get through? Well, this scammer, like many others, faked the caller ID to show a somewhat local number, while the number they said to call back was in Florida. So NoMoRobo did not recognize the (local) number as a scammer or robo-call (telemonster).
The message on my recorder said: “This message is intended for Jolene. I’m calling in regards to a pending matter that is being in the process of being reviewed today. I’m also calling to verify that we do have the correct address on file for this individual. To avoid any further proceedings at this time you have the right to contact the information Center. Should you wish to contact them the contact number listed as 561-223-6950 and you will have to reference your file number 16112.”
The message I posted on several web sites was: “called left this number to call back, asked for someone by first name that I never heard of except in Dolly Parton song, “Jolene”, LOL. Took the number they called from and forwarded it to the number they gave, let them get a taste of their own medicine. If you have XFINITY, you can do this free of charge, forward scammer calls back to themselves. Hope it ties up their call center!”
I hope XFINITY customers that are plagued by these calls do the same.
Below you will see two examples of portable storage media. So what’s next? Or what’s next that I can afford? Or will I need it?
(The 64 Gig thumb drive was in my pocket, it accidentally took a swim in the washing machine and survived a wash and two rinse cycles, not to forget the three spin cycles!)
An amazing abuse of government power has been uncovered at the U.S. Patent and Trademark Office. Apparently, a secret program existed until recently to flag potentially controversial patents and refuse to issue them, even if they met all the usual requirements.
“The patent office has delayed and delayed. I’m finally hoping to get to the board of appeals and to the courts to stop the delays and get my patents issued,” Hyatt said in our interview.
He alleges that the SAWS program, which was started in 1994 not long after Hyatt’s case generated huge publicity, disproportionately targeted individual inventors or small businesses. The SAWS program came to light in late 2014, and Hyatt and his attorneys allege that patent office officials used the program to secretly exercise powers with which they were never vested by law, allowing them to choose winners and losers in controversial patent cases.
A patent with the SAWS designation could not be issued without approval from higher-ups, and Hyatt found that his applications were so designated. Patent examiners were instructed not to tell applicants that their applications carried the SAWS flag. After receiving 75 patents, Hyatt has not been granted another since the 1990s, and for a long time he didn’t know why.
Private companies’ content censorship raises important public concerns of a magnitude meriting book-length treatment. Not here, however, and not by me. The left, for example, saw its near-absolute content control of most public media – print, broadcast, movies, education – as insufficient because of talk radio. Leftist radio programs fell flat while Rush Limbaugh, intolerably, soared to prominence. We know that tolerance has a very restricted meaning for leftists, thus their regulatory effort to quash conservative talk radio with the “fairness doctrine” was a case study in the use of state power in furtherance of their illiberal – totalitarian, actually – impulses and tactics.
The left never hesitates to enforce its rubrics, on pain of abusive name-calling (amplified by their “media”) or ruination at the hands of some public agency or other with enforcement powers. For instance, a Christian baker in Colorado is being singled out yet again. All sense of proportion has been lost, to such an extent that definitions of basic language and process must be re-examined. Does what we have referred to as media up until now still qualify as media?
Are newspapers and TV newscasts merely neutral means of communication for all or do they now zealously advocate one single worldview, to the vituperative exclusion of all others? It is no longer merely a medium when the New York Times “news” pages are blatantly editorial and read like daily DNC talking points. Do administrative agencies, whose rules are enacted at every level – federal, state and local – by leftist activists (who are the pervasive and permanent denizens of these administrative swamps) really represent the will of the voting majority? There are literally scores of thousands of such rules – many with huge fines or even prison sentences for non-compliance – at every level of government, so that virtually anyone could be ruined by merely coming to the attention of a “public servant” with an axe to grind – particularly vis-à-vis an uppity, outspoken conservative. Legislative or judicial oversight of such agencies, as a practical matter, is non-existent.
While it would be a terrible idea to attempt to impose a “fairness doctrine” on Silicon Valley, I am heartened that President Trump tweeted today on the subject of censorship of conservative viewpoints by social media and said “…we won’t let that happen”. As a proponent of small government, I do not advocate promiscuous use of state power to right all wrongs. However, the situation today is intolerable. With the status quo – where we cannot even be heard to object – we can only lose our rights. The power of the state is being used regularly to stifle non-progressive speech and this is being perpetrated in part by state-sanctioned companies with monopoly power. Trump’s statements are useful push-back and very necessary, as the progre$$ive $ilicon Valley types have had a free ride up until now, doing as they like to squash our views.
While I am not thrilled with use of state power generally, one of its necessary powers is to “secure” our fundamental rights – like freedom of political speech. Maybe we ought to recall Obama’s rejoinder that, “You didn’t build that…” These huge companies, to some extent after all, exist at the sufferance of the entire public and the state functionaries who represent us. It is unacceptable for companies with monopoly power to censor speech with which they disagree and to do it by subterfuges such as “offensive” or contrivances like “hate speech”. Although they are private companies and do have substantial commercial rights, such rights are not without limit and may not legitimately be used to infringe fundamental personal (and essentially political) freedom of speech rights of millions of individuals. To say otherwise is to make the Constitution into a suicide pact for conservatives and libertarians.
It is high time these behemoths began to fear negative consequences for some of their business practices, including censorship. If he chooses, President Trump can make their lives difficult and their bottom lines shrink by executive actions (and not necessarily executive orders). It is time, I think, to set the Department of Justice about the task of examining antitrust aspects of the business practices of Google (YouTube), Facebook, Twitter, Amazon, Apple, etc. The exercise will likely prove salutary.
A comment John made (#18) on a recent post by 10 cents (“Programming Question”) reminded me that I had reviewed one of John’s books. The review was posted a while back on the legacy site. As this is one of the most worthwhile books I have ever read, I thought it should be posted here.
A work of non-fiction is understood in a context. A great work actually articulates the context before anybody else gets it. A review of such a book may go seemingly far afield, if the book’s power can be construed to provoke and, indeed, license the inspired musings of its readers. Such is the case here, as “The Autodesk File”’s roots are deep in the intellectual, technological, economic, financial, and even spiritual soil of this, the spring garden of the information age.
When was the last time you couldn’t put down a book which had not a single murder, courtship, love or sex scene? OK, I’m not counting some ancillary trysts consisting of mergers and takeovers, which some might construe as sexy, or at least allude to being on the receiving end of a certain Anglo-Saxon gerund. This book contains no obscenities, save a rare mention of taurine spoor. That serves as a welcome reminder: important ideas and even emotions are amenable to description sans vulgarity.
Lest one think this a narrow commercial exposition, “The Autodesk File” is in the public domain in multiple formats. Neither is it a mere exposition of commerce. About half way through, amidst essays explaining the nature of businesses dealing in intellectual property (rather than capital-intensive equipment), the reader is treated to a short science fiction story whose theme is no less than a plausible tale of the origin of human life. Our bodily construction is, after all, prescribed in lines of code, albeit compressed into helixes wound around themselves then wrapped around histones. Like some of their software counterparts, they, too, must be unzipped before use.
Also punctuating this eclectic opus are quotes from Aristophanes. It is a tour de force, a truly awe-inspiring account of much more than the building and workings of one trailblazing company. It encapsulates the noblest of human aspirations, idealizations, creativity, ingenuity and critical self-examination; inescapable is the conclusion that voluntary cooperation and exchange of ideas, knowledge and capital is a great boon to the world at large. If a business is built to serve the needs of customers by creating products of the highest possible quality, greed is not a good; it is irrelevant. Also inescapable is the perhaps ironic conclusion that ongoing success requires continual vigilance, lest arrogance take hold. The fruition of critical self-examination can be seen in renewal of that same humility which was so essential in powering that first whiff of success.
Nonetheless, apart from arcane sections dealing with technical matters of computer hardware and programming (these, too, may be great for the cognoscenti; this writer simply knows too little), this book is a spellbinder. Readers may be surprised to be persuasively regaled with the fundamentals of various disciplines, including economics, finance, taxation, corporate law, engineering, computer science, thermodynamics, rocket science, quantum mechanics, cosmology and the nature of reality. That is, readers who don’t know John Walker. For those who do, none of this is surprising.
Have you ever had a million dollar idea? I have – lots of ‘em. Have I turned even one of those ideas into a product? Nope. Why not? Because I lacked the understanding, the talent, and the single-minded discipline to get even one idea off the ground. This book, edited by Ratburger’s own John Walker (himself author of most of the collected writings), is a chronicle of the birth, growth, crises and maturation of Autodesk Inc., whose products helped unleash the creativity and productivity of millions of people. It did so beginning with a key insight: that the infant personal computer was a general tool and not a specific workstation. As a general tool, through the intelligent design of software, it would rapidly evolve in utility in virtually every field of endeavor, beginning with design. Design, in this line of thinking, is a logical first step down the path which aims, eventually, to capture all of reality in the box we call a computer. This stunning insight occurred while all the rest of us still went through our days typing on an IBM Selectric, without once even using the word “computer.” Way back then in 1980, virtually none of us thought about computers or any of the other words and things without which our lives today would be unimaginable. Historically speaking, 1980 happened yesterday.
An additional insight guided Autodesk’s ethos: that personal computers would grow exponentially in processing power and become useful by ordinary people (with no computer or programming skills) to undertake virtually any task. Autodesk’s first product, AutoCAD, moved design from a small number of dedicated, expensive CAD workstations operated by highly-trained people, to desks virtually everywhere where drawing might be needed. In the process of “squeezing too much code into too small a box,” Autodesk did not compete with previous generations of single-purpose CAD workstations which cost 10 – 50X as much. Instead, it created and increased a market for CAD by the same orders of magnitude, by bringing this tool to the 98% of designers and draftsmen who could not afford dedicated CAD workstations.
In less than one year, this new company had a hit product. Time to rest on one’s laurels? How about after the IPO? Time to coast? Not quite. Going into the CAD business – and that is the business, as opposed to the software business (read the book to learn why) – is something like launching a rocket from Earth and hoping to land on a comet and send back data, except that the precise trajectory of the comet cannot be known, and its surface material and contours are completely unknown. The difficulties were perhaps not unlike those encountered by the ESA’s $1.8 billion Rosetta/Philae spacecraft, which did rendezvous with and land on comet 67P. Philae’s tether harpoons failed to fire, so the probe bounced and wound up in a permanently-shaded spot, preventing use of solar power (due to an unanticipated hard surface, the harpoons likely would not have worked anyway). Batteries enabled an estimated 80% overall mission success. AutoCAD’s launch – with $59,000 in capital, mid-course hardware and software corrections, and a “landing” on users – by contrast, remains successful to this day.
“The Autodesk File” attributes success to the company’s understanding that it represented what it coined “The New Technological Corporation.” This is an enterprise which does not conform to traditional capital-intensive business, as it can deploy intellectual, debt-free leverage. Such businesses embrace an unpredictable but essential element: “wild talent.” This talent is a necessary but not sufficient condition for success when it comes to creating software, which is unlike almost all prior businesses. Rather than capital, such entities require a peculiar kind of talent – one which grasps the present desires of a market, knows what is possible with present hardware, and correctly plots the trajectories of both the market and evolving hardware. I believe it to be objectively true that the editor is faithfully and humbly describing the truly awe-inspiring talent he, himself, brought to Autodesk. Other such individuals, like Jobs or Gates, are known in the early computer and software businesses. Few, however, have operated as willing members of an extended team with humility, dedication to excellence and human decency. If nothing else, “The Autodesk File” shows how this can be accomplished.
Attempts to find individuals with “wild talent” are most difficult, maybe impossible. “Wild talent” illustrates the essential difference between aggregate information, traditionally used by analysts to “value” companies which trade on public exchanges, and actual events which take place within any company. For instance, money spent on R&D is aggregate data which subsumes the activities of many employees of a given company. Whether it means the company will grow really depends on what individual employees accomplish. When it comes to software, the outcome will be notably different for R&D teams which play it safe versus ones which continually push the envelope of what may be remotely possible. Intellectual leverage is such that the cost of failure of 8 out of 10 ideas is far outweighed by success in only 1 or 2 of them. The presence of such loyal individuals is also a bulwark against hostile takeovers. You can lead a programmer to the R&D department, but you can’t make him plink – at least not in the way which is essential to success.
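The arithmetic behind that 8-out-of-10 claim can be made concrete with a toy calculation. The figures below (one unit of cost per idea, a twenty-fold payoff for a hit) are hypothetical illustrations of intellectual leverage, not numbers from the book:

```python
# Toy model of an R&D portfolio under intellectual leverage.
# All figures are hypothetical, chosen only to illustrate the asymmetry
# between the cost of failed ideas and the payoff of the rare success.

cost_per_idea = 1.0     # each project costs one unit to attempt
payoff_success = 20.0   # a software hit can return many times its cost
n_ideas = 10
n_successes = 2         # "8 out of 10 ideas" fail

total_cost = n_ideas * cost_per_idea        # 10.0
total_return = n_successes * payoff_success  # 40.0
net = total_return - total_cost

print(net)  # prints 30.0: two hits comfortably pay for eight failures
```

Because a successful program can be replicated at near-zero marginal cost, the payoff multiple for a hit dwarfs the sunk cost of the misses, which is why a team that plays it safe forgoes most of the upside.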
Perhaps most revealing about this unusual book is the ongoing critical self-examination engaged in by the primary author. These analyses were distilled into the form of internal company communications as essays and information letters. At many points in the journey, the author is able to adumbrate the – sometimes previously un-articulable – principles which guided his often momentous insights. These usually arose in chaotic circumstances with incomplete information. The essential humility of this approach is demonstrated at various points in the book. Repeatedly, the author makes clear the importance of open communication and understanding of the roles of all the other parts of the company. A programmer, for example, must understand management’s plan, what customers want, how a product will be marketed and shipped, what competitors are doing, etc. Only then can a “wild talent” be effective.
“The Autodesk File” is a much-needed reminder that human beings are still capable of doing awe-inspiring, creative and even noble things; that they can voluntarily collaborate and, working in their own self-interest, set off endless waves of non-zero sum games in their wakes. This is also a success story, then, a chain of decisions, clearly rooted in the philosophy of Classical Liberalism – in some of its untidy and altogether messy human details. Without aiming to, this story affirms the primacy and value of the individual, both as producer and consumer; it convincingly shows that communication – positive and negative feedback – between individual, voluntary buyers and sellers – is the essence of what a market is. This is in contrast to statist dirigisme, where aggregate data and arrogance rule, in derogation of the value of the individual.
Diametrically opposed to today’s received collectivist wisdom, “The Autodesk File” shows how individuals create markets where none previously existed, to the betterment of all. From those roots emerge timeless operating principles: 1. build the best products, period – with open architecture so as to invite developers to customize and find as yet undreamed uses (an essential form of feedback for software companies), thereby further expanding markets; 2. invite, quickly assess and respond to this feedback from customers in the form of improved new releases; 3. employ owners, not merely ‘investors’ – pay well for results – with ownership whenever possible. This is a story which demonstrates the huge difference between owners, whose time preference is long, and investors focused only on the forecast for the next fiscal quarter. The tyranny of industry analysts, a form of economic lunacy where short time preference is brutally and pervasively enforced on behalf of “investors,” operates so as to threaten the short-term existence of sound public companies which actually attempt to pursue the best long-term business practices.
In a somewhat philosophic interview around the tenth anniversary of Autodesk, the author/editor describes the operation of a new “design cult” of engineering as a “form of creationism, which thinks its members are so omniscient that they have no need for market-driven evolution to perfect their efforts.” This view, coupled with the information letters, again displays an essential humility in the ethos of Autodesk. Management must lead toward explicit goals. Every part of the organization must understand and communicate with all others, particularly as it affects product development. This is not the typical hierarchical corporate ethos. Neither is it anarchy. Management must lead, but not without listening, understanding and explaining.
It is difficult for this writer to refrain from drawing parallels to the author’s description of this “design cult” of engineering. Such an attitude is not surprising, given that we live in a society which increasingly and officially denies the existence of a supreme being, while at the same time acting – through a “cult” of increasingly centralized authoritarian government – as though it were omniscient and omnipotent; as though its policies have no unintended consequences; as though no cost is too high to accomplish its goals, whose only feedback is its own reverberating positive-feedback echo chamber. It is hard to know which cult is imitating which. In either case, the state-erected obstacles to starting and running a business, while not emphasized, are on display in this epic. This common ethos of the state and large corporations has inevitably given us today’s pernicious corporatism.
It may be that the most significant intellectual error of our time is the belief that society can be modeled and manipulated as well as physical reality now can be, thanks in large part to private companies like Autodesk. Unlike government, though, companies are forced to relearn their limits – i.e., lessons in humility are given, at least annually, and enforced as necessary by balance sheets and owners. The fear of going out of business would be a highly salutary fear for modern government to experience. Instead of a healthy humility, however, the state often displays antipathy toward private enterprise – ironically, the very source of its own financial power. The public relations nature of this attitude likely represents either envy of private successes and/or virtue signaling in an effort to garner votes in the incessant lust for yet more power.
God is traditionally described as a jealous God. Do you suppose that our deity/government has its own version of the Ten Commandments, the first of which explains its animus toward private enterprise? “Thou shalt have no other Gods before Me…” …otherwise put, “Trust me. I’m from the government.” “I’m here to protect you from those big, bad, corporations.”
Thus, as you may see for this reader, the story of Autodesk led to much contemplation of human nature and the whole spectrum our interactions – both voluntary and coercive. It is an inspiring and epic tale of the utility and nobility of voluntary cooperation.
“The Autodesk File” is in the public domain. It is available in several downloadable versions. All formats are accessible here: http://www.fourmilab.ch/autofile/
I write a weekly book review for the Daily News of Galveston County. (It is not the biggest daily newspaper in Texas, but it is the oldest.) My review normally appears Wednesdays. When it appears, I post the review here on the following Sunday.
‘Shale Boom’ an even-handed look at fracking
By MARK LARDAS
July 24, 2018
“Shale Boom: The Barnett Shale Play and Fort Worth,” by Diana Davids Hinton, Texas Christian University Press, 2018, 192 pages, $30
Twenty years ago, the United States was running out of oil and gas. Fracking changed everything. Today, the United States is the world’s largest producer of petroleum products.
“Shale Boom: The Barnett Shale Play and Fort Worth,” by Diana Davids Hinton tells the history of a key part of that transformation. It examines how the Barnett Shale helped trigger the fracking revolution, and explores its consequences.
Hinton puts fracking in its historical context. It was not new. Some form of fracturing was done as early as the 1920s. This included injecting liquids into wells under high pressure — hydraulic fracturing. Hinton reveals what was new. The Barnett Shale is a large but narrow layer of oil-bearing rock beneath Fort Worth and the area west of it. With the fracking techniques of the 1980s and 1990s, wells there failed to yield economic levels of gas and oil.
George Mitchell owned lease rights in the area. Hinton shows how the Galveston-born Mitchell financed new fracking techniques. The new technology unlocked the Barnett Shale, producing unprecedented levels of natural gas. Directional drilling techniques developed during this century’s first decade multiplied yields.
It kick-started a shale gas boom around Fort Worth. Much of the best yield area was under Fort Worth, complicating things. What followed included some craziness of the type accompanying every oil boom. Hinton traces the action.
Hinton looks at the impact urban drilling had on both drillers and residents. She also examines the bust inevitably following a boom, the backlash against drilling, and the impact of environmental concerns fueled by fear of fracking.
Hinton is refreshingly even-handed. She looks at both the benefits and costs (societal and environmental as well as financial) of drilling and the hydrocarbon industry. She also explores both the benefits and excesses of environmental opposition to fracking. Hinton is unafraid to expose the follies and dodgy activities of individuals in both drilling and the environmental movement.
Hinton closes with an examination of the impacts of fracking — long and short term — around Fort Worth, and its global implications. “Shale Boom” is a fascinating and balanced look at what technology revolutions yield.
Mark Lardas, an engineer, freelance writer, amateur historian, and model-maker, lives in League City. His website is marklardas.com.
24% decline after earnings report:
Couldn’t have happened to a more vile company. Schadenfreudelicious.
The drawing of blood for laboratory tests is one of my least favourite parts of a routine visit to the doctor’s office. Now, I have no fear of needles and hardly notice the stick, but frequently the doctor’s assistant who draws the blood (whom I’ve nicknamed Vampira) has difficulty finding the vein to get a good flow and has to try several times. On one occasion she made an internal puncture which resulted in a huge, ugly bruise that looked like I’d slammed a car door on my arm. I wondered why they need so much blood, and why draw it into so many different containers? (Eventually, I researched this, having been intrigued by the issue during the O. J. Simpson trial; if you’re curious, here is the information.) Then, after the blood is drawn, it has to be sent off to the laboratory, which sends back the results days later. If something pops up in the test results, you have to go back for a second visit with the doctor to discuss it.
Wouldn’t it be great if they could just stick a fingertip and draw a drop or two of blood, as is done by diabetics to test blood sugar, then run all the tests on it? Further, imagine if, after taking the drop of blood, it could be put into a desktop machine right in the doctor’s office which would, in a matter of minutes, produce test results you could discuss immediately with the doctor. And if such a technology existed and followed the history of decline in price with increase in volume which has characterised other high technology products since the 1970s, it might be possible to deploy the machines into the homes of patients being treated with medications so their effects could be monitored and relayed directly to their physicians in case an anomaly was detected. It wouldn’t quite be a Star Trek medical tricorder, but it would be one step closer. With the cost of medical care rising steeply, automating diagnostic blood tests and bringing them to the mass market seemed an excellent candidate as the “next big thing” for Silicon Valley to revolutionise.
This was the vision that came to 19-year-old Elizabeth Holmes after a summer internship at the Genome Institute of Singapore following her freshman year as a chemical engineering major at Stanford. Holmes had decided on a career in entrepreneurship from an early age and, after her first semester, told her father, “No, Dad, I’m not interested in getting a Ph.D. I want to make money.” And Stanford, in the heart of Silicon Valley, was surrounded by companies started by professors and graduates who had turned inventions into vast fortunes. With only one year of college behind her, she was sure she’d found her opportunity. She showed the patent application she’d drafted for an arm patch that would diagnose medical conditions to Channing Robertson, professor of chemical engineering at Stanford, and Shaunak Roy, the Ph.D. student in whose lab she had worked as an assistant during her freshman year. Robertson was enthusiastic, and when Holmes said she intended to leave Stanford and start a company to commercialise the idea, he encouraged her. When the company was incorporated in 2004, Roy, then a newly-minted Ph.D., became its first employee and Robertson joined the board.
From the outset, the company was funded by other people’s money. Holmes persuaded a family friend, Tim Draper, a second-generation venture capitalist who had backed, among other companies, Hotmail, to invest US$ 1 million in first round funding. Draper was soon joined by Victor Palmieri, a corporate turnaround artist and friend of Holmes’ father. The company was named Theranos, from “therapy” and “diagnosis”. Elizabeth, unlike this scribbler, had a lifelong aversion to needles, and the invention she described in the business plan pitched to investors was informed by this. A skin patch would draw tiny quantities of blood without pain by means of “micro-needles”, the blood would be analysed by micro-miniaturised sensors in the patch and, if needed, medication could be injected. A wireless data link would send results to the doctor.
This concept, and Elizabeth’s enthusiastic, high-energy pitch, allowed her to recruit additional investors, raising almost US$ 6 million in 2004. But there were some who failed to be persuaded: MedVentures Associates, a firm that specialised in medical technology, turned her down after discovering she had no answers for the technical questions raised in a meeting with the partners, who had in-depth experience with diagnostic technology. This would be a harbinger of the company’s fund-raising in the future: in its entire history, not a single venture fund or investor with experience in medical or diagnostic technology would put money into the company.
Shaunak Roy, who, unlike Holmes, actually knew something about chemistry, quickly realised that Elizabeth’s concept, while appealing to the uninformed, was science fiction, not science, and no amount of arm-waving about nanotechnology, microfluidics, or laboratories on a chip would suffice to build something which was far beyond the state of the art. This led to a “de-scoping” of the company’s ambition—the first of many which would happen over succeeding years. Instead of Elizabeth’s magical patch, a small quantity of blood would be drawn from a finger stick and placed into a cartridge around the size of a credit card. The disposable cartridge would then be placed into a desktop “reader” machine, which would, using the blood and reagents stored in the cartridge, perform a series of analyses and report the results. This was originally called Theranos 1.0, but after a series of painful redesigns, was dubbed the “Edison”. This was the prototype Theranos ultimately showed to potential customers and prospective investors.
This was a far cry from the original ambitious concept. The hundreds of laboratory tests doctors can order are divided into four major categories: immunoassays, general chemistry, hæmatology, and DNA amplification. In immunoassay tests, blood plasma is exposed to an antibody that detects the presence of a substance in the plasma. The antibody contains a marker which can be detected by its effect on light passed through the sample. Immunoassays are used in a number of common blood tests, such as the 25(OH)D assay used to test for vitamin D deficiency, but cannot perform other frequently ordered tests such as blood sugar and red and white blood cell counts. The Edison could only perform what are called “chemiluminescent immunoassays”, and thus could run only a fraction of the tests regularly ordered. The rationale for installing an Edison in the doctor’s office was dramatically reduced if it could do only some tests but still required a venous blood draw be sent off to the laboratory for the balance.
This didn’t deter Elizabeth, who combined her formidable salesmanship with arm-waving about the capabilities of the company’s products. She was working on a deal to sell four hundred Edisons to the Mexican government to cope with an outbreak of swine flu, which would generate immediate revenue. Money was much on the minds of Theranos’ senior management. By the end of 2009, the company had burned through the US$ 47 million raised in its first three rounds of funding and, without a viable product or prospects for sales, would have difficulty keeping the lights on.
But the real bonanza loomed on the horizon in 2010. Drugstore giant Walgreens was interested in expanding their retail business into the “wellness market”: providing in-store health services to their mass market clientèle. Theranos pitched them on offering in-store blood testing. Doctors could send their patients to the local Walgreens to have their blood tested from a simple finger stick and eliminate the need to draw blood in the office or deal with laboratories. With more than 8,000 locations in the U.S., if each were to be equipped with one Edison, the revenue to Theranos (including the single-use testing cartridges) would put them on the map as another Silicon Valley disruptor that went from zero to hundreds of millions in revenue overnight. But here, as well, the Elizabeth effect was in evidence. Of the 192 tests she told Walgreens Theranos could perform, fewer than half were immunoassays the Edisons could run. The rest could be done only on conventional laboratory equipment, and certainly not on a while-you-wait basis.
Walgreens wasn’t the only potential saviour on the horizon. Grocery godzilla Safeway, struggling with sales and earnings which seemed to have reached a peak, saw in-store blood testing with Theranos machines as a high-margin profit centre. They loaned Theranos US$ 30 million and began to plan for installation of blood testing clinics in their stores.
But there was a problem, and as the months wore on, this became increasingly apparent to people at both Walgreens and Safeway, although dismissed by those in senior management under the spell of Elizabeth’s reality distortion field. Deadlines were missed. Simple requests, such as A/B comparison tests run on the Theranos hardware and at conventional labs, were first refused, then postponed, then run but results not disclosed. The list of tests which could be run, how blood for them would be drawn, and how they would be processed seemed to dissolve into fog whenever specific requests were made for this information, which was essential for planning the in-store clinics.
There was, indeed, a problem, and it was pretty severe, especially for a start-up which had burned through US$ 50 million and sold nothing. The product didn’t work. Not only could the Edison run only a fraction of the tests its prospective customers had been led by Theranos to believe it could, for those it did run, the results were wildly unreliable. The small quantity of blood used in the test introduced random errors due to dilution of the sample; the small tubes in the cartridge were prone to clogging; and capillary blood collected from a finger stick was prone to errors due to “hæmolysis”, the rupture of red blood cells, which is minimal in a venous blood draw but so prevalent in finger stick blood it could lead to some tests producing values which indicated the patient was dead.
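The dilution problem comes down to simple arithmetic: if an analyser reads a diluted sample with roughly fixed absolute noise, then multiplying the reading back up by the dilution factor multiplies the noise by the same factor. Here is a toy Monte Carlo sketch of that effect (my own illustration, not anything from the book; all of the numbers are made up):

```python
import random
import statistics

def simulate(true_conc, dilution, noise_sd=2.0, n=10_000, seed=42):
    """Spread (standard deviation) of reported results when a sample at
    true_conc is diluted, read by an analyser with fixed absolute noise
    (noise_sd), and the reading is scaled back up by the dilution factor."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        diluted = true_conc / dilution
        reading = diluted + rng.gauss(0, noise_sd)  # instrument noise on the diluted sample
        results.append(reading * dilution)          # scale back to undiluted units
    return statistics.stdev(results)

print(simulate(200, 1))   # spread of an undiluted run, roughly noise_sd
print(simulate(200, 10))  # 10x dilution: roughly 10x the spread
```

The point is only the proportionality: running an analyser far below its approved dilution range trades away precision, which is consistent with the erratic results described here.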
Meanwhile, people who came to work at Theranos quickly became aware that it was not a normal company, even by the eccentric standards of Silicon Valley. There was an obsession with security, with doors opened by badge readers; logging of employee movement; information restricted to narrow silos prohibiting collaboration between, say, engineering and marketing which is the norm in technological start-ups; monitoring of employee Internet access, E-mail, and social media presence; a security detail of menacing-looking people in black suits and earpieces (which eventually reached a total of twenty); a propensity of people, even senior executives, to “vanish”, Stalin-era purge-like, overnight; and a climate of fear that anybody, employee or former employee, who spoke about the company or its products to an outsider, especially the media, would be pursued, harassed, and bankrupted by lawsuits. There aren’t many start-ups whose senior scientists are summarily demoted and subsequently commit suicide. That happened at Theranos. The company held no memorial for him.
Throughout all of this, a curious presence in the company was Ramesh (“Sunny”) Balwani, a Pakistani-born software engineer who had made a fortune of more than US$ 40 million in the dot-com boom and cashed out before the bust. He joined Theranos in late 2009 as Elizabeth’s second in command and rapidly became known as a hatchet man, domineering boss, and clueless when it came to the company’s key technologies (on one occasion, an engineer mentioned a robotic arm’s “end effector”, after which Sunny would frequently speak of its “endofactor”). Unbeknownst to employees and investors, Elizabeth and Sunny had been living together since 2005. Such an arrangement would be a major scandal in a public company, but even in a private firm, concealing such information from the board and investors is a serious breach of trust.
Let’s talk about the board, shall we? Elizabeth was not only persuasive, but well-connected. She would parlay one connection into another, and before long had recruited many prominent figures including:
- George Shultz (former U.S. Secretary of State)
- Henry Kissinger (former U.S. Secretary of State)
- Bill Frist (former U.S. Senator and medical doctor)
- James Mattis (General, U.S. Marine Corps)
- Riley Bechtel (Chairman and former CEO, Bechtel Group)
- Sam Nunn (former U.S. Senator)
- Richard Kovacevich (former Wells Fargo chairman and CEO)
Later, super-lawyer David Boies would join the board, and lead its attacks against the company’s detractors. It is notable that, as with its investors, not a single board member had experience in medical or diagnostic technology. Bill Frist was an M.D., but his speciality was heart and lung transplants, not laboratory tests.
By 2014, Elizabeth Holmes had come onto the media radar. Photogenic, articulate, and with a story of high-tech disruption of an industry much in the news, she began to be featured as the “female Steve Jobs”, which must have pleased her, since she affected black turtlenecks, kale shakes, and even a car with no license plates to emulate her role model. She appeared on the cover of Fortune in January 2014, made the Forbes 400 list of the wealthiest Americans shortly thereafter, was featured in puff pieces in business and general market media, and was named by Time as one of the hundred most influential people in the world. The year 2014 closed with another glowing profile in the New Yorker. This would be the beginning of the end, as it happened to be read by somebody who actually knew something about blood testing.
Adam Clapper, a pathologist in Missouri, spent his spare time writing Pathology Blawg, with a readership of practising pathologists. Clapper read what Elizabeth was claiming to do with a couple of drops of blood from a finger stick and it didn’t pass the sniff test. He wrote a sceptical piece on his blog and, as it passed from hand to hand, he became a lightning rod for others dubious of Theranos’ claims, including those with direct or indirect experience with the company. Earlier, he had helped a Wall Street Journal reporter, John Carreyrou, comprehend the tangled web of medical laboratory billing, and he decided to pass the tip on to Carreyrou, the author of this book.
Thus began the unravelling of one of the greatest scams and scandals in the history of high technology, Silicon Valley, and venture investing. At the peak, privately-held Theranos was valued at around US$ 9 billion, with Elizabeth Holmes holding around half of its common stock, and with one of those innovative capital structures of which Silicon Valley is so fond, 99.7% of the voting rights. Altogether, over its history, the company raised around US$ 900 million from investors (including US$ 125 million from Rupert Murdoch in the US$ 430 million final round of funding). Most of the investors’ money was ultimately spent on legal fees as the whole fairy castle crumbled.
The story of the decline and fall is gripping, involving the grandson of a Secretary of State, gumshoes following whistleblowers and reporters, what amounts to legal terrorism by the ever-slimy David Boies, courageous people who stood their ground in the interest of scientific integrity against enormous personal and financial pressure, and the saga of one of the most cunning and naturally talented confidence women ever, equipped with only two semesters of freshman chemical engineering, who managed to raise and blow through almost a billion dollars of other people’s money without checking off the first box on the conventional start-up check list: “Build the product”.
I have, in my career, met three world-class con men. Three times, I (just barely) managed to pick up the warning signs and beg my associates to walk away. Each time I was ignored. After reading this book, I am absolutely sure that had Elizabeth Holmes pitched me on Theranos (of which I had never heard before the fraud began to be exposed), I would have been taken in. Walker’s law is “Absent evidence to the contrary, assume everything is a scam”. A corollary is “No matter how cautious you are, there’s always a confidence man (or woman) who can scam you if you don’t do your homework.”
Carreyrou, John. Bad Blood. New York: Alfred A. Knopf, 2018. ISBN 978-1-984833-63-1.
Here is Elizabeth Holmes at Stanford in 2013, when Theranos was riding high and she was doing her “female Steve Jobs” act.
This is a CNN piece, filmed after the Theranos scam had begun to collapse, in which you can still glimpse the Elizabeth Holmes reality distortion field at full intensity directed at CNN medical correspondent Sanjay Gupta. There are several curious things about this video. The machine that Gupta is shown is the “miniLab”, a prototype second-generation machine which never worked acceptably, not the Edison, which was actually used in the Walgreens and Safeway tests. Gupta’s blood is drawn and tested, but the process used to perform the test is never shown. The result reported is a cholesterol test, but the Edison cannot perform such tests. In the plans for the Walgreens and Safeway roll-outs, such tests were performed on purchased Siemens analysers which had been secretly hacked by Theranos to work with blood diluted well below their regulatory-approved specifications (the dilution was required due to the small volume of blood from the finger stick). Since the miniLab never really worked, the odds are that Gupta’s blood was tested on one of the Siemens machines, not a Theranos product at all.
In a June 2018 interview, author John Carreyrou recounts the story of Theranos and his part in revealing the truth. There is substantial background information in the question and answer period which does not appear in the book.
If you are a connoisseur of the art of the con, here is a masterpiece. After the Wall Street Journal exposé had broken, after retracting tens of thousands of blood tests, and after Theranos had been banned from running a clinical laboratory by its regulators, Holmes got up before an audience of 2500 people at the meeting of the American Association of Clinical Chemistry and turned up the reality distortion field to eleven. Watch a master at work. She comes on the stage at the six minute mark.