Standards II (revisited)

OK, the last post about standards drifted way off topic, or so it seemed to some. I tried to get a screen grab of an interview with the owner as seen on FOX News. Since I could not get a direct link to the clip, I grabbed it and reduced it in size to post. Unfortunately, the video clip is still too large even after I reduced the resolution by 50%, so here is the audio from the clip. The video just included stock footage that many have seen before. The point is that he made the effort to exceed the standards: deeper pilings, special windows, and accepting the fact that the first floor would be swept away.



No bear yet…

Boy am I eating crow….

From Oct 16: the first at 12:45 AM, the second at 3:31 AM, and finally at 8:08 AM.

I’m not sure how big the rack on the buck was, maybe a 7-pointer.

My wife is gloating over my invisible bear…


Weird thoughts on human reproduction

I have no citations for anything about to be written here. I’m not feeling particularly well, so extra lazy is the order of the day.

This is just a thought… I could flesh it out, but I like discussion more than research these days :p Did I mention I’m lazy?

Anyway, a while back I read two articles on recent discoveries in genetic science – the first was that we get more DNA from our fathers than from our mothers. The second was that environmental factors can actually alter our genetic markers. Fascinating stuff.

I actually really liked these two studies because of the connections – maybe we get more DNA from our fathers because their genetic material is more “up-to-date” due to the constant death and creation of sperm? Ergo, it is better matched to current living conditions than the maternal genetic material that was produced some 20-40 years ago (eggs are created in utero, which is why maternal age is a big deal in having babies).

So, what if some of the weird “genetic” “born this way” things in current populations are affected by the growing popularity of in vitro fertilization or sperm banks? These methods use preserved, aged genetic material that has not been affected by current environmental factors.

Thoughts?



Fascinating Online Course: “Simulation Neuroscience”

In at least one of his book reviews, John Walker has linked to the “Blue Brain Project.” This is a multidisciplinary approach to simulating (or perhaps instead emulating) the function of the mammalian (eventually the human) brain. It is being conducted in John’s back yard at the École Polytechnique Fédérale de Lausanne. It is a very ambitious project, presently studying slices of rat somatosensory cerebral cortex containing about 31,000 neurons.

Once upon a time, I was a graduate student in anatomy at Rutgers Medical School studying the development and innervation of the superior oblique muscle of the newborn rat using enzyme staining of the motor end-plates and electron microscopy. I also studied neuroscience (then known as neuroanatomy and neurophysiology) and taught gross anatomy.

All the while, I fervently tried to piece together, bottom-up, all the parts of the nervous system, beginning with individual neurons (nerve cells) – their structure, electrical function and connections. As well, I learned the major connections (tracts) between various parts of the central nervous system (=the brain and the spinal cord). In my naiveté, I believed that if I only studied hard enough it would all come together, that an emergent understanding, a gestalt, would result. Wrong. It never happened. I now know that was because my brain lacked the memory and processing power.

What I really wanted was the digital simulation now being developed at the Blue Brain Project, which is using the entire scientific literature as it pertains to neuroscience and very sophisticated in-house experiments in an attempt to assemble sparse data (it is impossible to know everything about the brain, so the researchers look for the minimally-necessary data to design algorithms) into digital simulations. These are then tested iteratively for conformance to observed biological functions.
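To give a concrete, if drastically simplified, sense of what a digital simulation of a single neuron looks like, here is a minimal leaky integrate-and-fire model in Python. This is a textbook toy, nothing like the detailed, data-constrained models the Blue Brain Project actually builds, and every parameter value below is an illustrative assumption rather than a measurement.

    # Minimal leaky integrate-and-fire neuron: a toy example of a "digital
    # simulation" of a neuron.  All parameter values are illustrative
    # assumptions, not measured data.
    def simulate_lif(i_input_na, t_stop_ms=100.0, dt_ms=0.1):
        """Integrate membrane voltage under a constant input current; return spike times (ms)."""
        v_rest, v_reset, v_threshold = -70.0, -70.0, -54.0   # mV (assumed)
        tau_m, r_m = 20.0, 10.0        # membrane time constant (ms), resistance (MOhm)
        v, t, spikes = v_rest, 0.0, []
        while t < t_stop_ms:
            # dV/dt = (-(V - V_rest) + R_m * I) / tau_m  -- the standard LIF equation
            v += (-(v - v_rest) + r_m * i_input_na) * dt_ms / tau_m
            if v >= v_threshold:       # threshold crossed: record a spike and reset
                spikes.append(round(t, 1))
                v = v_reset
            t += dt_ms
        return spikes

    print(simulate_lif(i_input_na=2.0))   # spike times for a 2 nA step current

Real cortical simulations replace this single equation with thousands of coupled compartments per cell, which is part of why supercomputers are required.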

All of which is preface to telling you about a Massive Open Online Course (MOOC – a format new to me) which is very worthwhile. Available for free online is “Simulation Neuroscience.” I have watched the first few hours of videos, describing the general approach and explaining the basic biology. I am hoping that when it gets around to the data, programming, and computing aspects (it requires supercomputers), I will still be able to follow. For the moment, at least, I am very excited to have renewed what was once an intense intellectual curiosity, one left unfulfilled in the past for want of the technology required to pursue it. For those interested, I recommend checking it out.


Soyuz MS-10 Launch Failure

At 08:40 UTC on 2018-10-11, Soyuz MS-10 launched toward the International Space Station with a crew of two on board: Commander Aleksey Ovchinin of the Russian Space Agency and Flight Engineer Nick Hague of NASA.

Shortly after the separation of the four first stage boosters, around two minutes into the flight, Russian mission control began to report “failure”.  The animation shown on NASA TV continued to show a nominal mission.  There were several additional reports of failure, including the time.

Shortly thereafter, Ovchinin reported a ballistic re-entry had been selected, and then that they were weightless.  Then, he reported G forces building to 6.5 (consistent with a steep ballistic re-entry), and then declining to something over two [I think 2.5 or 2.7, but I do not have a recording], which would indicate having passed through the peak of re-entry braking.

There have been no reports from the crew since then.  Russian mission control reports that recovery helicopters have been dispatched to the predicted landing zone, and are expected to take around 90 minutes to arrive.  The launch was on a northeast azimuth, so landing would be  expected to be in northern Russia.

After a long delay (presumably because the descent capsule had passed over the horizon from the tracking stations), rescue forces reported that they had contacted the crew by radio.  The crew reported that they had landed and were in good condition.

I will add updates in the comments as events unfold.


Bears or deer?

For the fifth time in as many weeks, a night creature has knocked down the fencing surrounding a portion of my side yard. I have this area fenced off for our doggies. Granted, it’s not a very sturdy fence, but the doggies weigh less than 20 lbs (9.07 kg), and their mass would not be enough to bring it down.

I have a bet with my wife as to whether it’s a deer or a bear. I’m saying a bear. But how to prove it? I moved one camera from my security system to watch the back yard. It’s motion activated, and, well, we’ll see.

View from the camera while the doggies were out last night at 2239 (10:39 PM). The spotlights were on; we typically dim them slightly overnight. The fence extends from the gate on the right to a corner at the tree line, and then to the left to intersect with a more solid chain-link fence at my neighbor’s shed in the upper left.

In the meantime, I put in a call to the game commission in hopes they would bring a humane bear trap to my area to catch the perpetrator. They asked if I have a bird feeder or trash cans in the area that would be of interest to a bear. I said there is no bird feeder, and the trash cans are at the other end of my property, liberally dosed with ammonia to keep bears away from them. I’m expecting a call back from them today so I can further explain my plight.

So what will it be, a bear or a deer?



Saturday Night Science: Life After Google

“Life after Google” by George Gilder

In his 1990 book Life after Television, George Gilder predicted that the personal computer, then mostly boxes that sat on desktops and worked in isolation from one another, would become more personal, mobile, and be used more to communicate than to compute. In the 1994 revised edition of the book, he wrote, “The most common personal computer of the next decade will be a digital cellular phone with an IP address … connecting to thousands of databases of all kinds.” In contemporary speeches he expanded on the idea, saying, “it will be as portable as your watch and as personal as your wallet; it will recognize speech and navigate streets; it will collect your mail, your news, and your paycheck.” In 2000, he published Telecosm, where he forecast that the building out of a fibre optic communication infrastructure and the development of successive generations of spread spectrum digital mobile communication technologies would effectively cause the cost of communication bandwidth (the quantity of data which can be transmitted in a given time) to asymptotically approach zero, just as the ability to pack more and more transistors on microprocessor and memory chips was doing for computing.

Clearly, when George Gilder forecasts the future of computing, communication, and the industries and social phenomena that spring from them, it’s wise to pay attention. He’s not infallible: in 1990 he predicted that “in the world of networked computers, no one would have to see an advertisement he didn’t want to see”. Oh, well. The very difference between that happy vision and the advertisement-cluttered world we inhabit today, rife with bots, malware, scams, and serial large-scale security breaches which compromise the personal data of millions of people and expose them to identity theft and other forms of fraud is the subject of this book: how we got here, and how technology is opening a path to move on to a better place.

The Internet was born with decentralisation as a central concept. Its U.S. government-funded precursor, ARPANET, was intended to research and demonstrate the technology of packet switching, in which dedicated communication lines from point to point (as in the telephone network) were replaced by switching packets, which can represent all kinds of data—text, voice, video, mail, cat pictures—from source to destination over shared high-speed data links. If the network had multiple paths from source to destination, failure of one data link would simply cause the network to reroute traffic onto a working path, and communication protocols would cause any packets lost in the failure to be automatically re-sent, preventing loss of data. The network might degrade and deliver data more slowly if links or switching hubs went down, but everything would still get through.
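As a toy illustration of that rerouting behaviour (not any real routing protocol such as OSPF or BGP, just a breadth-first search over a made-up four-node mesh), the sketch below finds a path from source to destination, and still finds one after a link fails:

    # Toy illustration of rerouting around a failed link.  Real networks use
    # routing protocols (OSPF, BGP); this is just breadth-first search on a
    # small made-up mesh.
    from collections import deque

    def find_path(links, src, dst):
        """Return a list of nodes from src to dst over undirected links, or None."""
        graph = {}
        for a, b in links:
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in graph.get(path[-1], ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    links = [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")]   # a small mesh
    print(find_path(links, "A", "D"))                          # ['A', 'B', 'D']
    surviving = [l for l in links if l != ("B", "D")]          # the B-D link goes down
    print(find_path(surviving, "A", "D"))                      # ['A', 'C', 'D']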

This was very attractive to military planners in the Cold War, who worried about a nuclear attack decapitating their command and control network by striking one or a few locations through which their communications funnelled. A distributed network, of which ARPANET was the prototype, would be immune to this kind of top-down attack because there was no top: it was made up of peers, spread all over the landscape, all able to switch data among themselves through a mesh of interconnecting links.

As the ARPANET grew into the Internet and expanded from a small community of military, government, university, and large company users into a mass audience in the 1990s, this fundamental architecture was preserved, but in practice the network bifurcated into a two tier structure. The top tier consisted of the original ARPANET-like users, plus “Internet Service Providers” (ISPs), who had top-tier (“backbone”) connectivity, and then resold Internet access to their customers, who mostly initially connected via dial-up modems. Over time, these customers obtained higher bandwidth via cable television connections, satellite dishes, digital subscriber lines (DSL) over the wired telephone network, and, more recently, mobile devices such as cellular telephones and tablets.

The architecture of the Internet remained the same, but this evolution resulted in a weakening of its peer-to-peer structure. The approaching exhaustion of 32 bit Internet addresses (IPv4) and the slow deployment of its successor (IPv6) meant most small-scale Internet users did not have a permanent address where others could contact them. In an attempt to shield users from the flawed security model and implementation of the software they ran, their Internet connections were increasingly placed behind firewalls and subjected to Network Address Translation (NAT), which made it impossible to establish peer to peer connections without a third party intermediary (which, of course, subverts the design goal of decentralisation). While on the ARPANET and the original Internet every site was a peer of every other (subject only to the speed of their network connections and computer power available to handle network traffic), the network population now became increasingly divided into producers or publishers (who made information available), and consumers (who used the network to access the publishers’ sites but did not publish themselves).

While in the mid-1990s it was easy (or as easy as anything was in that era) to set up your own Web server and publish anything you wished, now most small-scale users were forced to employ hosting services operated by the publishers to make their content available. Services such as AOL, Myspace, Blogger, Facebook, and YouTube were widely used by individuals and companies to host their content, while those wishing their own apparently independent Web presence moved to hosting providers who supplied, for a fee, the servers, storage, and Internet access used by the site.

All of this led to a centralisation of data on the Web, which was accelerated by the emergence of the high speed fibre optic links and massive computing power upon which Gilder had based his 1990 and 2000 forecasts. Both of these came with great economies of scale: it cost a company like Google or Amazon much less per unit of computing power or network bandwidth to build a large, industrial-scale data centre located where electrical power and cooling were inexpensive and linked to the Internet backbone by multiple fibre optic channels, than it cost an individual Internet user or small company with their own server on premises and a modest speed link to an ISP. Thus it became practical for these Goliaths of the Internet to suck up everybody’s data and resell their computing power and access at attractive prices.

As an example of the magnitude of the economies of scale we’re talking about, when I migrated the hosting of my Fourmilab.ch site from my own on-site servers and Internet connection to an Amazon Web Services data centre, my monthly bill for hosting the site dropped by a factor of fifty—not fifty percent, one fiftieth the cost, and you can bet Amazon’s making money on the deal.

This tremendous centralisation is the antithesis of the concept of ARPANET. Instead of a worldwide grid of redundant data links and data distributed everywhere, we have a modest number of huge data centres linked by fibre optic cables carrying traffic for millions of individuals and enterprises. A couple of submarines full of Trident D5s would probably suffice to reset the world, computer network-wise, to 1970.

As this concentration was occurring, the same companies who were building the data centres were offering more and more services to users of the Internet: search engines; hosting of blogs, images, audio, and video; E-mail services; social networks of all kinds; storage and collaborative working tools; high-resolution maps and imagery of the world; archives of data and research material; and a host of others. How was all of this to be paid for? Those giant data centres, after all, represent a capital investment of tens of billions of dollars, and their electricity bills are comparable to those of an aluminium smelter. Due to the architecture of the Internet or, more precisely, missing pieces of the puzzle, a fateful choice was made in the early days of the build-out of these services which now pervade our lives, and we’re all paying the price for it. So far, it has allowed the few companies in this data oligopoly to join the ranks of the largest, most profitable, and most highly valued enterprises in human history, but they may be built on a flawed business model and foundation vulnerable to disruption by software and hardware technologies presently emerging.

The basic business model of what we might call the “consumer Internet” (as opposed to businesses who pay to host their Web presence, on-line stores, etc.) has, with few exceptions, evolved to be what the author calls the “Google model” (although it predates Google): give the product away and make money by afflicting its users with advertisements (which are increasingly targeted to them through information collected from the user’s behaviour on the network through intrusive tracking mechanisms). The fundamental flaws of this are apparent to anybody who uses the Internet: the constant clutter of advertisements, with pop-ups, pop-overs, auto-play video and audio, flashing banners, incessant requests to allow tracking “cookies” or irritating notifications, and the consequent arms race between ad blockers and means to circumvent them, with browser developers (at least those not employed by those paid by the advertisers, directly or indirectly) caught in the middle. There are even absurd Web sites which charge a subscription fee for “membership” and then bombard these paying customers with advertisements that insult their intelligence. But there is a fundamental problem with “free”—it destroys the most important channel of communication between the vendor of a product or service and the customer: the price the customer is willing to pay. Deprived of this information, the vendor is in the same position as a factory manager in a centrally planned economy who has no idea how many of each item to make because his orders are handed down by a planning bureau equally clueless about what is needed in the absence of a price signal. In the end, you have freight cars of typewriter ribbons lined up on sidings while customers wait in line for hours in the hope of buying a new pair of shoes. Further, when the user is not the customer (the one who pays), and especially when a “free” service verges on monopoly status like Google search, Gmail, Facebook, and Twitter, there is little incentive for providers to improve the user experience or be responsive to user requests and needs. Users are subjected to the endless torment of buggy “beta” releases, capricious change for the sake of change, and compromises in the user experience on behalf of the real customers—the advertisers. Once again, this mirrors the experience of centrally-planned economies where the market feedback from price is absent: to appreciate this, you need only compare consumer products from the 1970s and 1980s manufactured in the Soviet Union with those from Japan.

The fundamental flaw in Karl Marx’s economics was his belief that the industrial revolution of his time would produce such abundance of goods that the problem would shift from “production amid scarcity” to “redistribution of abundance”. In the author’s view, the neo-Marxists of Silicon Valley see the exponentially growing technologies of computing and communication providing such abundance that they can give away its fruits in return for collecting and monetising information collected about their users (note, not “customers”: customers are those who pay for the information so collected). Once you grasp this, it’s easier to understand the politics of the barons of Silicon Valley.

The centralisation of data and information flow in these vast data silos creates another threat to which a distributed system is immune: censorship or manipulation of information flow, whether by a coercive government or ideologically-motivated management of the companies who provide these “free” services. We may never know who first said “The Internet treats censorship as damage and routes around it” (the quote has been attributed to numerous people, including two personal friends, so I’m not going there), but it’s profound: the original decentralised structure of the ARPANET/Internet is as robust against censorship as it is in the face of nuclear war. If one or more nodes on the network start to censor information or refuse to forward it on communication links it controls, the network routing protocols simply assume that node is down and send data around it through other nodes and paths which do not censor it. On a network with a multitude of nodes and paths among them, owned by a large and diverse population of operators, it is extraordinarily difficult to shut down the flow of information from a given source or viewpoint; there will almost always be an alternative route that gets it there. (Cryptographic protocols and secure and verified identities can similarly avoid the alteration of information in transit or forging information and attributing it to a different originator; I’ll discuss that later.) As with physical damage, top-down censorship does not work because there’s no top.

But with the current centralised Internet, the owners and operators of these data silos have enormous power to put their thumbs on the scale, tilting opinion in their favour and blocking speech they oppose. Google can push down the page rank of information sources of which they disapprove, so few users will find them. YouTube can “demonetise” videos because they dislike their content, cutting off their creators’ revenue stream overnight with no means of appeal, or they can outright ban creators from the platform and remove their existing content. Twitter routinely “shadow-bans” those with whom they disagree, causing their tweets to disappear into the void, and outright banishes those more vocal. Internet payment processors and crowd funding sites enforce explicit ideological litmus tests on their users, and revoke long-standing commercial relationships over legal speech. One might restate the original observation about the Internet as “The centralised Internet treats censorship as an opportunity and says, ‘Isn’t it great!’ ” Today there’s a top, and those on top control the speech of everything that flows through their data silos.

This pernicious centralisation and “free” funding by advertisement (which is fundamentally plundering users’ most precious possessions: their time and attention) were in large part the consequence of the Internet’s lacking three fundamental architectural layers: security, trust, and transactions. Let’s explore them.

Security. Essential to any useful communication system, security simply means that communications between parties on the network cannot be intercepted by third parties, modified en route, or otherwise manipulated (for example, by changing the order in which messages are received). The communication protocols of the Internet, based on the OSI model, had no explicit security layer. It was expected to be implemented outside the model, across the layers of protocol. On today’s Internet, security has been bolted on, largely through the Transport Layer Security (TLS) protocols (which, due to history, have a number of other commonly used names, and are most often encountered in the “https:” URLs by which users access Web sites). But because TLS was bolted on rather than designed in from the bottom up, and because it “just grew,” it has been the locus of numerous security flaws which put software that employs it at risk. Further, TLS is a tool which must be used by application designers with extreme care in order to deliver security to their users. Even if TLS were completely flawless, it is very easy to misuse it in an application and compromise users’ security.
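As a small illustration of how easily TLS is misused in an application, here is a Python sketch using only the standard library (the URL is a placeholder). The default context verifies the server’s certificate and hostname; the all-too-common “workaround” of disabling verification still appears to work, but silently discards exactly the guarantees TLS exists to provide.

    # TLS used correctly vs. misused, with Python's standard library only.
    # The URL is a placeholder; substitute any HTTPS site.
    import ssl
    import urllib.request

    url = "https://example.com/"

    # Correct: the default context verifies the certificate chain and hostname.
    good_ctx = ssl.create_default_context()
    with urllib.request.urlopen(url, context=good_ctx) as resp:
        print("verified connection:", resp.status)

    # Misuse: disabling verification still "works", but anyone who can intercept
    # the connection can now impersonate the server undetected.
    bad_ctx = ssl.create_default_context()
    bad_ctx.check_hostname = False
    bad_ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(url, context=bad_ctx) as resp:
        print("unverified connection (do not do this):", resp.status)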

Trust. As indispensable as security is knowing to whom you’re talking. For example, when you connect to your bank’s Web site, how do you know you’re actually talking to their server and not some criminal whose computer has spoofed your computer’s domain name system server to intercept your communications and who, the moment you enter your password, will be off and running to empty your bank accounts and make your life a living Hell? Once again, trust has been bolted on to the existing Internet through a rickety system of “certificates” issued mostly by large companies for outrageous fees. And, as with anything centralised, it’s vulnerable: in 2016, one of the top-line certificate vendors was compromised, requiring myriad Web sites (including this one) to re-issue their security certificates.

Transactions. Business is all about transactions; if you aren’t doing transactions, you aren’t in business or, as Gilder puts it, “In business, the ability to conduct transactions is not optional. It is the way all economic learning and growth occur. If your product is ‘free,’ it is not a product, and you are not in business, even if you can extort money from so-called advertisers to fund it.” The present-day Internet has no transaction layer, even bolted on. Instead, we have more silos and bags hanging off the side of the Internet called PayPal, credit card processing companies, and the like, which try to put a Band-Aid over the suppurating wound which is the absence of a way to send money over the Internet in a secure, trusted, quick, efficient, and low-overhead manner. The need for this was perceived long before ARPANET. In Project Xanadu, founded by Ted Nelson in 1960, rule 9 of the “original 17 rules” was, “Every document can contain a royalty mechanism at any desired degree of granularity to ensure payment on any portion accessed, including virtual copies (‘transclusions’) of all or part of the document.” While defined in terms of documents and quoting, this implied the existence of a micropayment system which would allow compensating authors and publishers for copies and quotations of their work with a granularity as small as one character, and could easily be extended to cover payments for products and services. A micropayment system must be able to handle very small payments without crushing overhead, extremely quickly, and transparently (without the Japanese tea ceremony that buying something on-line involves today). As originally envisioned by Ted Nelson, as you read documents, their authors and publishers would be automatically paid for their content, including payments to the originators of material from others embedded within them. As long as the total price for the document was less than what I termed the user’s “threshold of paying”, this would be completely transparent (a user would set the threshold in the browser: if zero, they’d have to approve all payments). There would be no need for advertisements to support publication on a public hypertext network (although publishers would, of course, be free to adopt that model if they wished). If implemented in a decentralised way, like the ARPANET, there would be no central strangle point where censorship could be applied by cutting off the ability to receive payments.
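Here is a minimal sketch in Python of the “threshold of paying” idea described above. The function, the per-portion prices, and the currency unit are all hypothetical; this illustrates the user-facing behaviour, not Xanadu’s actual royalty mechanism or any existing micropayment system.

    # Sketch of a "threshold of paying": each portion of a document carries a
    # tiny price, and the client pays automatically as long as the total stays
    # under a user-set threshold.  Prices and names are hypothetical.
    def settle_document(portions, threshold, approve=lambda total: False):
        """Sum per-portion micro-prices; pay silently under threshold, else ask the user."""
        total = sum(price for _author, price in portions)
        if total <= threshold:                       # transparent automatic payment
            return {"paid": True, "total": total, "prompted": False}
        return {"paid": approve(total), "total": total, "prompted": True}

    portions = [("author_a", 0.0004), ("author_b", 0.0001), ("quoted_author", 0.00002)]
    print(settle_document(portions, threshold=0.01))  # under threshold: pays without asking
    print(settle_document(portions, threshold=0.0))   # threshold zero: every payment needs approval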

So, is it possible to remake the Internet, building in security, trust, and transactions as the foundation, and replace what the author calls the “Google system of the world” with one in which the data silos are seen as obsolete, control of users’ personal data and work returns to their hands, privacy is respected and the panopticon snooping of today is seen as a dark time we’ve put behind us, and the pervasive and growing censorship by plutocrat ideologues and slaver governments becomes impotent and obsolete? George Gilder responds “yes”, and in this book identifies technologies already existing and being deployed which can bring about this transformation.

At the heart of many of these technologies is the concept of a blockchain, an open, distributed ledger which records transactions or any other form of information in a permanent, public, and verifiable manner. Originally conceived as the transaction ledger for the Bitcoin cryptocurrency, it provided the first means of solving the double-spending problem (how do you keep people from spending a unit of electronic currency twice) without the need for a central server or trusted authority, and hence without a potential choke-point or vulnerability to attack or failure. Since the launch of Bitcoin in 2009, blockchain technology has become a major area of research, with banks and other large financial institutions, companies such as IBM, and major university research groups exploring applications with the goals of drastically reducing transaction costs, improving security, and hardening systems against single-point failure risks.
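For readers who have never looked inside one, here is a toy hash-linked ledger in Python showing the core property a blockchain provides: each block commits to the hash of its predecessor, so altering any recorded entry breaks every later link. Everything that makes Bitcoin work at scale (proof of work, digital signatures, peer-to-peer consensus) is deliberately left out.

    # Toy hash-linked ledger.  Real blockchains add proof of work, digital
    # signatures, and a consensus protocol; none of that is modelled here.
    import hashlib
    import json

    def block_hash(block):
        """Deterministic hash of a block's contents."""
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append_block(chain, data):
        prev = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"index": len(chain), "data": data, "prev_hash": prev})

    def verify(chain):
        """Check that every block still points at the hash of its predecessor."""
        return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
                   for i in range(1, len(chain)))

    chain = []
    append_block(chain, "alice pays bob 5")
    append_block(chain, "bob pays carol 2")
    append_block(chain, "carol pays dave 1")
    print(verify(chain))                      # True: the ledger is internally consistent
    chain[1]["data"] = "bob pays carol 200"   # tamper with an earlier entry...
    print(verify(chain))                      # False: the later links no longer match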

Applied to the Internet, blockchain technology can provide security and trust (through the permanent publication of public keys which identify actors on the network), and a transaction layer able to efficiently and quickly execute micropayments without the overhead, clutter, friction, and security risks of existing payment systems. By necessity, present-day blockchain implementations are add-ons to the existing Internet, but as the technology matures and is verified and tested, it can move into the foundations of a successor system, based on the same lower-level protocols (and hence compatible with the installed base), but eventually supplanting the patched-together architecture of the Domain Name System, certificate authorities, and payment processors, all of which represent vulnerabilities of the present-day Internet and points at which censorship and control can be imposed. Technologies to watch in these areas are:

As the bandwidth available to users on the edge of the network increases through the deployment of fibre to the home and enterprise and via 5G mobile technology, the data transfer economy of scale of the great data silos will begin to erode. Early in the Roaring Twenties, the aggregate computing power and communication bandwidth on the edge of the network will equal and eventually dwarf that of the legacy data smelters of Google, Facebook, Twitter, and the rest. There will no longer be any need for users to entrust their data to these overbearing anachronisms and consent to multi-dozen page “terms of service” or endure advertising just to see their own content or share it with others. You will be in possession of your own data, on your own server or on space for which you freely contract with others, with backup and other services contracted with any other provider on the network. If your server has extra capacity, you can turn it into money by joining the market for computing and storage capacity, just as you take advantage of these resources when required. All of this will be built on the new secure foundation, so you will retain complete control over who can see your data, no longer trusting weasel-worded promises made by amorphous entities with whom you have no real contract to guard your privacy and intellectual property rights. If you wish, you can be paid for your content, with remittances made automatically as people access it. More and more, you’ll make tiny payments for content which is no longer obstructed by advertising and chopped up to accommodate more clutter. And when outrage mobs of pink hairs and soybeards (each with their own pronoun) come howling to ban you from the Internet, they’ll find nobody to shriek at and the kill switch rusting away in a derelict data centre: your data will be in your own hands with access through myriad routes. Technologies moving in this direction include:

This book provides a breezy look at the present state of the Internet, how we got here (versus where we thought we were going in the 1990s), and how we might transcend the present-day mess into something better if not blocked by the heavy hand of government regulation (the risk of freezing the present-day architecture in place by unleashing agencies like the U.S. Federal Communications Commission, which stifled innovation in broadcasting for six decades, to do the same to the Internet is discussed in detail). Although it’s way too early to see which of the many contending technologies will win out (and recall that the technically superior technology doesn’t always prevail), a survey of work in progress provides a sense for what they have in common and what the eventual result might look like.

There are many things to quibble about here. Gilder goes on at some length about how he believes artificial intelligence is all nonsense, that computers can never truly think or be conscious, and that creativity (new information in the Shannon sense) can only come from the human mind, with a lot of confused arguments from Gödel incompleteness, the Turing halting problem, and even the uncertainty principle of quantum mechanics. He really seems to believe in vitalism, that there is an élan vital which somehow infuses the biological substrate which no machine can embody. This strikes me as superstitious nonsense: a human brain is a structure composed of quarks and electrons arranged in a certain way which processes information, interacts with its environment, and is able to observe its own operation as well as external phenomena (which is all consciousness is about). Now, it may be that somehow quantum mechanics is involved in all of this, and that our existing computers, which are entirely deterministic and classical in their operation, cannot replicate this functionality, but if that’s so it simply means we’ll have to wait until quantum computing, which is already working in a rudimentary form in the laboratory, and is just a different way of arranging the quarks and electrons in a system, develops further.

He argues that while Bitcoin can be an efficient and secure means of processing transactions, it is unsuitable as a replacement for volatile fiat money because, unlike gold, the quantity of Bitcoin has an absolute limit, after which the supply will be capped. I don’t get it. It seems to me that this is a feature, not a bug. The supply of gold increases slowly as new gold is mined, and by pure coincidence the rate of increase in its supply has happened to approximate that of global economic growth. But still, the existing inventory of gold dwarfs new supply, so there isn’t much difference between a very slowly increasing supply and a static one. If you’re on a pure gold standard and economic growth is faster than the increase in the supply of gold, there will be gradual deflation because a given quantity of gold will buy more in the future. But so what? In a deflationary environment, interest rates will be low and it will be easy to fund new investment, since investors will receive money back which will be more valuable. With Bitcoin, once the entire supply is mined, supply will be static (actually, very slowly shrinking, as private keys are eventually lost, which is precisely like gold being consumed by industrial uses from which it is not reclaimed), but Bitcoin can be divided without limit (with minor and upward-compatible changes to the existing protocol). So, it really doesn’t matter if, in the greater solar system economy of the year 8537, a single Bitcoin is sufficient to buy Jupiter: transactions will simply be done in yocto-satoshis or whatever. In fact, Bitcoin is better in this regard than gold, which cannot be subdivided below the unit of one atom.
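The divisibility arithmetic is easy to check: a bitcoin is already divided into 100,000,000 satoshis, the nominal supply cap is 21 million bitcoins, and a “yocto-satoshi” (reading the SI prefix literally, 10^-24 of a satoshi) would be a hypothetical future unit:

    # Divisibility arithmetic for Bitcoin.  The satoshi is the protocol's current
    # smallest unit; the "yocto-satoshi" is a hypothetical finer unit.
    SATOSHI_PER_BTC = 10**8          # current protocol granularity
    SUPPLY_CAP_BTC = 21_000_000      # nominal cap (the true maximum is slightly below this)
    YOCTO = 10**24                   # SI yocto = 10^-24, so 10^24 yocto-units per satoshi

    print(f"satoshis at the supply cap:       {SUPPLY_CAP_BTC * SATOSHI_PER_BTC:.3e}")
    print(f"yocto-satoshis at the supply cap: {SUPPLY_CAP_BTC * SATOSHI_PER_BTC * YOCTO:.3e}")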

Gilder further argues, as he did in The Scandal of Money, that the proper dimensional unit for money is time, since that is the measure of what is required to create true wealth (as opposed to funny money created by governments or fantasy money “earned” in zero-sum speculation such as currency trading), and that existing cryptocurrencies do not meet this definition. I’ll take his word on the latter point; it’s his definition, after all, but his time theory of money is way too close to the Marxist labour theory of value to persuade me. That theory is trivially falsified by its prediction that more value is created in labour-intensive production of the same goods than by producing them in a more efficient manner. In fact, value, measured as profit, dramatically increases as the labour input to production is reduced. Over forty centuries of human history, the one thing in common among almost everything used for money (at least until our post-reality era) is scarcity: the supply is limited and it is difficult to increase it. The genius of Bitcoin and its underlying blockchain technology is that it solved the problem of how to make a digital good, which can be copied at zero cost, scarce, without requiring a central authority. That seems to meet the essential requirement to serve as money, regardless of how you define that term.

Gilder’s books have a good record for sketching the future of technology and identifying the trends which are contributing to it. He has been less successful picking winners and losers; I wouldn’t make investment decisions based on his evaluation of products and companies, but rather wait until the market sorts out those which will endure.

Gilder, George. Life after Google. Washington: Regnery Publishing, 2018. ISBN 978-1-62157-576-4.

Here is a talk by the author at the Blockstack Berlin 2018 conference which summarises the essentials of his thesis in just eleven minutes and ends with an exhortation to designers and builders of the new Internet to “tear down these walls” around the data centres which imprison our personal information.

This Uncommon Knowledge interview provides, in 48 minutes, a calmer and more in-depth exploration of why the Google world system must fail and what may replace it.


This Week’s Book Review – City Unseen

I write a weekly book review for the Daily News of Galveston County. (It is not the biggest daily newspaper in Texas, but it is the oldest.) My review normally appears Wednesdays. When it appears, I post the review here on the following Sunday.

Book Review

‘City Unseen’ shows world in new light

By MARK LARDAS

Sep 25, 2018

“City Unseen: New Visions of an Urban Planet,” by Karen C. Seto and Meredith Reba, Yale University Press, 2018, 268 pages, $35

Readers might remember the claim that the Great Wall of China is the only man-made object visible from the moon. It is not true.

“City Unseen: New Visions of an Urban Planet,” by Karen C. Seto and Meredith Reba reveals the cities of Earth as seen from space. There is plenty to see.

The book contains images of cities captured by Earth-observation satellites, primarily Landsat and ASTER. It presents images from 100 different cities, on every continent (including Antarctica), over a period of more than 40 years.

The authors open by discussing the images. They explain how the images were made, the scale of the images, and the portion of the electromagnetic spectrum captured, which ranges from the visible to the far infrared. They also explain the colors and their significance. In some infrared images, vegetation shows up bright red. Depending on the wavelength, built-up urban areas will be pink, turquoise, or blue.

From there they go on to present the 100 cities featured in the book. These are broken into three broad categories: Earth’s terrains (mountain, river, agricultural), urban imprints (featuring borders, man-made travel routes, and planned cities), and transforming the planet (showing resources, expansion, and vulnerability).

Sometimes multiple images of a city are shown. This might be done to show the effects of the seasons on Montreal, Quebec, or to show growth over time. There are stunning images of Lagos, Nigeria; Tokyo, Japan; Shenzhen, China; and Las Vegas showing these cities’ growth over a period of decades. Perhaps the most fascinating multiple imaging was that of Joplin, Missouri, showing it before it was hit by a massive tornado, immediately afterward, and four years later, after recovery.

The book has many delights and surprises. There is an image of the Korean peninsula at night, starkly contrasting the access to electricity of north and south. Houston’s road system is spectacularly displayed. Circular irrigation effects are prominent in an image of Garden City, Kansas.

“City Unseen” is a delightful book. It offers a different view of the world on which we live, from pole to equator. Read it, and you will view the world in a new light.

 Mark Lardas, an engineer, freelance writer, amateur historian, and model-maker, lives in League City. His website is marklardas.com.



Greener

The Earth is getting significantly greener because there’s more carbon dioxide in the air.

From a quarter to half of Earth’s vegetated lands has shown significant greening over the last 35 years largely due to rising levels of atmospheric carbon dioxide.

The greening represents an increase in leaves on plants and trees equivalent in area to two times the continental United States.

Of course, we can’t have a parade without someone wanting to rain on it:

While rising carbon dioxide concentrations in the air can be beneficial for plants, it is also the chief culprit of climate change.

This greening was probably not accounted for in the GCMs (General Circulation Models, i.e., climate models). At least, I’ve never seen mention of it in model descriptions. While these models include feedback mechanisms that magnify* the effects of more CO2 in the atmosphere, greening mitigates the effect by fixing atmospheric carbon in biomass. Patrick Moore has been saying this for ages. Maybe people will listen now that the evidence is overwhelming. This effect can go some way toward explaining why the GCMs have systematically over-predicted the rise in temperature. Doomsday averted again!


*If the effect of added CO2 were not enhanced by a positive feedback effect, much less warming would be predicted. The principal greenhouse gas has always been water vapor. Warmer temperatures put more water into the air, thereby magnifying the effect of added CO2 – allegedly. The flies in the ointment are that more water also makes more clouds and more CO2 makes more plants.
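To make the footnote’s argument concrete, here is the standard schematic feedback relation (warming equals the no-feedback warming divided by one minus the net feedback factor), with purely illustrative numbers not taken from any particular climate model:

    # Schematic feedback amplification: dT = dT0 / (1 - f), where f is the net
    # feedback factor.  The numbers are purely illustrative assumptions.
    def warming_with_feedback(dt0_no_feedback, feedback_factor):
        """Amplified (or damped) warming for a net feedback factor f < 1."""
        return dt0_no_feedback / (1.0 - feedback_factor)

    dt0 = 1.1   # assumed no-feedback warming for doubled CO2, degrees C (illustrative)
    print(warming_with_feedback(dt0, 0.6))   # strong net positive feedback: about 2.8 C
    print(warming_with_feedback(dt0, 0.3))   # weaker net feedback (e.g. partly offset by
                                             # clouds or greening): about 1.6 C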



AI Can’t Drive

The previous discussion about AI put me in mind of the accident in which a self-driving car hit and killed a pedestrian last March. This was of particular interest to me because, at that time, I was evaluating some lidar technology for this application for some investors.

Uber vehicle post-collision (from NTSB report)

Details were sketchy shortly after the accident. It was clear that the overall system failed but it was unclear which part. It was hard to believe that the sensors were not able to detect the presence of the pedestrian in time even though the street was dark. After a few months it became evident that the AI was at fault, not the sensors.

According to the preliminary NTSB report,

The vehicle was factory equipped with several advanced driver assistance functions by Volvo Cars, the original manufacturer. The systems included a collision avoidance function with automatic emergency braking, known as City Safety, as well as functions for detecting driver alertness and road sign information. All these Volvo functions are disabled when the test vehicle is operated in computer control but are operational when the vehicle is operated in manual control.

Therefore, safety was in the hands of Uber’s AI system. Although the sensors detected the pedestrian’s presence six seconds before impact, while the vehicle was traveling at 43 mph (about 20 m/s), there was almost no attempt to slow the car; impact was at 39 mph. The system only decided that there was cause to apply the brakes 1.3 seconds before impact, which would not have been enough time to stop. The human operator, who had previously been watching TV instead of the road, finally took action less than one second before impact.
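A back-of-the-envelope check of that timeline, using the speed and lead times quoted above and an assumed emergency braking deceleration typical of dry pavement:

    # Rough stopping-distance check.  Speed and lead times are the figures quoted
    # above; the deceleration is an assumed value for hard braking on dry pavement.
    speed = 20.0    # m/s (about 43 mph)
    decel = 7.0     # m/s^2, assumed emergency braking deceleration
    stop_time = speed / decel              # roughly 2.9 s to come to a full stop
    stop_dist = speed**2 / (2 * decel)     # roughly 29 m of braking distance

    for label, lead_s in [("first detection", 6.0), ("braking decision", 1.3)]:
        dist_available = speed * lead_s    # distance remaining at constant speed
        verdict = "enough" if dist_available > stop_dist else "not enough"
        print(f"{label}: {dist_available:.0f} m / {lead_s} s available; "
              f"need about {stop_dist:.0f} m / {stop_time:.1f} s to stop -> {verdict}")

By these numbers the first detection left ample margin, while the actual braking decision did not.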

Even though “…the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision,” it turns out that “…emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior.” This somewhat hysterically-titled article in The Guardian (I know, I know) fleshes out some more details. Ignoring the hysteria and other silly aspects in the Guardian piece, it seems that the algorithm was unable to distinguish a valid hazard from something spurious. Or, as we say in remote sensing, to distinguish signal from clutter. Any human would have had no trouble determining that this was a person pushing a bicycle, or at least something worthy of a panic stop.

The AI system failed to solve the classification problem. Classification is key because it requires a judgment: deciding which detections are worthy of action. If the judgment errs too often on the side of caution, the ride is jerky with many sudden changes in the motion. If it goes the other way, someone gets killed.

Opinions can reasonably differ on whether computer-generated voices are realistic. The situation is less ambiguous for self-driving cars. While computers have managed to excel at games with fixed rules on a well-defined domain (chess, Go), the real world is far more varied and unpredictable. Humans in the wild do not obey all the rules and often do unexpected things. The accident victim was crossing the street illegally, away from an intersection, and may have been under the influence of drugs. AI guys, this problem is harder than you think.


Scam caller saga continues…

Well, the saga continues. I posted (see the BLUE text below) on several “who called me” sites because another scammer got through NoMoRobo. Why did they get through? Well, this scammer, like many others, fakes the caller ID to that of a somewhat local number, but the number they said to call was from Florida. So NoMoRobo did not recognize the (local) number as a scammer or robo-call (telemonster).

The message on my recorder said: “This message is intended for Jolene. I’m calling in regards to a pending matter that is being in the process of being reviewed today. I’m also calling to verify that we do have the correct address on file for this individual. To avoid any further proceedings at this time you have the right to contact the information Center. Should you wish to contact them the contact number listed as 561-223-6950 and you will have to reference your file number 16112.”

The message I posted on several web sites was: “called left this number to call back, asked for someone by first name that I never heard of except in Dolly Parton song, “Jolene”, LOL. Took the number they called from and forwarded it to the number they gave, let them get a taste of their own medicine. If you have XFINITY, you can do this free of charge, forward scammer calls back to themselves. Hope it ties up their call center!”

I hope XFINITY customers who are plagued by these calls do the same.



How times have changed…

Below you will see two examples of portable storage media. So what’s next? Or what’s next that I can afford? Or will I need it?

(The 64 Gig thumb drive was in my pocket; it accidentally took a swim in the washing machine and survived a wash and two rinse cycles, not to mention the three spin cycles!)

