The Bear’s Lair: Tech sector can’t do quality control

This column recognizes that new products and services can have technical hiccups. Stephenson’s Rocket ran over William Huskisson, a former Cabinet Minister, at the opening ceremony of the Liverpool and Manchester Railway, killing him. The Hindenburg’s new transatlantic airship service burst into flames at Lakehurst, New Jersey. The first jet airliner, the Comet 1, kept suffering catastrophic fuselage failures at high altitude, as metal fatigue cracks spread from its window openings. But the Internet is nearly 30 years old and Facebook has been around since 2004, yet they have the quality control of something in its infancy. Possibly this is because the entire industry is run by and for infants.

Old hands on the Internet agree completely that it has gone to hell but disagree as to when it happened. The younger and more optimistic of them date the transition to 2007, when the advent of smartphones (the iPhone series) pushed the less technically and intellectually capable of humanity into having an Internet capability, which they use for harassing their less popular “friends” and posting cute but vacuous cat videos. The “early adopters” among the general population, among whom this column can be counted, believe it went to hell around 1999, when Internet usage widened from 5% to 50% of the population, and passive viewing of sites became more common than active communication.

Finally, the real aristocracy of the Internet, who were coding on it and sending each other messages even before Tim Berners-Lee invented the World Wide Web in 1990, talk wistfully of an Eternal September, referring to the month of September 1993, when the first ignorant flood of America On-Line users joined up and ruined the high-level professional interactions that had previously been taking place. For them, September 1993 was eternal in the same sense as July 1789; it never ended and the floods of intellectual riff-raff kept on coming, destroying the social norms and douceur de vivre that the ancien régime had previously enjoyed.

We now have an Internet in which the software keeps breaking down – Facebook’s lengthy outage this week being just one instance of a pretty frequent class of problem – while the users are so infinitely varied that no one is well served. Professionals do not get what they want, while children and the intellectually challenged become addicted, their lives permanently warped by software whose operation is far beyond their comprehension, managed by sinister far-left techies who seek to deepen that addiction so they can monetize it, while imposing their half-witted social views on the population.

For most people, there is no way to reduce their dependence on the tech companies. Software is today sold as a service, not as a product, so that you are dependent on endless futile “updates” that make the software more sluggish and less functional. You cannot liberate yourself by using, say, Windows 95, which worked perfectly well in its day, because Microsoft no longer supports that product – sorry, service – and no modern software will work properly with it.

Software as a service is naturally profitable for the tech companies; they get to sell endless pointless new versions of the same product, while charging annual fees to buyers. That keeps earnings well propped up even in companies that have not had a genuinely useful new idea since 2005 or so (or in some cases, 1986 or so). It also infantilizes consumers; they are wholly dependent on the provider of their software, being able neither to modify it nor to leave it.

Companies that in 1996 produced a functional, simple software product that all could use, and whose features were well adapted to consumer needs, today produce a buggy software “service” that requires endless passwords and “authentications” to access, and quite often crashes, especially when an update has been added that turns out not to work. The tech sector not only has poor quality control, it has incentives to downgrade quality even further, in terms of the value that an intelligent user can gain from its products.

The decline in quality is an inevitable effect of the rise in software’s complexity. As the number of lines of code in a program increases, the number of potential interactions among them – and hence of potential bugs – increases roughly quadratically, and putting more people on the job of ironing out the bugs only worsens the problem, since every additional programmer multiplies the communication paths that must be maintained. The solution of course would be to return to simpler software, but that would be far less profitable for the software companies.
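The quadratic claim can be illustrated with a toy calculation (this sketch is mine, not the column’s): if each component of a program can in principle interact with every other, the number of pairwise interactions, each a possible source of bugs, grows as n(n-1)/2 – and the same arithmetic applies to communication paths among the programmers assigned to fix them.

```python
def potential_interactions(n: int) -> int:
    """Number of distinct pairs among n components (or n programmers).

    Each pair is a potential interaction, and hence a potential bug
    (or a communication path that must be maintained).
    """
    return n * (n - 1) // 2

# Growth is roughly quadratic: multiplying n by 10
# multiplies the pair count by roughly 100.
for n in (10, 100, 1000):
    print(n, potential_interactions(n))
```

Doubling the size of a codebase thus roughly quadruples the interaction surface, which is why testing effort fails to keep pace with program size.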

We have in the past seen other examples of new products being introduced to a public utterly untrained in using them. Automobiles are an obvious example. In Britain, they were largely the playthings of the rich until the late 1920s, and the expectation was that the rich would hire and train professional chauffeurs. (The Rolls-Royce Silver Ghost (1907-26) owners’ manual said of tire failures, a frequent problem early on: “Have your man pull the car over to the side of the road” – advice that assumed the average Rolls-Royce owner had a man to do it.) In the United States, however, the advent of the Model T in 1908 democratized car ownership – Model T owners could not afford chauffeurs. The U.S. fatality rate per hundred million vehicle miles peaked at 24.08 in 1921, the first year for which that statistic was collected; it is now one twentieth of that. The death toll per head of population peaked in 1937; the absolute death toll peaked as late as 1972. The moral of which is that users take a very long time indeed to learn how to use a new product or service properly.

In automobiles, however, there is no such chasm of capability between the most and least sophisticated users as there is in software. The teenage tearaway matures into the family driver or gets a job as a professional driver of a truck or school bus; the gaps between people’s driving capability simply reflect the gaps between people’s capability at any other activity. In software, however, the vast majority of users are passive; they are using it for light entertainment or for small-scale gathering of snippets of information.

For such people, a “walled garden” seems most appropriate. Think of them as consumers of 1950s television, with its limited number of channels and its vast social reach. The FCC laid down rules for 1950s television, to protect its viewers from pornography and subversion, while the big consumer goods companies, advertising on the medium, ensured that its content was offensive to as few people as possible.

Intellectuals scoffed, but such people could always go see an off-Broadway play or a comedy club or an art movie if they wished to be stimulated beyond what was on TV. Culture existed on several levels; leftist intellectuals and other sickos felt themselves superior by attending live performances of “Howl” or Lenny Bruce; normal well-adjusted people were more than satisfied with the TV channels’ fare of “Howdy Doody,” westerns and “I Love Lucy.” If we could provide an Internet that brought simple snippets of information, only the most bland “civic virtue” politics and lots of dog and cat videos, it would provide everything desired by at least half the current Internet users and would greatly reduce the level of Internet-related mental distress.

At the other end of the scale, the original users of the Internet regard it as a machine tool, which they can use to write better programs and construct new products and services that will satisfy some range of customers. They are like Henry Maudslay (1771-1831): they see the new medium as a mechanism by which they can change the world, just as Maudslay saw the lathe and precision engineering as tools to create the second stage of the Industrial Revolution. For them, it is indeed eternally September 1993; they need to remove the casual users from the Internet and allow it to reach its full potential in the service of their dreams. Their tools will be open-source Linux programs and applications, many of which they will develop themselves.

That leaves the rest of us, those who joined the Internet after its initial era, learning as we used it, and by now deeply missing the simple, comprehensible software of the late 1990s and the reliability that the Internet seemed capable of providing as it grew. For us, the garden does not need to be walled, but we also have no interest in delving into the intricacies of writing new state-of-the-art software. We are customers of the Internet engineering aristocracy, but we wish to pursue our intellectually bourgeois interactions with the Internet as efficiently and pleasurably as possible, without interruptions from crashing software or, worse still, bad actors seeking to hold us to ransom. Our needs are in many ways the most complex, but there is a great deal of money to be made in satisfying them properly, without the crashes and kludges of our current Internet experience.

Overall, there should be not one Internet but three: First, there should be a simple “walled garden”, protected from harm, where simple folk and the vulnerable can gambol, by all means making money for duly curated advertisers, but otherwise protected from hostile propaganda and damage. A second Internet should be more interactive, with programs and applications that do their job, without endless layers of complexity from which we derive no benefit. Third, for a very small percentage of the population, there is the Internet as “machine tool” through which they can provide the necessary programs and services for the rest of us, but also expand the frontier of humanity’s store of information.

Only in that “machine tool” Internet should information be free. In the other areas, it should be carefully held in check.

(The Bear’s Lair is a weekly column that is intended to appear each Monday, an appropriately gloomy day of the week. Its rationale is that the proportion of “sell” recommendations put out by Wall Street houses remains far below that of “buy” recommendations. Accordingly, investors have an excess of positive information and very little negative information. The column thus takes the ursine view of life and the market, in the hope that it may be usefully different from what investors see elsewhere.)