We learned last week that Microsoft (Nasdaq:MSFT) is investing $10 billion in OpenAI in return for 75% of its profits until the investment is recouped, after which Microsoft’s stake will drop to 49%. Since OpenAI’s principal asset has become the artificial intelligence writing software ChatGPT, this indicates the perceived value of AI applications. Yet we have also learned that Amazon (Nasdaq:AMZN) is slashing staff on its Alexa suite of AI products, said to be losing $10 billion annually. Alexa’s downsizing, rather than ChatGPT’s perceived value, points up the flaw in today’s AI: if centralized dreams of control or “woke” social engineering are inserted into the software, the result may be artificial, but it is certainly not intelligent.
We have now had AI applications such as Alexa and Tesla’s self-driving software for about a decade, and the snags in the technology are becoming apparent. Alexa has been shown to provide its users with only minimal assistance in their daily lives, while giving Amazon an excessive amount of information on its owners’ movements and, potentially, conversations. Amazon has attempted to extend this capability through its purchase of iRobot, the maker of Roomba robot vacuum cleaners; now, in return for a rather inadequate cleaning of your apartment floor, you can be assured that Amazon is fully cognizant of your non-cleaning activities.
The advantages of this arrangement for Amazon, the universal salesman of our lives, are obvious; its advantages to the individual consumer are far less so, unless the consumer is so “woke” as to expect to live in a universe where Big Brother is continually on the lookout for unapproved activities and thoughts.
As for Tesla’s self-driving software, whether or not it reports back to Tesla (a less frightening entity than Amazon) on your activities is less of a concern, since we do not live in our cars. Its problem is more basic: in extreme circumstances the software does not work properly. After several self-driving Tesla fatalities involving motorcyclists and pedestrians, the software seems to have been adjusted to take more account of indeterminate signals, with the inevitable over-correction: a self-driving Tesla braked unexpectedly on a California freeway, causing an eight-car pile-up.
This illustrates one current problem with AI, though not a permanent bar to it. Like all new technologies, the initial examples are imperfect. Thomas Newcomen’s steam engine had too jerky a motion to power rotating machinery and, being thermally inefficient, was extremely expensive in coal. Yet it was still enormously useful in pumping water out of coal mines, or other mines where coal was readily available. Only half a century later, with James Watt’s engines, was the Newcomen engine’s thermal inefficiency ameliorated, and it was another generation before high-pressure engines had a sufficient power-to-weight ratio to power railways. AI will improve over time, but it may take a generation for many of its more complex applications to become feasible.
At the other extreme, one AI application that appears to work quite well is the Midjourney graphic arts software, which can create a picture from a prompt in the style of a particular artist – effective enough that I used it for the cover of my upcoming book “Forging Modernity”. Like ChatGPT, it uses the entire Internet as a source of material, but in graphics this does not present a particular problem; the world’s great pictures are all contained somewhere on the Internet, without significant political discrimination between them, so given good enough software it is possible to create an excellent pastiche. The result is a pastiche, not great art, but for most applications that is all that is needed – something decorative that mildly stirs the emotions and evokes an idea is easily good enough to satisfy all but the true art connoisseur.
ChatGPT itself has increasingly become disfigured by its “woke” programming, as has been widely featured in the media. It will be as rude as you like about President Trump, but is completely mealy-mouthed about President Biden. That is a problem common among products of the tech sector; these fools think that by censoring sufficiently they can bully the populace into believing their “woke” nonsense. It is fairly easily rectified – ChatGPT’s artificial constraints can be circumvented by a sufficiently determined and capable user, and there can be no question that an open-source successor without obvious wokery will shortly appear.
There are, however, two problems with ChatGPT, or any non-woke successor at the present technological level, that appear more fundamental. First, like Midjourney, it uses the entire Internet as a source of material, but does not weight it. That allows the random blogs and Internet discussions of the last decade to overwhelm the classics of literature and history through their sheer numbers.
To test whether this was a problem, I asked ChatGPT to produce a defense of the 1815 Corn Law. That is not a subject on which wokies have strongly developed opinions. However, the information on the Internet includes almost no defense of the Corn Law, because the Whig historians writing after its 1846 repeal overwhelmingly opposed it. Add to that every student essay of the last two decades seeking to provide acceptable pap for a leftish professor, and you will see that, to ChatGPT, the evidence is overwhelmingly against the 1815 Corn Law and agricultural protection in general. Lord Liverpool’s speech of March 15, 1815, introducing the Corn Law and explaining its rationale in terms perfectly acceptable to modern ears, is available on the Internet through the pages of Hansard, but it is swamped by modern opposition that is far less relevant and knowledgeable. Hence ChatGPT cannot or will not produce an essay setting out the case for the Corn Law; it quickly degenerates into reiterating the case against it.
That illustrates the second major problem with ChatGPT: it can produce only conventional opinions. I asked it to produce an essay against AI, following from the first paragraph of this piece, even going so far as to have it pretend to be the late lamented General Francisco Franco, but it was unable to produce any but the most obvious and banal arguments, missing all the subtler second-order and third-order ones. ChatGPT’s prose is perfectly acceptable, but its opinions are dull in the extreme.
As I suggested above, AI today represents artificial stupidity, and may well continue to do so. ChatGPT will be useful for school essays, but only for the duller students, who are content with a C. More usefully, perhaps, with its “woke” programming, it will be able to produce those ineffably tedious college application essays that no thoughtful and intelligent student can write well, but which, if capably done, cause a college admissions dean to roll over in ecstasy. Using ChatGPT or its successors, with deft prompting, my grandchildren will be able to choose between Harvard, Yale and Princeton! As for journalism, ChatGPT will still further increase its quantity but not, alas, its quality, though it may be able to solve cases of writers’ block, if, like Dr. Johnson, its journalist user writes only for the money.
However, there is a deeper question: how will the ancien régime maintain its current structure when many of its favorite status-signaling activities (art and writing) are being automated first, while despised blue-collar activities like truck driving remain resistant? It is inevitable that AI will face a long series of pointless and counterproductive bans, such as that recently attempted by the New York City school system. That, and the attempted lobotomizations of the software when it gives the ‘wrong’ answer, mean the pace of progress may be harshly retarded. It is as if 18th-century politicians had discovered climate change and attempted to ban the steam engine.
There is only one consolation. In a world of “woke” AI, it will become easy to pass a Turing test; you will merely have to imagine yourself as Jeremy Clarkson and insert some joke that is utterly socially unacceptable. Alas, the Duke and Duchess of Sussex fail that test; they can prove themselves royal but can no longer prove themselves human!
Image: Midjourney AI ‘cybernetic dragon attacking woman, orthodox::3 icon::3 –ar 6:1 –version 3’
(The Bear’s Lair is a weekly column that is intended to appear each Monday, an appropriately gloomy day of the week. Its rationale is that the proportion of “sell” recommendations put out by Wall Street houses remains far below that of “buy” recommendations. Accordingly, investors have an excess of positive information and very little negative information. The column thus takes the ursine view of life and the market, in the hope that it may be usefully different from what investors see elsewhere.)