“The next big thing ain’t computers,” said Oracle Chief Executive Officer Larry Ellison in a Wall Street Journal interview recently. Is he right? If so, what is the next Big Thing and when can we expect to see it?
Ellison’s statement raises a host of other interesting questions. Is biotech, his proposed “next big thing,” indeed the right candidate? Were computers really the last Big Thing? Is it economically useful for businessmen and investors to look for Big Things at all, or is it simply a spectator sport like horse racing, an innocent and enjoyable pursuit provided one doesn’t gamble real money on the result? How do you define a Big Thing anyway?
To take the last question first, it seems reasonable to define a Big Thing as a new (or greatly developed) technology or business sector that grows with extreme rapidity, dominating the media’s business coverage and generating spectacular profits for those investors and businessmen who got in on the “ground floor.”
Computers in the 1950s weren’t a Big Thing because nobody made much money on them (although they were of course an enormously exciting new technology). Instant photography (Polaroid) was a much Bigger Thing in that decade. In the 1960s computers became a Big Thing, as did plain-paper copying (Xerox). Computers were again a Big Thing in the late 1970s and early 1980s, with the invention of the PC and its associated software.
Casino gambling was a Big Thing in 1978 as an otherwise-moribund stock market saw huge gains in casino stocks with the opening of the Atlantic City resorts.
Biotech in the 1980s wasn’t a Big Thing; it was simply a mechanism for separating Initial Public Offering investors from their money. By the time of the second biotech boom, in the mid-1990s, it had become a Big Thing.
The Internet was an enormously Big Thing in the late 1990s, as it produced an entirely new capability for us all and made huge fortunes for many of its investors. In terms of actual business, of course, it was only medium-sized. However, it spun off associated Big Things in software and of course telecommunications, where the interaction of mobile telephony and the Internet is still playing itself out, albeit at a slower pace and less profitably than seemed likely 3 years ago.
OK, next question. Is Ellison right in saying that computers themselves, or more broadly (which I presume is what he means) computer-related businesses, are not going to provide a Big Thing again in the foreseeable future?
On the hardware side, I think he is. We have about three more iterations of Moore’s Law (by which computing capacity doubles every 18 to 24 months) to move through at the frontier of the technology before we run up against a very tricky barrier: computer components shrinking to molecular size.
There has of course been a huge amount of hype about “quantum computing,” by which sub-molecular quantum effects could be used to reduce component sizes further. Since I am a banker, not a physicist, by training, you will forgive me for being highly skeptical. Quantum mechanics remains a rather poorly understood science, resting on theoretical foundations that are quite shaky in parts. It thus seems likely to me that once we approach the molecular level, we will find that entirely new principles are needed to move further, which in turn will involve an entirely new logical structure for the computer, quite different from the von Neumann architecture we have been using since the 1940s. Moving in such a radically new direction may be possible, but even if it is, it will be extremely expensive at first and will require theoretical knowledge we do not currently possess, and it will thus bring Moore’s Law to a grinding pause, if not a halt, somewhere this side of 2010.
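For the arithmetic-minded, here is a back-of-envelope sketch of that timeline (my own illustration; the 2003 starting point and the three-doublings figure come from the argument above, and everything else is an assumption):

```python
# Back-of-envelope check on the Moore's Law timeline discussed above.
# Assumptions (illustrative only): counting starts in 2003, computing capacity
# doubles once every 18 to 24 months, and about three more doublings remain
# before components approach molecular size.

START_YEAR = 2003
DOUBLINGS_LEFT = 3

for months_per_doubling in (18, 24):
    years_to_wall = DOUBLINGS_LEFT * months_per_doubling / 12
    capacity_gain = 2 ** DOUBLINGS_LEFT
    print(f"One doubling every {months_per_doubling} months: "
          f"about a {capacity_gain}x gain in capacity, "
          f"with the wall arriving around {START_YEAR + years_to_wall:.1f}")

# Prints roughly 2007.5 and 2009.0, i.e. "somewhere this side of 2010."
```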
On the software side, the constraint is somewhat different: business software has grown more complex far more quickly than it has grown more capable. A word processing or spreadsheet program we use today is functionally very similar to the one we used in 1995, and bears a strong family resemblance to the one we used in 1983 (although the 1983 DOS interface would require some re-learning if we were transported back that far). In database management and business process enhancement, too, the complexity of the solution and the cost of installing the system have multiplied far more quickly than the benefits the system is able to generate. This is of course the business in which Ellison’s company, Oracle, is engaged, which is doubtless why he sees the sector as a mature one.
On the consumer side, the picture is very different. I remember playing one of the first role-playing computer games, called “Adventure,” in 1983. It was entirely verbal, with no graphics at all, and the key to success was to type in “Kill troll with ax,” more or less whatever the situation. We have come a long way since then. Today, a huge percentage of the week of almost any U.S. male under 25 is spent playing computer games of one sort or another, which would not be the case if typing “Kill troll” were all the thrill you got.
Nevertheless, we have considerably further to go. A modern 250-kilobit-per-second DSL link allows video images to be downloaded in real time, but only at poor quality (or on a 2-inch screen). I understand that the next generation of links, at 1.5 megabits per second or so, will allow full-screen downloads of TV-quality pictures.
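A rough calculation suggests why 1.5 megabits per second is about where the threshold lies. The sketch below is my own illustration; the picture size, frame rate, and color depth are assumed figures for a standard-definition TV image, not numbers from the column or the industry:

```python
# Rough bandwidth arithmetic behind the video-quality claim above.
# Assumptions (illustrative only): a 640x480 picture, 24 bits per pixel,
# 30 frames per second, and the two link speeds mentioned in the column.

WIDTH, HEIGHT = 640, 480          # roughly standard-definition TV resolution
BITS_PER_PIXEL = 24               # full color, uncompressed
FRAMES_PER_SECOND = 30

raw_bps = WIDTH * HEIGHT * BITS_PER_PIXEL * FRAMES_PER_SECOND

for label, link_bps in (("250 kbps DSL link", 250_000),
                        ("1.5 Mbps link", 1_500_000)):
    ratio = raw_bps / link_bps
    print(f"{label}: raw video is {raw_bps / 1e6:.0f} Mbps, "
          f"so it needs roughly {ratio:,.0f}:1 compression in real time")

# Compression of around 150:1 is within reach of the codecs of the day;
# nearly 900:1 is not, which is why the slower link gives you a 2-inch
# picture and the faster one gives you something close to TV quality.
```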
The money will be made, however, not in providing the links, which will be a highly commoditized business requiring huge capital investment, but in the software. Currently, most games for Sony’s PlayStation 2 and Microsoft’s Xbox are little beyond “Kill troll” in their sophistication: maybe the troll emits a realistic gurgling sound when you kill it, but that’s about all. This need not remain the case, and if manufacturers want to appeal to an audience beyond the less cognitively skilled 10-year-olds, it will have to change.
It seems inevitable that games’ lack of sophistication will change, and that changing it will be an enormously lucrative business. After all, for the first 30 years of the film industry, until 1925 or so, the artistic quality of the product was pretty abysmal. It may have been fashionable in the 1960s and 1970s to claim that Laurel and Hardy were great art, but look at them in the cold light of 2003 and you really have to doubt it. Once talking pictures came in, however, and the bugs had been ironed out, the 1930s saw the golden age of Hollywood, with films whose intellectual and emotional quality has not been surpassed since and which still move us today.
Given the enormous appeal of video games to a large section of the population (at least in terms of time spent playing them), I think it likely that, once transmission of TV-quality video has become widespread, a similar transition will take place in the video game business, with vastly more sophisticated and artistically challenging products being developed. It won’t matter if the economy is somewhat depressed (if it is), because these more sophisticated video games will be a relatively low-priced means of escapism, and will be enjoyed accordingly. Like Hollywood in the 1930s, therefore, the video game industry in the decade 2005-15 may indeed be a Big Thing, although it will be only peripherally related to the computer, and its driving force will be the quality and sophistication of its content rather than technological wizardry.
The problem with biotech, Ellison’s preferred next Big Thing, is that it’s tied up with legal difficulties in a way the information technology industry never has been, nor will be.
At the simplest level, of course, biotech is already producing a steady stream of marvelous drug therapies, and there is no reason to suppose that it won’t go on doing so. There will always be legal questions about the patent protection that such therapies enjoy, and pressure to bring their pricing down to more affordable levels, but the industry can doubtless deal with such difficulties, and continue to prosper. Nevertheless, at this level biotech is not really a Big Thing, simply a new and improved methodology for producing new drug therapies. Over time, it may change the world for the better, but in an evolutionary rather than revolutionary way.
The area of biotech that has the potential to be a Big Thing is the twin field of cloning and genetic manipulation. Unquestionably, these technologies, if and when fully developed, have the ability to change the world in ways we can now only imagine, and to make their developers immensely rich in the process. A Big Thing without question, and in my view one with enormous beneficial potential for mankind.
Unfortunately, very few people appear to agree with me, or if they do they are keeping quiet and allowing biotech’s opponents to make the running. It is likely that a total ban on cloning, at least for human reproduction and possibly even for therapeutic purposes, will be passed in the U.S. this year. An equivalent ban is already in effect in most countries of the EU, and will doubtless be made EU-wide shortly.
The objections of opponents are manifold. At the simplest level, they object to the experimentation necessary to move forward in this area, pointing out correctly that such progress may result in the production of deformed people, or in the destruction of viable human embryos, while refusing to take account of the huge number of lives that could be saved and deformities removed by these techniques once developed.
At a more fundamental level, the objection appears to be a religious one, that it is wrong for mankind to play God. Here one cannot argue; since the objection is non-rational, there can be no way to overcome it rationally. Suffice it to say that I wholly disagree, and that in non-Christian societies in East Asia and elsewhere this prohibition may appear very much less salient than in the West.
Eventually, therefore, cloning and genetic manipulation are likely to take place. However, the huge political objections that have been raised in the U.S. and Europe are likely both to delay the necessary scientific advances and to ensure that the technology, when it is developed, is developed either outside the West or possibly even outside the world’s established legal system. In either case, the prospects of legitimate profits from this technique for U.S. or European investors are both limited and a long way in the future. If a Big Thing must make investors money, therefore, this is unlikely to be one, at least in the next couple of decades.
Finally, why do we care what the next Big Thing is?
As a way of understanding our future, to the extent we can, yes, the question is interesting. But as a guideline for investment or for the choice of a business career, in the way it is used in the ever-optimistic U.S., the Big Thing is nothing short of a dangerous chimera. Human optimism being what it is, we always think ourselves especially capable of spotting the next Big Thing, just as we are confident of winning at the races or of buying the winning lottery ticket. Because of this, charlatans peddling a plausible Big Thing can always extract far more capital from the investing public than can ever be usefully deployed in bringing a Big Thing to fruition.
In the U.S., at least in good economic times, far more capital is invested in the search for Big Things than is economically justified. The return on money invested in the Big Thing search is thus substantially negative; in investing in Big Things, whether as a business or as a private investor, you are pouring your money down a rathole.
Surely the Internet bubble taught us that, at least!
(The Bear’s Lair is a weekly column that is intended to appear each Monday, an appropriately gloomy day of the week. Its rationale is that the proportion of “sell” recommendations put out by Wall Street houses remains far below that of “buy” recommendations. Accordingly, investors have an excess of positive information and very little negative information. The column thus takes the ursine view of life and the market, in the hope that it may be usefully different from what investors see elsewhere.)
This article originally appeared on United Press International.