And so our two-month retrospective comes to an end with this 17th and final chapter, circa 1996. I hope you have enjoyed it. Tomorrow I’ll be back to talk about the eBook version of this work as well as what I’ve been up to for the last eight weeks. It’s more than I ever expected… and less.

ACCIDENTAL EMPIRES

CHAPTER SEVENTEEN

DO THE WAVE

We’re floating now on surfboards 300 yards north of the big public pier in Santa Cruz, California. As our feet slowly become numb in the cold Pacific water, it’s good to ponder the fact that this section of coastline, only fifteen miles from Silicon Valley, is the home of the great white shark and has the highest incidence of shark attacks in the world. Knowing that we’re paddling not far from creatures that can bite man and surfboard cleanly in half gives me, at least, a heightened awareness of my surroundings.

We’re waiting for a wave, and can see line after line of them approaching from the general direction of Hawaii. The whole ritual of competitive surfing is choosing the wave, riding it while the judges watch, then paddling out to catch another wave. Choose the wrong wave—one that’s too big or too small—and you’ll either wipe out (fall off the board) or not be able to do enough tricks on that wave to impress the judges. Once you’ve chosen the wave, there is also the decision of how long to ride it, before heading back out to catch another. Success in competitive surfing comes from riding the biggest waves you can handle, working those waves to the max, but not riding the wave too far—because you get more total points riding a lot of short waves than by riding a few waves all the way to the beach.

Surfing is the perfect metaphor for high-technology business. If you can succeed as a competitive surfer, you can succeed in semiconductors, computers, or biotechnology. People who are astute and technically aware can see waves of technology leaving basic research labs at least a decade before they become commercially viable. There are always lots of technology waves to choose from, though it is not always clear right away which waves are going to be the big ones. Great ideas usually appear years—sometimes decades—before they can become commercial products. It takes that long to bring the cost of a high-tech product down to where it’s affordable by the masses, and it can take even longer for those masses to perceive a personal or business need for the product. Fortunately for those of us who plan to be the next Steve Jobs or Bill Gates, this means that coming up with the technical soul of our eventual empire is mainly a matter of looking down the food chain of basic research to see what’s likely to be the next overnight technosensation a few years from now. The technology is already there; we just have to find it.

Having chosen his or her wave, the high-tech surfer has to ride long enough to see if the wave is really a big one. This generally takes about three years. If it isn’t a big wave, if your company is three years old, your product has been on the market for a year, and sales aren’t growing like crazy, then you chose either the wrong wave or (by starting to ride the wave too early) the wrong time.

Software has been available on CD-ROM optical disks since the mid-1980s, for example, but that business has become profitable only recently as the number of installed CD-ROM drives has grown into the millions. So getting into the CD-ROM business in 1985 would have been getting on the wave too early.

Steve Jobs has been pouring money into NeXT Inc.—his follow-on to Apple Computer—since 1985 and still hasn’t turned a net profit. Steve finally turned NeXT from a hardware business into a software business in 1993 (following the advice in this book), selling NeXTStep, his object-oriented version of the Unix operating system. Unfortunately, Steve didn’t give up the hardware business until Canon, his bedazzled Japanese backer, had invested and lost $350 million in the venture. The only reason NeXT survives at all is that Canon is too embarrassed to write off its investment. So, while NeXT survives, Steve Jobs has clearly been riding this particular wave at least five years too long.

All this may need some further explanation if, as I suspect, Jobs attempts an initial public stock offering (IPO) for NeXT in 1996. Riding on the incredible success of its computer-animated feature film Toy Story, another Jobs company—Pixar Animation Studios—used an IPO to once again turn Steve into a billionaire. Ironically, Pixar is the company in which Jobs has had the least direct involvement. Pixar’s success and a 1996 feeding frenzy for IPOs in general suggest that Jobs will attempt to clean up NeXT’s balance sheet before taking the software company public. IPOs are emotional, rather than intellectual, investments, so they play well under the influence of Steve’s reality distortion field.

Now back to big business. There are some companies that intentionally stay on the same wave as long as they can because doing so is very profitable. IBM did this in mainframes, DEC did it in minicomputers, Wang did it in dedicated word processors. But look at what has happened to those companies. It is better to get off a wave too early (provided that you have another wave already in sight) than to ride it too long. If you make a mistake and ride a technology wave too far, then the best thing to do is to sell out to some bigger, slower, dumber company dazzled by your cash flow but unaware that you lack prospects for the future.

Surfing is best done on the front of the wave. That’s where the competition is least, the profit margins are highest, and the wave itself provides most of the energy propelling you and your company toward the beach. There are some companies, though, that have been very successful by waiting and watching to see if a wave is going to be big enough to be worth riding; then, by paddling furiously and spending billions of dollars, they somehow overtake the wave and ride it the rest of the way to the beach. This is how IBM entered the minicomputer, PC, and workstation markets. This is how the big Japanese electronics companies have entered nearly every market. These behemoths, believing that they can’t afford to take a risk on a small wave, prefer to buy their way in later. But this technique works only if the ride is a long one; in order to get their large investments back, these companies rely on long product cycles. Three years is about the shortest product cycle for making big bucks with this technique (which is why IBM has had trouble making a long-term success of its PC operation, where product cycles are less than eighteen months and getting shorter by the day).

Knowing when to move to the next big wave is by far the most important skill for long-term success in high technology; indeed, it’s even more important than choosing the right wave to ride in the first place.

Microsoft has been trying to invent a new style of surfing, characterized by moving on to the next wave but somehow taking the previous wave along with it. Other companies have tried this sport before and failed (for example, IBM with its OfficeVision debacle, where PCs were rechristened “programmable terminals”). But Microsoft is not just another company. Bill Gates knows that his success is based on the de facto standard of MS-DOS. Microsoft Windows is an adjunct to DOS—it requires that DOS be present for it to work—so Bill used his DOS franchise to popularize a graphical user interface. He jumped to the Windows wave but took the DOS wave along with him. Now he is doing the same thing with network computing, multimedia computing, and even voice recognition, making all these parts adjuncts to Windows, which is itself an adjunct to DOS. Even Microsoft’s next-generation 32-bit operating system, Windows NT, carries along the Windows code and emulates MS-DOS.

Microsoft’s surfing strategy has a lot of advantages. By building on an installed base, each new version of the operating system automatically sells millions of upgrade copies to existing users and is profitable from its first days on the market. Microsoft’s applications also have market advantages stemming from this strategy, since they automatically work with all the new operating system features. This is Bill Gates’s genius, what has made him the richest person in America. But eventually even Bill will fail, when the load of carrying so many old waves of innovation along to the next one becomes just too much. Then another company will take over, offering newer technology.

But what wave are we riding right now? Not the wave you might expect. For the world of corporate computing, the transition from the personal computing wave to the next wave, called client-server computing, has already begun. It had to.

**********

The life cycles of companies often follow the life cycles of their customers, which is no surprise to makers of hair color or disposable diapers, but has only lately occurred to the ever-grayer heads running companies like IBM. Most of IBM’s customers, the corporate computer folks who took delivery of all those mainframes and minicomputers over the past thirty years, are nearing the end of their own careers. And having spent a full generation learning the hard way how to make cantankerous, highly complex corporate computer systems run smoothly, this crew-cut and pocket-protectored gang is taking that precious knowledge off to the tennis court with them, where it will soon be lost forever.

Marshall McLuhan said that we convert our obsolete technologies into art forms, but, trust me, nobody is going to make a hobby of collecting $1 million mainframe computers.

This loss of corporate computing wisdom, accelerated by early retirement programs that have become so popular in the leaner, meaner, downsized corporate world of the 1990s, is among the factors forcing on these same companies a complete changeover in the technology of computing. Since the surviving computer professionals are mainly from the 1980s—usually PC people who not only know very little about mainframes, but were typically banned from the mainframe computer room—there is often nobody in the company to accept the keys to that room who really knows what he or she is doing. So times and technologies are being forced to change, and even IBM has seen the light. That light is called client-server computing.

Like every other important computing technology, client-server has taken about twenty years to become an overnight sensation. Client-server is the link between old-fashioned centralized computing in organizations and the newfangled importance of desktop computers. In client-server computing, centralized computers called “servers” hold the data, while desktop computers called “clients” use that data to do real work. Both types of computers are connected by a network. And getting all these dissimilar types of equipment to work together looks to many informed investors like the next big business opportunity in high technology.
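
To make that division of labor concrete, here is a minimal sketch in Java of the idea itself, not of any vendor’s product: one toy server that holds the data and one toy client that fetches it over the network and does its own work with it. The class names, host name, and port number are invented for illustration.

    // Sketch only: the server holds the "corporate data" and ships it on request;
    // the client pulls the data across the network and works on it locally.
    import java.io.*;
    import java.net.*;

    public class TinyServer {
        public static void main(String[] args) throws IOException {
            String[] records = { "Q1 revenue: 14.2", "Q2 revenue: 15.1" }; // the centralized data
            try (ServerSocket listener = new ServerSocket(9000)) {         // arbitrary port
                while (true) {
                    try (Socket client = listener.accept();
                         PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                        for (String r : records) out.println(r);            // send data to the client
                    }
                }
            }
        }
    }

    class TinyClient {
        public static void main(String[] args) throws IOException {
            try (Socket s = new Socket("server.example.com", 9000);         // hypothetical server name
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println("got: " + line);                     // the client does the real work here
                }
            }
        }
    }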

In the old days before client-server, the computing paradigm du jour was called “master-slave.” The computer—a mainframe or minicomputer—was the master, controlling dozens or even hundreds of slave terminals. The essence of this relationship was that the slaves could do nothing without the permission of the master; turn off that big computer and its many terminals were useless. Then came the PC revolution of the 1980s, in which computers got cheap enough to be bought from petty cash and so everybody got one.

In 1980, virtually all corporate computing power resided in the central computer room. By 1987, 95 percent of corporate computing power lay in desktop PCs, which were increasingly being linked together into local area networks (LANs) to share printers, data, and electronic mail. There was no going back following that kind of redistribution of power, but for all the newfound importance of the PC, the corporate data still lived in mainframes. PC users were generally operating on ad hoc data, copied from last quarter’s financial report or that morning’s Wall Street Journal.

Client-server accepts that the real work of computing will be done in little desktop machines and, for the first time, attempts to get those machines together with real corporate data. Think of it as corporate farming of data. The move to PCs swung the pendulum (in this case, the percentage of available computing power) away from the mainframe and toward the desktop. LANs have helped the pendulum swing back toward the server; they make that fluidity possible. Sure, we are again dependent on centralized data, but that is not a disadvantage. (A century ago we grew our own food too, but do we feel oppressed or liberated by modern farming?) And unlike the old master-slave relationship, PC clients can do plenty of work without even being connected to the mainframe.

Although client-server computing is not without its problems (data security, for one thing, is much harder to maintain than for either mainframes or PCs), it allows users to do things they were never able to do before. The server (or servers—there can be dozens of them on a network, and clients can be connected to more than one server at a time) holds a single set of data that is accessible by the whole company. For the first time ever, everyone is using the same data, so they can all have the same information and believe the same lies. It’s this turning of data into information (pictures, charts, graphs), of course, that is often the whole point of using computers in business. And the amount of data we are talking about is enormous: far greater than any PC could hold, and vastly more than a PC application could sort in reasonable time (that’s why the client asks the server—a much more powerful computer—to sort the data and return only the requested information). American Express, for example, has 12 terabytes of data—that’s 12,000,000,000,000 bytes—on mainframes, mainframes that it must get rid of for financial reasons. So if a company, university, or government agency wants to keep all its data accessible in one place, it needs a big server, which is still more often than not a mainframe computer. In the client-server computing business, these old but still useful mainframes are called “legacy systems,” because the client-server folks are more or less stuck with them, at least for now.

But mainframes, while good storehouses for centralized data, are not so good for displaying information. People are used to PCs and workstations with graphical user interfaces (GUIs), but mainframes don’t have the performance to run GUIs. It’s much more cost-effective to use microprocessors, putting part of the application on the desktop so the interface is quick. That means client-server.

Client-server has been around for the last ten years, but right now users are changing over in phenomenal numbers. Once you’ve crossed the threshold, it’s a stampede. First it was financial services, then CAD (computer-aided design), now ordinary companies. It’s all a matter of managing intellectual inventory more effectively. The nonhierarchical management style—the flat organizational model—that is so popular in the 1990s needs more communication between parts of the company. It requires a corporate information base. You can’t do this with mainframes or minicomputers.

The appeal of client-server goes beyond simply massaging corporate data with PC applications. What really appeals to corporate computing honchos is the advent of new types of applications that could never be imagined before. These applications, called groupware, make collaborative work possible.

The lingua franca of client-server computing is called Structured Query Language (SQL), an early-1970s invention from IBM. Client applications running on PCs and workstations talk SQL over a network to back-end databases running on mainframes or on specialized servers. The back-end databases come from companies like IBM, Informix, Oracle, and Sybase, while the front-end applications running on PCs can be everything from programs custom-written for a single corporate customer to general productivity applications like Microsoft Excel or Lotus 1-2-3, spreadsheets that have been given the ability to access SQL databases.
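
As a hedged sketch of what such a front-end program does, here is a small Java client using the standard JDBC API to hand a SQL query to a back-end server. The connection URL, account, table, and column names are invented, and a JDBC driver for whichever back end you actually run is assumed to be installed.

    // Sketch only: the back-end server does the heavy filtering, grouping, and
    // sorting over its terabytes and returns just the summary rows the client wants.
    import java.sql.*;

    public class SalesClient {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:oracle:thin:@dbserver.example.com:1521:sales";  // hypothetical server
            try (Connection con = DriverManager.getConnection(url, "report_user", "secret");
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT region, SUM(amount) AS total " +
                     "FROM orders GROUP BY region ORDER BY total DESC")) {
                while (rs.next()) {
                    // the "real work" (display, charting, what-if analysis) happens on the client
                    System.out.println(rs.getString("region") + "\t" + rs.getDouble("total"));
                }
            }
        }
    }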

Middleware is yet another type of software that sits between the client applications and the back-end database. Typically, the client program is asking for data using SQL calls that the back-end database doesn’t understand. Sometimes middleware makes the underlying mainframe database think it is talking not to a PC or workstation, but to a dumb computer terminal; the middleware simply acts as a “screen scraper,” copying data off the emulated terminal screen and into the client application. Whatever it takes, it’s middleware’s job to translate between the two systems, preserving the illusion of open computing.
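
Here is a rough Java sketch of that screen-scraper flavor of middleware: the mainframe paints what it believes is a dumb terminal screen, and the middleware lifts fields off fixed positions of that screen and hands the client structured data instead. The field positions and the Account record are invented for illustration.

    // Sketch only: each row of the emulated terminal screen is just a fixed-width string,
    // and the "scraping" is nothing more than pulling substrings from known positions.
    public class ScreenScraper {
        static String field(String[] screen, int row, int col, int len) {
            return screen[row].substring(col, col + len).trim();   // one field off the painted screen
        }

        public static Account scrape(String[] screen) {
            Account a = new Account();
            a.number  = field(screen, 2, 10, 12);                  // account number painted on row 2
            a.name    = field(screen, 3, 10, 30);                  // customer name on row 3
            a.balance = Double.parseDouble(field(screen, 5, 10, 14)); // balance on row 5
            return a;                                              // structured data for the client application
        }

        static class Account { String number; String name; double balance; }
    }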

Open computing, also called standards-based computing, is at the heart of client-server. The idea is simple: customers ought to be able to buy computers, networking equipment, and software from whoever offers the best price or the best performance, and all those products from all those companies ought to work together right out of the box. When a standard is somewhat open, other people want to share in the fruits of it. There is a chance for synergy and economies of scale. That’s what’s happening now in client-server.

What has made America so successful is the development of infrastructure—efficient distribution systems for goods, money, and information. Client-server replicates that infrastructure. The companies that will do well are those that provide components, applications, and services, just as gas stations, motels, and fast-food operations prospered as the interstate highway system was built.

For all its promise, client-server is hard to do. Open systems often aren’t as open as their developers would like to think, and the people being asked to write specialized front-end applications inside corporate computing departments often have a long learning curve to climb. Typically, these are programmers who come from a mainframe background, and they have terrible trouble designing good, high-performance graphical user interfaces. It can take a year or two to develop the skill set needed to do these kinds of applications, but once the knowledge is there, they can bang out application after application in short order.

Companies that will do less well because of the migration toward client-server computing are traditional mid-range computer companies, traditional mainframe software companies, publishers of PC personal productivity applications, and, of course, IBM.

**********

If client-server is the present wave, the next wave is probably extending those same services out of the corporation to a larger audience. For this we need a big network, an Internet. There’s that word.

The so-called information superhighway has already gone from being an instrument of oedipal revenge to becoming the high-tech equivalent of the pet rock: everybody’s got to have it. Young Senator (later vice-president) Al Gore’s information superhighway finally overshadows Old Senator Gore’s (Al’s father’s) interstate highway system of the 1950s; and an idea that not long ago appealed only to Democratic policy wonks now forms the basis of an enormous nonpartisan movement. Republicans and Democrats, movie producers and educators, ad executives and former military contractors, everyone wants their own on-ramp to this digital highway that’s best exemplified today by the global Internet. Internet fever is sweeping the world and nobody seems to notice or care that the Internet we’re touting has some serious flaws. It’s like crossing a telephone company with a twelve-step group. Welcome to the future.

The bricks and mortar of the Internet aren’t bricks and mortar at all, but ideas. This is probably the first bit of public infrastructure anywhere that has no value, has no function at all, not even a true existence, unless everyone involved is in precise agreement on what they are talking about. Shut the Internet down and there isn’t rolling stock, rights-of-way, or railway stations to be disposed of, just electrons. That’s because the Internet is really a virtual network constructed out of borrowed pieces of telephone company.

Institutions like corporations and universities buy digital data services from various telephone companies, link those services together, and declare it to be an Internet. A packet of data (the basic Internet unit of 1200 to 2500 bits that includes source address, destination address, and the actual data) can travel clear around the world in less than a second on this network, but only if all the many network segments are in agreement about what constitutes a packet and what they are obligated to do with one when it appears on their segment.
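
A toy rendering of that packet in Java, just to make the pieces visible: a source address, a destination address, and the data itself, with the payload capped so the whole thing stays roughly within the size range quoted above. The field names and limit are illustrative, not the real IP header layout.

    // Sketch only: what every network segment must agree a "packet" contains.
    public class Packet {
        final String source;        // where the packet came from
        final String destination;   // where every segment agrees to forward it
        final byte[] data;          // the actual payload

        Packet(String source, String destination, byte[] data) {
            if (data.length > 256) {                      // ~2,000 bits of payload, near the upper bound above
                throw new IllegalArgumentException("payload too large for one packet");
            }
            this.source = source;
            this.destination = destination;
            this.data = data;
        }

        int sizeInBits() {                                // rough count: addresses plus payload
            return (source.length() + destination.length() + data.length) * 8;
        }
    }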

The Internet is not hierarchical, and it is not centrally managed. There is no Internet czar and hardly even an Internet janitor, just a quasi-official body called the Internet Engineering Task Force (IETF), which through consensus somehow comes up with the evolving technical definition of what “Internet” means. Anyone who wants to can attend an IETF meeting, and you can even participate over the Internet itself by digital video. The evolving IETF standards documents, called Requests for Comments (RFCs), are readable on big computers everywhere. There are also a few folks in Virginia charged with handing out the dwindling supply of Internet addresses to whoever asks for them. These address-givers aren’t even allowed to say no.

This lack of structure extends even to the map of the Internet showing how its several million host computers are connected to each other: such a map simply doesn’t exist, and nobody even tries to draw one. Rather than being envisioned as a tree or a grid or a ring, the Internet topology is most often described as a cloud. Packets of data enter the cloud in one place and leave it in another, and no money at all is exchanged for whatever voodoo takes place within the cloud to get them from here to there. This is no way to run a business.

Exactly. The Internet isn’t a business and was never intended to be one. Rather, it’s an academic experiment from the 1960s to which we are trying, so far without much success, to apply a business model. But that wasn’t enough to stop Netscape Communications, Inc., publisher of software for using the Internet’s World Wide Web, from having the most successful stock offering in Wall Street history. Thus, less than two years after Netscape opened for business, company cofounder Jim Clark became Silicon Valley’s most recent billionaire.

Today’s Internet evolved from the ARPAnet, which was yet another brainchild of Bob Taylor. As we already know, Taylor was responsible in the 1970s for much of the work at Xerox PARC. In his pre-PARC days, while working for the U.S. Department of Defense, he funded development of the ARPAnet so his researchers could stay in constant touch with one another. And in that Department of Defense spirit of $900 hammers and $2000 toilet seats, it was Taylor who declared that there would be no charges on the ARPAnet for bandwidth or anything else. This was absolutely the correct decision to have made for the sake of computer research, but it’s now a problem in the effort to turn the Internet into a business.

Bandwidth flowed like water and it still does. There is no incentive on the Internet to conserve bandwidth (the amount of network resources required to send data over the Internet). In fact, there is an actual disincentive to conserve, based on “pipe envy”: every nerd wants the biggest possible pipe connecting him or her to the Internet. (We call this “network boner syndrome”— you know, mine is bigger than yours.) And since the cost per megabit drops dramatically when you upgrade from a 56K leased line (56,000 bits-per-second) to a T-1 (1.544 megabits per second) to a T-3 (45 megabits per second), some organizations deliberately add services they don’t really need simply to ratchet up the boner scale and justify getting a bigger pipe.

The newest justification for that great big Internet pipe is called Mbone, the multicast backbone protocol that makes it possible to send digital video and CD-quality audio signals over the Internet. A protocol is a set of rules approved by the IETF defining what various types of data look like on the Internet and how those data are to be handled between network segments. If you participate in an IETF meeting by video, you are using Mbone. So far Mbone is an experimental protocol available on certain network segments, but all that is about to change.

Some people think Mbone is the very future of the Internet, claiming that this new technology can turn the Internet into a competitor for telephone and cable TV services. This may be true in the long run, five or ten years hence, but for most Internet users today Mbone is such a bandwidth hog that it’s more nightmare than dream. That’s because the Internet and its fundamental protocol, called TCP/IP (Transmission Control Protocol/Internet Protocol), operate like a telephone party line, and Mbone doesn’t practice good phone etiquette.

Just like on an old telephone party line, the Internet has us all talking over the same wire and Mbone, which has to send both sound and video down the line, trying to keep voices and lips in sync all the while, is like that long-winded neighbor who hogs the line. Digital video and audio means a lot of data—enough so that a 1.544-megabit-per-second T-1 line can generally handle no more than four simultaneous Mbone sessions (audio-only Mbone sessions like Internet Talk Radio require a little less bandwidth). Think of it this way: T-1 lines are what generally connect major universities with thousands of users each to the Internet, but each Mbone session can take away 25 percent of the bandwidth available for the entire campus. Even the Internet backbone T-3 lines, which carry the data traffic for millions of computers, can handle just over 100 simultaneous real-time Mbone video sessions: hardly a replacement for the phone company.
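
The arithmetic behind those numbers is worth spelling out, using only the figures quoted above; a quick sketch, not a measurement:

    // Back-of-the-envelope math: take the per-session bandwidth implied by
    // "four sessions per T-1" and see what a campus T-1 and a backbone T-3 can carry.
    public class MboneMath {
        public static void main(String[] args) {
            double t1 = 1.544;                 // T-1 capacity, megabits per second
            double t3 = 45.0;                  // T-3 capacity, megabits per second
            double session = t1 / 4;           // one Mbone video session, about 0.386 Mbps

            System.out.printf("One session uses about %.0f%% of a campus T-1%n", 100 * session / t1);   // ~25%
            System.out.printf("A T-3 backbone carries about %d sessions%n", (int) (t3 / session));       // ~116, "just over 100"
        }
    }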

Mbone video is so bandwidth-intensive that it won’t even fit in the capillaries of the Internet, the smaller modem links that connect millions of remote users to the digital world. And while these users can’t experience the benefits of Mbone, they share in the net cost, because Mbone is what’s known as a dense-mode Internet protocol. Dense mode means that the Internet assumes almost every node on the net is interested in receiving Mbone data and wants that data as quickly as possible. This is assured, according to RFC 1112, by spreading control information (as opposed to actual broadcast content) all over the net, even to nodes that don’t want an Mbone broadcast—or don’t even have the bandwidth to receive one. This is the equivalent of every user on the Internet receiving a call several times a minute asking if they’d like to receive Mbone data.

These bandwidth concerns will go away as the Internet grows and evolves, but in the short term, it’s going to be a problem. People say that the Internet is carrying multimedia today, but then dogs can walk on their hind legs.

There is another way in which the Internet is like a telephone party line: anyone can listen in. Despite the fact that the ARPAnet was developed originally to carry data between defense contractors, there was never any provision made for data security. There simply is no security built into the Internet. Data security, if it can be got at all, has to be added on top of the Internet by users and network administrators. Your mileage may vary.

The Internet is vulnerable in two primary ways. Internet host computers are vulnerable to invasion by unauthorized users or unwanted programs like viruses, and Internet data transmissions are vulnerable to eavesdropping.

Who can read that e-mail containing your company’s deepest secrets? Lots of people can. The current Internet addressing scheme for electronic mail specifies a user and a domain server, such as my address (bob@cringely.com). “Bob” is my user name and “cringely.com” is the name of the domain server that accepts messages on my behalf. (Having your own domain [like cringely.com] is considered very cool in the Internet world. At last, I’m a member of an elite!) A few years ago, before the Internet was capable of doing its own message routing, the addressing scheme required the listing of all network links between the user and the Internet backbone. I recall being amazed to see back then that an Internet e-mail message from Microsoft to Sun Microsystems at one point passed through a router at Apple Computer, where it could be easily read.

These sorts of connections, though now hidden, still exist, and every Internet message or file that passes through an interim router is readable and recordable at that router. It’s very easy to write a program called a “sniffer” that records data from or to specific addresses or simply records user addresses and passwords as they go through the system. The writer of the sniffer program doesn’t even have to be anywhere near the router. A 1994 break-in to New York’s Panix Internet access system, for example, where hundreds of passwords were grabbed, was pulled off by a young hacker connected by phone from out of state.

The way to keep people from reading your Internet mail is to encrypt it, just as a spy would. This means, of course, that you must also find a way for those who receive your messages to decode them, further complicating network life for everyone. The way to keep those wily hackers from invading Internet domain servers is by building what are called “firewalls”—programs that filter incoming packets, trying to reject those that seem to have evil intent. Either technique can be very effective, but neither is built into the Internet. We’re on our own.
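
As a minimal sketch of the firewall idea, and nothing like a production firewall, here is the kind of decision such a filter makes for each incoming packet. The allowed ports and the blocked host are invented for illustration.

    // Sketch only: admit a packet only if its source isn't on a blocklist and
    // its destination port is one the site has explicitly chosen to allow.
    import java.util.Set;

    public class Firewall {
        private final Set<Integer> allowedPorts = Set.of(25, 80);              // mail and web only
        private final Set<String> blockedSources = Set.of("badguy.example.com"); // hypothetical bad actor

        boolean admit(String sourceHost, int destinationPort) {
            if (blockedSources.contains(sourceHost)) return false;             // reject known troublemakers
            return allowedPorts.contains(destinationPort);                     // drop everything else
        }
    }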

Still, there is much about the Internet to be excited about. Many experts are excited about a new Internet programming language from Sun Microsystems called Java. What became the Java language was the invention of a Sun engineer named James Gosling. When Gosling came up with the idea, the language was called Oak, not Java, and it was aimed not at the Internet and the World Wide Web (WWW didn’t even exist in 1991) but at the consumer electronics market. Oak was at the heart of *7, a kind of universal intelligent remote control Sun invented but never manufactured. After that it was the heart of an operating system for digital television decoders for Time Warner. This project also never reached manufacturing. By mid-1994, the World Wide Web was big news and Oak became Java. The name change was driven solely by Sun’s inability to trademark the name Oak.

Java is what Sun calls an “architecture neutral” language. Think of it this way: If you found a television from thirty years ago and turned it on, you could use it today to watch TV. The picture might be in black and white instead of color, but you’d still be entertained. Television is backward-compatible and therefore architecture neutral. However, if you tried to run Windows 95 on a computer built thirty years ago, it simply wouldn’t work. Windows 95 is architecture specific.

Java language applications can execute on many different processors and operating system architectures without the need to rewrite the applications for those systems. Java also has a very sophisticated security model, which is a good thing for any Internet application to have. And it uses multiple program threads, which means you can run more than one task at a time, even on what would normally be single-tasking operating systems.

Most people see Java as simply a way to bring animation to the web, but it is much more than that. Java applets are little applications that are downloaded from a WWW server and run on the client workstation. At this point an applet usually means some simple animation like a clock with moving hands or an interactive map or diagram, but a Java applet can be much more sophisticated than that. Virtually any traditional PC application—like a word processor, spreadsheet, or database—can be written as one or more Java applets. This possibility alone threatens Microsoft’s dominance of the software market.
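
For the record, here is roughly what one of those simple animated applets looks like, written against the classic java.applet API of the era: the browser downloads the compiled class from a web server and runs it on the client workstation, with the animation running in its own thread, the multithreading mentioned above. Treat it as a period sketch rather than anything Sun actually shipped.

    // Sketch only: a "clock" applet that repaints itself once a second.
    import java.applet.Applet;
    import java.awt.Graphics;
    import java.util.Date;

    public class ClockApplet extends Applet implements Runnable {
        private Thread ticker;

        public void start() {              // the browser calls this when the page is shown
            ticker = new Thread(this);
            ticker.start();
        }

        public void stop() {               // and this when the user leaves the page
            ticker = null;
        }

        public void run() {
            while (Thread.currentThread() == ticker) {
                repaint();                 // ask the browser to call paint() again
                try { Thread.sleep(1000); } catch (InterruptedException e) { return; }
            }
        }

        public void paint(Graphics g) {
            g.drawString(new Date().toString(), 10, 20);   // redraw the current time each second
        }
    }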

Sun Microsystems’ slogan is “the network is the computer,” and this is fully reflected in Java, which is very workstation-centric. By this I mean that most of the power lies in the client workstation, not in the server. Java applications can be run from any server that runs the World Wide Web’s protocol. This means that a PC or Macintosh is as effective a Java server as any powerful Unix box. And because whole screen images aren’t shipped across the network, applets can operate using low-bandwidth dial-up links.

The Internet community is very excited about Java, because it appears to add greater utility to an already existing resource, the World Wide Web. Since the bulk of available computing power lies out on the network, residing in workstations, that’s where Java applets run. Java servers can run on almost any platform, which is an amazing thing considering Sun is strictly in the business of building Unix boxes. Any other company might have tried to make Java run only on its own servers—that’s certainly what IBM or Apple would have done.

In a year or two, when there are lots of Java browsers and applets in circulation, we’ll see a transformation of how the Internet and the World Wide Web are used. Instead of just displaying text and graphical information for us, our web browsers will work with the applets to actually do something with that data. Companies will build mission-critical applications on the web that will have complete data security and will be completely portable and scalable. And most important of all, this is an area of computing that Microsoft does not dominate. As far as I can see, they don’t even really understand it yet, leaving room for plenty of other companies to innovate and be successful.

**********

The wave after the Internet might be the building of a real data network that extends multimedia services right into our homes. Since this is an area that is going to require very powerful processor chips, you’d think that Intel would be in the forefront, but it’s not. Has someone already missed the wave?

Intel rules the microprocessor business just as Microsoft rules the software business: by being very, very aggressive. Since the American courts have lately ruled in favor of the companies that clone Intel processors, the current strategy of Intel president Andy Grove is to outspend Intel’s opponents. Grove wants to speed up product development so that each new family of Intel processors appears before competitors have had a chance to clone the previous family. This is supposed to result in Intel’s building the very profitable leading-edge chips, leaving its competitors to slug it out in the market for commodity processors. Let AMD and Cyrix make all the 486 chips they want, as long as Intel is building all the Pentiums.

Andy Grove is using Intel’s large cash reserves (the company has more than $2.5 billion in cash and no debt) to violate Moore’s Law. Remember, Moore’s Law was divined in 1965 by Gordon Moore, now the chairman of Intel and Andy Grove’s boss. Moore’s Law states that the number of transistors that can be etched on a given piece of silicon will double every eighteen months. This means that microprocessor computing power will naturally double every eighteen months, too. Alternatively, Moore’s Law can mean that the cost of buying the same level of computing power will be cut in half every eighteen months. This is why personal computer prices are continually being lowered.
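
The arithmetic is worth spelling out. At a doubling every eighteen months, a decade is about 6.7 doublings, roughly a hundredfold increase in transistors, or, turned around, the same computing power for about one percent of the price; a quick sketch:

    // Moore's Law as stated here: one doubling every eighteen months.
    public class MooresLaw {
        public static void main(String[] args) {
            double months = 120;                            // ten years
            double doublings = months / 18;                 // about 6.7 doublings
            double growth = Math.pow(2, doublings);         // about 100x the transistors
            System.out.printf("%.1f doublings, about %.0fx the transistors, "
                    + "or %.1f%% of the original cost for the same power%n",
                    doublings, growth, 100.0 / growth);
        }
    }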

Although technical development and natural competition have always fit nicely with the eighteen-month pace of Moore’s Law, Andy Grove knows that the only way to keep Intel ahead of the other companies is to force the pace. That’s why Intel’s P-6 processor appeared in mid-1995, more than a year before tradition dictated it ought to. And the P-7 will appear just two years after that, in 1997.

This accelerated pace is accomplished by running several development groups in parallel, which is incredibly expensive. But this same idea of spending, spending, spending on product development is what President Reagan used to force the end of communism, simply spending faster on new weapons than his enemies could. Andy Grove figures that what worked for Reagan will also work for Intel.

But what if the computing market takes a sudden change of direction? If that happens, wouldn’t Intel still be ahead of its competitors, but ahead of them at running in the wrong direction? That’s exactly what I think is happening right now.

Intel’s strategy is based on the underlying idea that the computing we’ll do in the future is a lot like the computing we’ve done in the past. Intel sees computers as individual devices on desktops, with local data storage and generally used for stand-alone operation. The P-6 and P-7 computers will just be more powerful versions of the P-5 (Pentium) computers of today. This is not a bad bet on Intel’s part, but sometimes technology does jump in a different direction; and that’s just what is happening right now, with all this talk of a digital convergence of computing and communication.

The communications world is experiencing an explosion of bandwidth. Fiber-optic digital networks and new technologies like asynchronous transfer mode (ATM) networking are leading us toward a future where we’ll mix voice, data, video, and music, all on the same lines that today deliver either analog voice or video signals. The world’s telephone companies are getting ready to offer us any type of digital signal we want in our homes and businesses. They want to deliver to us everything from movies to real-time stock market information, all through a box that in America is being called a “set-top device.”

What’s inside this set-top device? Well, there is a microprocessor to decode and decompress the digital data-stream, some memory, a graphics chip to drive a high-resolution color display, and system software. Sounds a lot like a personal computer, doesn’t it? It sure sounds that way to Microsoft, which is spending millions to make sure that these set-top devices run Windows software.

Even with telephone and cable television companies planning to roll out these new services over the next two to three years, Intel has not persuaded a single maker of set-top devices to use its processors. Instead, Apple, General Instrument, IBM, Scientific Atlanta, Sony, and many other manufacturers have settled on Motorola’s PowerPC processor family. And for good reason: PowerPC 602 processors cost only $20 each in large quantities, compared with more than $80 for a Pentium.

Intel has been concentrating so hard on the performance side of Moore’s Law that the company has lost sight of the cost side. The new market for set-top devices—at least 1 billion set-top devices in the next decade—demands good performance and low cost.

But what does that have to do with personal computing? Plenty. That PowerPC 602 yields a set-top device with graphics performance equivalent to that of a Silicon Graphics Indigo workstation, yet it will cost users only $200. Will people want to sit at their computer when they can find more computing power (and more network services) available on their TV?

The only way that a new software or hardware architecture can take over the desktop is when that desktop is undergoing rapid expansion. It happened that way after 1981, when 500,000 CP/M computers were replaced over a couple of years by more than 5 million MS-DOS computers. The market grew by an order of magnitude and all those new machines used new technology. But these days we’re mainly replacing old machines, rather than expanding our user base, and most of the time we’re using our new Pentium hardware to emulate 8086s at faster and faster speeds. That’s not a prescription for revolution.

It’s going to take another market expansion to drive a new software architecture, and since nearly every desk that can support a PC already has one sitting there, the expansion is going to have to happen where PCs aren’t. The market expansion that I think is going to take place will be outside the office, to the millions of workers who don’t have desks, and in the home, where computing has never really found a comfortable place. We’re talking about something that’s a cross between a television and a PC—the set-top box.

Still, a set-top device is not a computer, because it has no local storage and no software applications, right? Wrong. The set-top device will be connected to a network, and at the other end of that network will be companies that will be just as happy to download spreadsheet code as they are to send you a copy of Gone with the Wind. With faster network performance, a local hard disk is unnecessary. There goes the PC business. Also unnecessary is owning your own software, since it is probably cheaper to rent software that is already on the server. There goes the software business, too. But by converting, say, half the television watchers in the world into computer users, there will be 1 billion new users demanding software that runs on the new boxes. Here come whole new computer hardware and software industries—at least if Larry Ellison gets his way.

Ellison is the “other” PC billionaire, the founder of Oracle Systems, a maker of database software. The legendary Silicon Valley rake, who once told me he wanted to be married “up to five days per week,” has other conquests in mind. He wants to defeat Bill Gates.

“I think personal computers are ridiculous,” said Ellison. “It’s crazy for me to have this box on my desk into which I pour bits that I’ve brought home from the store in a cardboard box. I have to install the software, make it work, and back up my data if I want to save it from the inevitable hard disk crash. All this is stupid: it should be done for me.

“Why should I have to go to a store to buy software?” Ellison continued. “In a cardboard box is a stupid way to buy software. It’s a box of bits and not only that, they are old bits. The software you buy at a store is hardly ever the latest release.

“Here’s what I want,” said Ellison. “I want a $500 device that sits on my desk. It has a display and memory but no hard or floppy disk drives. On the back it has just two ports—one for power and the other to connect to the network. When that network connection is made, the latest version of the operating system is automatically downloaded. My files are stored on a server somewhere and they are backed up every night by people paid to do just that. The data I get from the network is the latest, too, and I pay for it all through my phone bill because that’s what the computer really is—an extension of my telephone. I can use it for computing, communicating, and entertainment. That’s the personal computer I want and I want it now!”

Larry Ellison has a point. Personal computers probably are a transitional technology that will be replaced soon by servers and networks. Here we are wiring the world for Internet connections, and yet we somehow expect to keep using our hard disk drives. Moving to the next standard of networking is what it will take to extend computing to the majority of citizens. Using a personal computer has to be made a lot easier if my mom is ever going to use one. The big question is when it all happens. How soon is soon? Well, the personal computer is already twenty years old, but my guess is it will look very different in another ten years.

Oracle, Ellison’s company, wants to provide the software that links all those diskless PCs into the global network. He thinks Microsoft is so concentrated on the traditional stand-alone PC that Oracle can snatch ownership of the desktop software standard when this changeover takes place. It just might succeed.

**********

If there’s a good guy in the history of the personal computer, it must be Steve Wozniak, inventor of the Apple I and Apple II. Prankster and dial-a-jokester, Woz was also the inventor of a pirated version of the VisiCalc spreadsheet called VisiCrook that not only defeated VisiCalc’s copy protection scheme but ran five times faster than the original because of some bugs that he fixed along the way. Steve Wozniak is unique, and his vision of the future of personal computing is unique, too, and important. Wozniak is no longer in the computer industry. His work is now teaching computer skills to fifth- and sixth-grade students in the public schools of Los Gatos, California, where he lives. Woz teaches the classes and he funds the classes he teaches. Each student gets an Apple PowerBook 540C notebook computer, a printer, and an account on America Online, all paid for by the Apple cofounder. Right now, Woz estimates he has 100 students using his computers.

“If I can get them using the computer and show them there is more that they can do than just play games, that’s all I want,” said Wozniak. “Each year I find one or two kids who get it instantly and want to learn more and more about computers. Those are the kids like me, and if I can help one of them to change the world, all my effort will have been worthwhile.”

As a man who is now more a teacher than an engineer, Woz views the future from the peculiar perspective of the computer educator trying to function in the modern world. Woz’s concern is with Moore’s Law, the very engine of the PC industry that has driven prices continually down and sales continually up. Woz, for one, can’t wait for Moore’s Law to be repealed.

Huh?

Moore’s Law has held for the last thirty years and will probably continue to hold for another decade. But while the rest of the computing world waits worriedly for that moment when the lines etched on silicon wafers get so thin that they are equal to the wavelength of the light that traces them—the technical dead end for photolithography—Steve Wozniak looks forward to it. “I can’t wait,” he said, “because that’s when software tools can finally start to mature.”

While the rest of us fear that the end of Moore’s Law means the end of progress in computer design, Wozniak thinks it means the true coming of age for personal computers—a time to be celebrated. “In American schools today a textbook lasts ten years and a desk lasts twenty years, but a personal computer is obsolete when it is three years old,” he said. “That’s why schools can’t afford computers for every child. And every time the computer changes, the software changes, too. That’s crazy.

“If each personal computer could be used for twenty years, then the schools could have one PC for each kid. But that won’t happen until Moore’s Law is played out. Then the hardware architectures can stabilize and the software can as well. That’s when personal computers will become really useful, because they will have to be tougher. They’ll become appliances, which is what they should always have been.”

To Woz, the personal computer of twenty years from now will be like haiku: there won’t be any need to change the form, yet artists will still find plenty of room for expression within that form.

**********

We overestimate change in the short term by supposing that dominant software architectures are going to change practically overnight, without an accompanying change in the installed hardware base. But we also underestimate change by not anticipating new uses for computers that will probably drive us overnight into a new type of hardware. It’s the texture of the change that we can’t anticipate. So when we finally get a PC in every home, it’s more likely to be as a cellular phone with sophisticated computing ability thrown in almost as an afterthought, or it will be an ancillary function to a 64-bit Nintendo machine, because people need to communicate and be entertained, but they don’t really need to compute.

Computing is a transitional technology. We don’t compute to compute, we compute to design airplane wings, simulate oil fields, and calculate our taxes. We compute to plan businesses and then to understand why they failed. All these things, while parading as computing tasks, are really experiences. We can have enough power, but we can never have enough experience, which is why computing is beginning a transition from being a method of data processing to being a method of communication.

People care about people. We watch version after version of the same seven stories on television simply for that reason. More than 80 percent of our brains are devoted to processing visual information, because that’s how we most directly perceive the world around us. In time, all this will be mirrored in new computing technologies. We’re heading on a journey that will result, by the middle of the next decade, in there being no more phones or televisions or computers. Instead, there will be billions of devices that perform all three functions, and by doing so, will tie us all together and into the whole body of human knowledge. It’s the next big wave, a veritable tsunami. Surf’s up!