We’re in the middle of a huge platform shift in computing and most of us don’t even know it.  The transition is from desktop to mobile and is as real as earlier transitions from mainframes to minicomputers to personal computers to networked computers with graphical interfaces.  And like those previous transitions, this one doesn’t mean the old platforms are going away, just that they’re diminished somewhat in significance.  All of those previous platforms still exist.  And desktops, too, will remain in some form when the mobile conversion is complete, though we are probably no more than five years from seeing the peak global population of desktop computers.  We’d be there right now if we’d just figured out the I/O problem of how to stash a big display in a tiny device.  But we’re almost there.  That’s what this column is largely about.

I’ve been thinking about this topic ever since I wrote a column on an iPhone.  It wasn’t easy to do, but I researched and wrote the column, uploaded it to WordPress, and added graphics, all by jabbing fingers at that tiny screen.  It was an important test of what was possible and confirmed what I’d been guessing — that the iPhone is the first real device for the new mobile platform.  Not a great device, but as Adam Osborne used to preach, it is an adequate device, and in the early days adequate is quite enough.

This seminal role for the iPhone is mainly by chance, I think.  Its success is no more deserved than it is undeserved.  The role could have fallen to Android or WebOS if they had come earlier, or even to Windows Mobile if it had been a bit better.  Steve Jobs proved his luck again by dragging his feet just long enough to fall into the sweet spot for a whole new industry.  That’s not to say he can’t still blow it, but he has the advantage for now.

It’s important to understand just how quickly things are changing.  Part of this comes down to the hardware replacement cycle for these devices.  A PC generation is traditionally 18 months long and most of us are unwilling to be more than two generations behind, so we get a new desktop or notebook every 36 months.  Mobile devices don’t last that long, nor are they expected to.  The replacement cycle is 18 months, reinforced by carrier contract terms that hand us a new device in return for staying loyal customers.  Mobile hardware generations last nine months, and 18 months tends to be the maximum time any of us use a single device.

Think about it.  This means mobile devices are evolving twice as fast as desktops ever did.  That just about matches the rate at which wireless bandwidth is falling in price, and it matches, too, the faster-than-Moore’s-Law growth of back-end services.  Compare those first iPhones with the ones shipping today.  In less than two years the network has increased in speed by an easy 2X and the iPhone’s processor speed has doubled, compounding into a device that is at least four times more powerful than the original.  It’s a much more capable device, yet the price has only gone down and down.

This is not a celebration of the iPhone: the same performance effects apply equally to all mobile platforms.

Now just imagine what it says for the smart phones to come.  In another two years they’ll be eight times as powerful as they are today, making them the functional equivalents of today’s desktops and notebooks.  If only we could do something about those tiny screens and keyboards.
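If you want to check my math, here’s the doubling arithmetic as a quick back-of-the-envelope sketch in Python.  The nine-month mobile generation comes from above; the assumption that overall device capability doubles each generation is mine, purely for illustration.

```python
# Back-of-the-envelope sketch of the doubling arithmetic above.
# Assumptions (for illustration only): a mobile hardware generation lasts
# nine months and overall device capability roughly doubles per generation.

MONTHS_PER_GENERATION = 9
GAIN_PER_GENERATION = 2.0

def relative_capability(months: float) -> float:
    """Capability relative to today's device after `months` of compounding."""
    return GAIN_PER_GENERATION ** (months / MONTHS_PER_GENERATION)

if __name__ == "__main__":
    for months in (9, 18, 27):
        print(f"after {months:2d} months: ~{relative_capability(months):.0f}x today's device")
    # after  9 months: ~2x (one generation)
    # after 18 months: ~4x (two generations -- the four-times figure above)
    # after 27 months: ~8x (three generations, a bit over two years -- the eight-times figure)
```

The point isn’t precision; it’s that a nine-month doubling clock covers an enormous amount of ground in very little time.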

The keyboard is a tough one.  In one sense it isn’t hard to imagine it being handled through voice input.  That’s how they did it on Star Trek, right?  But there was a problem with Star Trek computing: the interface is what I think of as interrogational.  Kirk or Scotty asked the ship’s computer (a mainframe, obviously) a question that always had an answer that could be relayed in a handful of words.  The answer was “yes,” “no,” “Romulan Bird of Prey,” or “kiss your ass goodbye, Sulu.”  There’s never any nuance with an interrogational interface and not much of a range of outputs.  It’s okay for running a starship or a nuclear power plant, but because it can only speak, it is limited to what words alone can do.

I attribute this, by the way, to Gene Roddenberry’s work as a writer.  I doubt that he saw word output as a limitation, since his product was, after all, words.  TV is radio with pictures, and the words really count a lot.  But try to use them to simulate a nuclear meltdown with any degree of precision or prediction and they’ll fail you.

Our future mobile devices will use words for input, sure, but words alone won’t be enough.  Still, between voice recognition, virtual keyboards, and cutting and pasting on those little screens, there’s a lot that can be done.  It’s the output that worries me more.

I first wrote about this a decade ago when I heard that Sony was supporting research at the University of Washington on retinal scan displays — work that eventually turned into products from a Washington State company called Microvision.  They’ll shine a laser into your eye today, painting a fabulous scene on the back of your eyeball in what appears to be perfect safety, but I no more expect broad acceptance of such displays by billions (yes, BILLIONS) of users than I expect Bluetooth earphones to survive another decade.  Too clunky.

I think we’re headed in another direction and that direction is — as always — an outgrowth of Moore’s Law.  Processors get smaller every year, and as they get smaller they need less energy to run.  Modern processors are also adopting more asynchronous logic, another topic I started writing about 10 years ago and one that offers dramatic energy savings.

We’re at the point right now where primitive single-pixel displays can be built into contact lenses.  They act as user interfaces for experimental devices like automatic insulin pumps.  This already exists.  A patch of carbon nanotubes on your arm continuously monitors blood glucose levels, driving a pump that keeps your insulin supply right where it should be.  Any problem with the pump or the levels is shown by a red dot that appears in your field of view, courtesy of that contact lens.  The data connection between pump and eyeball is wireless.  The power to run that display is wireless too: the contact lens scavenges RF energy out of the air, courtesy of the mobile phone on your belt and the WiFi access point on the ceiling.
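Just to make the shape of that system concrete, here’s a toy version of the loop in Python.  Every function, threshold, and device name in it is hypothetical; I’m sketching the sense-adjust-alert pattern, not any actual product.

```python
# Toy sketch of the sense-adjust-alert loop described above.
# Every function, threshold, and device name here is hypothetical;
# a real closed-loop insulin system involves far more than this.

import random
import time

SAFE_GLUCOSE_MG_DL = (80, 140)   # hypothetical acceptable range, mg/dL

def read_glucose() -> float:
    """Stand-in for the nanotube skin sensor (simulated with noise here)."""
    return random.gauss(110, 30)

def drive_pump(glucose_mg_dl: float) -> bool:
    """Stand-in for the insulin pump controller; returns False on a fault."""
    return True   # pretend the dose was computed and delivered correctly

def set_lens_pixel(alert: bool) -> None:
    """Stand-in for the single-pixel contact-lens display (wireless in reality)."""
    print("lens pixel:", "RED" if alert else "off")

if __name__ == "__main__":
    for _ in range(5):   # a few passes instead of running forever
        level = read_glucose()
        pump_ok = drive_pump(level)
        in_range = SAFE_GLUCOSE_MG_DL[0] <= level <= SAFE_GLUCOSE_MG_DL[1]
        set_lens_pixel(not in_range or not pump_ok)
        time.sleep(0.1)
```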

As long as we’re personally connected to the network we’ll have enough power to run such displays.  No more airplane mode.

And while that display is a single pixel today, we can pretty easily predict at what point it could be the equivalent of HDTV.  Except I don’t expect we’ll ever get there.  That’s because, thanks to Ray Kurzweil’s singularity — that point at which everyday machines have more computing cycles than I do — we’ll soon have so much excess processing power that mere physical interfaces will be boring and unnecessary.
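Just to show how easy that prediction is (even if we never get there), here’s the napkin math, assuming the pixel count of such displays doubles every nine-month mobile generation.  That cadence is my extrapolation from the rates above, not anybody’s roadmap.

```python
# Napkin math: how many doublings from one pixel to a 1080p frame, and how
# long that takes if pixel count doubles every nine-month mobile generation.
# The doubling cadence is an assumption, extrapolated from the rates above.

import math

HDTV_PIXELS = 1920 * 1080          # ~2.07 million pixels in a 1080p frame
MONTHS_PER_GENERATION = 9

doublings = math.log2(HDTV_PIXELS)             # ~21 doublings from a single pixel
months = doublings * MONTHS_PER_GENERATION     # ~189 months
print(f"~{doublings:.0f} doublings, roughly {months / 12:.0f} years at one doubling per generation")
```

Call it fifteen years or so under those assumptions.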

Here’s my problem with the singularity: I don’t want to work for my computer, much less for my microwave oven, both of which are supposed to be way smarter than me by 2029, according to Ray.  My way around this problem, in the Capt. Kirk tradition, is to find difficult jobs for all that computing power to keep it from interfering with my lifestyle.

So there’s a platform transition happening. We’re in the middle of it.  The new platform is a mobile interface to a cloud network.  And the way we’ll shortly communicate with our devices, I predict, will be through our thoughts.  By 2029 (and probably a lot sooner) we’ll think our input and see pictures in our heads.

Think it can’t happen?  Twenty years ago we were running Windows 3.0 and Mac System 6.  Twenty years from now computing won’t even be a device, just a service.