As promised, here are my technology predictions for 2011. These columns usually begin with a review of my predictions from the previous year because it annoys me that writers make predictions without consequences. If we are going to claim expertise then our feet should be held to the fire. But last January I didn’t write a predictions column, thinking we were past all that (silly me), so there is nothing with which to embarrass myself here. More sobering still, after last year’s holiday firestorm over our naked card, Mrs. Cringely won’t let me post this year’s card. We have become so dull.

We also seem to have become verbose, because my first prediction (below) took 1400 words to write. So tell you what: this year I’ll do 10 predictions, but as separate posts. Here’s #1, which of course you knew would be… bufferbloat!

Prediction one:  Bufferbloat will become a huge problem this year.

What, you’ve never heard of bufferbloat?

Sixteen years ago Ethernet inventor Bob Metcalfe made some calculations on a napkin at Sushi Sam’s in San Mateo and concluded that the Internet was set to shortly implode. He saw client traffic growing faster than backbone capacity and felt that, unchecked, this would lead to eventual gridlock on the World Wide Web. Obviously it didn’t happen, but that doesn’t mean it didn’t come close. Props to my old friend Bob for bringing the problem to everyone’s attention in enough time for it to be avoided by dramatically expanding backbone capacity, a trend that continues to this day.

Because backbone capacity has continued to grow faster than Moore’s Law grows our ability to use that bandwidth, the very gridlock Metcalfe feared won’t (indeed can’t) happen anytime soon. It’s lost in our dust. But now we face a similar though far more insidious problem in bufferbloat, which is a big enough deal that it will be involved in three of my predictions for the coming year.

Bufferbloat is a term coined by Jim Gettys at Bell Labs for a systemic problem appearing across the Internet wherever large files or data streams are downloaded. In simple terms, our efforts to improve client performance for watching TV over the Internet are interfering with the Internet’s own ability to modulate the flow of data traffic to our devices.

Think this isn’t happening? How fast is your Internet connection today compared to, say, three years ago? Most users have 2-3 times the bandwidth today that they had three years ago. Does your connection feel 2-3 times faster? Of course it doesn’t, because it isn’t. And it isn’t because of bufferbloat.

In terms of latency, the Internet was faster 20 years ago than it is today — in many cases vastly faster. And it is getting slower every day. Unchecked, bufferbloat will eventually make the Internet unusable for some data-intensive activities.

Here’s how bufferbloat works. Your cable or DSL modem is connected to a data pipe that’s spewing bits. Internet traffic is bursty, so imagine that pipe flowing with what’s sometimes a trickle and a second later a flood of data that may then drop back to a trickle again. To deliver the best possible user experience, the more intelligent your modem is, the more it will use techniques like pre-fetching and buffering. Memory is cheap, after all, so why not grab some extra data and hold it in a memory buffer in the modem just in case it is needed, at which point it can be delivered — BAM! — to the user? Why not indeed?
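To put rough numbers on that "why not" (the buffer sizes and link rate below are assumptions for illustration, not specs of any real modem): once a buffer fills, everything in it stands between you and the network, and the delay it adds is simply its size divided by the rate at which the link drains it. A quick sketch:

```python
# Illustrative only: the latency a full buffer adds to a link.
# Buffer sizes and the 1 Mbps uplink below are assumed values.
def queue_delay_ms(buffer_bytes: int, link_bits_per_sec: float) -> float:
    """Time for a full buffer to drain through the link, in milliseconds."""
    return buffer_bytes * 8 / link_bits_per_sec * 1000

uplink = 1_000_000  # a 1 Mbps residential uplink, say
for kb in (32, 256, 1024):
    delay = queue_delay_ms(kb * 1024, uplink)
    print(f"{kb:5d} KB buffer adds {delay:6.0f} ms of delay once full")
```

A quarter-megabyte of "cheap" memory on a slow uplink is roughly two full seconds of lag for every packet stuck waiting behind it.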

Few homes today have a modem connected directly to a PC. Multiple computing devices are the norm. At our House of Boys, for example, we have three home computers, three iPod Touches, two smart phones, a Roku box, a Wii, and an Xbox 360, which means we have a network and a router, with the router adding a second buffering stage. The iPod Touches and smart phones are connected to the network solely by WiFi, adding yet another buffering stage. Then there are the devices themselves, which tend to buffer data in the OS for running apps like browsers and media players. And don’t forget the apps with their little buffer indicators.

So most of us have cascading buffers, which sound good in principle but actually screw up the flow control algorithms in TCP/IP — the networking technology that runs the Internet.

What these buffers are intended to minimize is dropped data packets, which require expensive (in terms of time) retransmissions under TCP (Transmission Control Protocol) or cause playback glitches in media apps that use the simpler UDP (User Datagram Protocol). UDP doesn’t ask for retransmissions and is used in most video and audio streaming services because it is faster.

So we’re trying to improve the user experience by minimizing dropped packets, yet TCP requires dropped packets to implement its flow control, which was developed in response to a huge NSFnet congestion collapse back in 1986. Dropped packets, which the sender detects through duplicate acknowledgments and timeouts, are the way, under TCP, that the server knows it is sending data just fast enough. If there are no dropped packets at all, the server sends data as fast as it can, which isn’t always good.
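For the curious, here's a toy sketch of that loss-driven feedback loop (real TCP congestion control is far more elaborate, and the capacity figure is arbitrary): the sender's window grows steadily until a drop, then halves. Take the drops away, as bloated buffers effectively do, and nothing ever tells the sender to slow down.

```python
# Toy AIMD (additive increase, multiplicative decrease) loop. Real TCP
# is much more sophisticated; this just shows that drops are the brake.
def simulate(rounds: int = 15, capacity: float = 10.0, drops: bool = True) -> None:
    window = 1.0  # congestion window, in arbitrary packet units
    for i in range(rounds):
        if drops and window > capacity:
            window /= 2   # loss detected: multiplicative decrease
        else:
            window += 1   # everything acknowledged: additive increase
        print(f"round {i:2d}: window = {window:5.1f}")

simulate()              # healthy case: a sawtooth around link capacity
simulate(drops=False)   # bufferbloat case: the window just keeps growing
```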

Here is the key point: if the amount of data held in buffers is more than the client can process in the expected Round-Trip Time (RTT) to the server and back, then TCP’s flow control simply stops working. The server runs like a bat out of Hell. This might be okay on a very lightly-loaded network or on a network with only two nodes (server and client) but our average Internet connection today is about 15 hops, meaning there are 13 other points of possible congestion between the server and the client. TCP flow control normally operates across all of those interim nodes, but not if bufferbloat circumvents flow control. The result is network congestion that happens at some interim node and, because TCP flow control isn’t working, we have no way of knowing which node is having the problem. So network latency balloons from milliseconds to seconds, the connection eventually fails, the buffers are all drained, then your Netflix or Hulu client begs for patience while it tries to reestablish a connection that shouldn’t have failed in the first place.
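To put numbers on "more than the client can process in the expected RTT" (the link speed and RTT here are assumptions chosen for illustration): TCP needs only about one bandwidth-delay product of data in flight, and any buffering far beyond that holds traffic the feedback loop never sees.

```python
# Bandwidth-delay product (BDP) versus buffer size. Illustrative
# numbers, not measurements of any real connection.
link_mbps = 8                     # assumed downstream link rate
rtt_ms = 100                      # assumed round-trip time to the server
bytes_per_sec = link_mbps * 1_000_000 / 8

bdp = bytes_per_sec * rtt_ms / 1000
print(f"BDP: about {bdp / 1024:.0f} KB in flight is all TCP needs")  # ~98 KB

buffer_bytes = 1024 * 1024        # suppose a 1 MB buffer along the path
print(f"Delay once that buffer fills: {buffer_bytes / bytes_per_sec:.1f} s")  # ~1 s
```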

That’s exactly what happened this Christmas when our new Xbox 360 replaced the entry-level Roku box on our big-screen TV. Either device can be used to watch Netflix, so I disconnected the Roku, moving it to another TV in the house. But when Mrs. Cringely and I began watching our Lie to Me marathon one night on Netflix after the boys were asleep, it was clear that the old Roku did a much better job streaming video than the far more powerful Xbox 360. The Xbox kept stopping every 3-5 minutes to change the video quality as the network got slower or faster. It was bufferbloat. The el cheapo Roku with its tiny memory was a better Netflix streaming device than the state-of-the-art gaming system.

I know, I know, here I am 1100 words into this mess and still on my first prediction. They’ll get a lot shorter from here on. But first I have to explain why bufferbloat is going to be a big problem in 2011. Some of it has to do with the rise of streaming video services, but most of it has to do with the retirement of PCs running Windows XP.

XP is a pretty archaic design compared to Linux or OS X or Windows 7. TCP/IP under Windows XP can buffer only 64 kilobytes at a time, which, over a typical round trip of 80 to 100 milliseconds, implies a data rate of about six megabits per second. So you can’t saturate most cable or DSL connections with a computer running Windows XP, which is good. If we all ran XP and nothing but XP, there probably wouldn’t be a bufferbloat problem at all. But you can’t wear bellbottoms forever, so we’ve moved on and in the process created this huge mess for ourselves that we’ll all be talking about by the end of the year.
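The arithmetic behind that ceiling, for anyone who wants to check it (the 85 millisecond RTT is an assumed typical wide-area figure; real paths vary): a sender can have at most one receive window of unacknowledged data in flight per round trip, so throughput tops out at window divided by RTT.

```python
# Windows XP's maximum TCP receive window without window scaling is
# 64 KB; throughput can't exceed window / RTT. The RTT is an assumption.
window_bytes = 64 * 1024
rtt_s = 0.085  # assumed 85 ms round trip
print(f"Ceiling: {window_bytes * 8 / rtt_s / 1e6:.1f} Mbps")  # about 6 Mbps
```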

The coping strategy for bufferbloat, by the way, is to minimize the buffers on your home network. Where you can adjust buffer sizes, make them as small as possible (but don’t turn them off — a little buffering is good). Routers that run open firmware like OpenWrt are nice for this. Or consider getting a DSL or cable modem aimed at gamers, because those are already optimized for lower latency. And some vendor should really offer a DSL/cable modem-router with dual-band 802.11n WiFi so there’s only one device between the ISP and you. I’d buy one.
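If you want to see whether your own network suffers, one crude check (a sketch that assumes a Unix-like system with the standard ping command on the PATH; the target host is arbitrary) is to compare latency on an idle link with latency while a big download is saturating it:

```python
# Crude bufferbloat check: idle latency versus latency under load.
# Assumes Unix-style "ping" output; the target host is arbitrary.
import re
import subprocess

def average_ping_ms(host: str = "8.8.8.8", count: int = 10) -> float:
    """Average RTT in milliseconds over `count` pings to `host`."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    times = [float(t) for t in re.findall(r"time=([\d.]+)", out)]
    return sum(times) / len(times) if times else float("nan")

print(f"Idle RTT: {average_ping_ms():.1f} ms")
input("Now start a large download or an HD stream, then press Enter...")
print(f"Loaded RTT: {average_ping_ms():.1f} ms")
# If the loaded number is ten or a hundred times the idle one, some
# buffer between you and your ISP is holding far too much data.
```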