Twenty years ago, when I was writing Accidental Empires, my book about the PC industry, I included near the beginning a little rant about how good engineers were incapable of lying, because their work relied on Terminal A being positive and not negative, and if they lied about such things nothing would ever work. That was before I learned much about data security, where lying is apparently part of the game. Well, based on recent events at RSA, Lockheed Martin, and other places, I think lying should not be part of the game.

Was there a break-in? Was data stolen? Was there an unencrypted database of SecurID seeds and serial numbers? The best we can say is that we don’t really know. In some quarters that is supposed to make us feel more secure, because it means the bad guys are equally clueless. Except they aren’t: they broke in, they stole data, and they knew what the data was good for, while we (SecurID customers included, it seems) are still mainly in the dark.
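Here is why those seeds and serial numbers matter. RSA’s actual SecurID algorithm is proprietary, so this is only a sketch using the openly published TOTP scheme (RFC 6238), which works on the same principle: the token and the server share nothing but a secret seed, and whoever holds the seed can compute every code the token will ever display.

    # Sketch of seed-based one-time passwords, using the public TOTP scheme
    # (RFC 6238) as a stand-in for RSA's proprietary SecurID algorithm.
    import hashlib
    import hmac
    import struct
    import time

    def totp(seed, step=30, digits=6, now=None):
        """Derive the current one-time code from a shared secret seed."""
        counter = int((now if now is not None else time.time()) // step)
        digest = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # The server and the token share only the seed. Steal the seed file and
    # you can mint valid codes for any listed token, at any time.
    stolen_seed = b"hypothetical-seed-0001"  # an invented example entry
    print(totp(stolen_seed))  # the same code the real token shows right now

If a database mapping seeds to serial numbers really was taken, the second factor effectively evaporates, and all that stands between an attacker and a SecurID-protected login is the user’s PIN.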

A lot of this is marketing: a combination of “we are invincible” and “be afraid, be very afraid.” But a lot of it is also intended to keep us locked in to certain technologies. To this point most data security systems have been proprietary and secret. If an algorithm appears in public it escaped, was stolen, or was reverse-engineered. Why should such architectural secrecy even be required if those 1024- or 2048-bit keys really would take a thousand years to crack? Isn’t the encryption, combined with a hard limit on login attempts, good enough?

Good question.
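Certainly the arithmetic says it should be. Here is a back-of-the-envelope sketch with numbers of my own choosing: give the attacker a billion machines, each testing a billion keys per second. (For 1024- or 2048-bit public keys the practical attacks are mathematical rather than brute force, but the lesson about scale is the same.)

    # Why a well-designed cipher shouldn't need a secret design: the keyspace
    # alone is the defense (Kerckhoffs's principle). Assume an absurdly
    # generous attacker: 10**9 machines x 10**9 keys/sec = 10**18 guesses/sec.
    import math

    GUESSES_PER_SECOND = 10 ** 18
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    for bits in (128, 1024, 2048):
        log10_years = bits * math.log10(2) - math.log10(GUESSES_PER_SECOND * SECONDS_PER_YEAR)
        print("%4d-bit keyspace: about 10^%d years to exhaust" % (bits, round(log10_years)))

    # Even 128 bits comes out near 10**13 years, roughly 800 times the age
    # of the universe. The secret belongs in the key, not in the algorithm.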

Alas, the answer is “no.” There are several reasons for this, but the largest by far is that the U.S. government does not want us to have really secure networks. The government is more interested in snooping on the rest of the world’s insecure networks. The U.S. consumer can take the occasional security hit, our spy chiefs rationalize, if it means our government can snoop global traffic.

This is National Security, remember, which means ethical and common-sense rules are suspended without question.

RSA, Cisco, Microsoft and many other companies have allowed the U.S. government to breach their designs. Don’t blame the companies, though: if they didn’t play along in the U.S., they would go to jail. Build a really good 4096-bit RSA key service and watch the Justice Department introduce itself to you, too.
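The crypto itself is the easy part, which is exactly the point: the barrier is legal, not technical. As a sketch, generating a 4096-bit RSA key takes a few lines with off-the-shelf open-source tools (this example assumes the third-party Python cryptography package, my choice purely for illustration):

    # Generating a strong 4096-bit RSA key with an open-source library.
    # Assumes the third-party "cryptography" package is installed.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    pem = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption(),  # encrypt at rest in real use
    )
    print(pem.decode().splitlines()[0])  # -----BEGIN PRIVATE KEY-----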

The feds are so comfortable in this ethically challenged landscape in large part because they are also the largest single employer… on both sides. One in four U.S. hackers is an FBI informer, according to The Guardian. The FBI and Secret Service have used the threat of prison to create an army of informers among online criminals.

While security dudes tend to speak in terms of black or white hats, it seems to me that nearly all hats are in varying shades of gray.

Yet there is good news, too, because IPv6 and Open Source are beginning to close some of those security doors that have been improperly propped open. The Open Source community is building business models that may finally put some security in data security.

The U.S. government is a big supporter of IPv6, yet the National Security Agency isn’t. Cisco best practices for three-letter agencies, I’m told, include disabling IPv6 services. From the government’s perspective, their need to “manage” (their term, not mine; I would have said “control”) is greater than their need to engineer clean solutions. IPv6 is messy because it violates many existing management models.

The key winners are going to be those companies that embrace IPv6 as a competitive advantage. IPv6-ready outfits in the U.S. include Google, AT&T, and Verizon. Yahoo and Comcast still have work to do. Apple has been ready for years.

Some readers will question why I appear to be promoting the undermining of U.S. intelligence interests. Why would I promote real data security if what we have now is working so well for our spy agencies?

I’m not a spy, for one thing, but if I were a spy trying to keep my secrets secret I wouldn’t buy any of these products. I’d roll my own, which is what I think most governments have long done. So the really deep, dark secrets were probably always out of reach, meaning the low-hanging fruit is mostly simple commercial data, like the 125+ million credit card numbers stolen from Sony alone so far this year.

If the NSA needs my credit card information, let them show me why. I don’t think they do.

We’ve created a culture of self-perpetuating paranoia in military-industrial data security by building systems that are deliberately compromised, then arguing that draconian measures are required to defend the holes we’ve made ourselves. This helps the unquestioned three-letter agencies maintain political power while doing little or nothing to increase national security, and it compromises personal security for all of us.

There is no excuse for bad engineering.