There are more things to talk about than Donald Trump, though I doubt that Donnie agrees with me. But we have to get on with our lives which, at least in my case, means getting on with my reading. Where does all the crap I write here come from but reading, talking to people, and waiting in line at Starbucks? Nowhere else! And if you want to be like me you may choose to read a new book by Michael Lewis, The Undoing Project: A Friendship That Changed Our Minds. The book is very good, it's very well written, and it will tell you a lot about how decisions are actually made. But if we are looking forward instead of backward here, the book and its content don't really matter that much, because we don't decide nearly as much as we think we do. We don't decide as much as we used to. In fact I'm about to argue that we're well into the Post-Decision Age. It's pretty much out of our hands.

Lewis's book explains. He's not breaking new ground but rather rediscovering old ground and explaining why it matters. His earlier book Moneyball showed how the Oakland Athletics used statistics to win baseball games, while this new book essentially takes the other side and explains why most of us (including many baseball managers) are not like the Oakland A's.

From a content perspective this turns out to be a book about a book. Lewis explains, and puts in a dreamy bromantical context, the work of two academics, Daniel Kahneman and Amos Tversky, that was already shared with the world in Kahneman's 2011 book Thinking, Fast and Slow. Read both books. If you can't or won't do that, then at least look at this charming video review of Kahneman's book that explains the basics. If that gets you excited you can watch an entire hour of Kahneman discussing the same subject at Google. Finally, if you want a taste of the Lewis book, here's an excerpt from Vanity Fair.

What Kahneman and Tversky figured out is that we have ancient brains that generally don't do the math, because math has only been around for 5,000 years or so while our ancestors have been walking the Earth for two million years and more. Our decision-making processes, such as they are, are built for maximizing survival, not success. Early hunter-gatherers weren't so much bothered with optimization as with just not being eaten. This put an emphasis on making decisions quickly.

We see fast-versus-slow decision-making in many aspects of life. I used to teach archery — yes, archery, who would have thought it? — and fast-versus-slow is at the very heart of that sport. In archery there are sight shooters and instinct shooters. Sight shooters are the archers you see in the Olympic Games. They take up to a minute to very carefully release one arrow at the exact correct moment. Instinct shooters release the arrow when it feels right and shoot many times more arrows as a result. Sight shooters win all the medals. Instinct shooters save your ass in a battle. It's all about maximizing survival.

Back in the early 1980s I wrote about a retired engineer who played the horses at Bay Meadows, a thoroughbred race track in San Mateo, California. He had developed an expert system to guide his betting, crunching out the results on his Cromemco computer. That engineer was the horse racing equivalent of a sight shooter where nearly all the other bettors worked entirely on instinct. And because horse betting is a parimutuel system where the bets immediately affect the odds, he wasn’t so much betting on the horses as betting against the other people at the track. If he bet correctly and the horde of bettors bet incorrectly he could make a lot of money. His betting system worked well and the guy was miserable as a result.

Here's one reason why he was miserable. His personality was that of an instinct shooter but he was forcing himself to be a sight shooter. It made sense, he understood it, but couldn't he just once give in to a whim and increase his bet on Nobody's Fool in the 4th race? After all, just look at that beautiful horse, and the jockey is wearing my favorite color! Nope. One emotional bet could wipe out an entire day's results. His system was conservative and made a consistent eight percent per day, so don't screw with it.

The other reason he was miserable was that same eight percent per day. His optimal approach would have been to bet fairly large sums, keeping them just small enough not to seriously move the parimutuel odds, then re-invest his winnings to gain what the lady at the bank called "the miracle of compound interest." He figured the track was good for up to $500,000 per year in easy winnings based on about four hours of work per day, 100 days per year ($1,250 per hour!). But taking $5,000 per day every day at the betting window would gain the notice of unsavory characters who would want to steal either his winnings or his system. So he bitterly kept his winnings down to $500 per day ($125 per hour). His need to survive forced down the engineer's winnings.
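If you want to check his arithmetic, here's a toy version of it in Python. The eight percent edge, the $5,000 and $500 daily takes, and the 100 four-hour days are his numbers from the story above; the starting bankroll is my own assumption, chosen so that $500 is eight percent of it.

```python
# A toy check of the track arithmetic above. The 8% daily edge, the
# daily takes, and the 100 four-hour days come from the story; the
# starting bankroll is an assumption for illustration.

EDGE = 0.08           # his consistent daily return
DAYS, HOURS = 100, 4  # racing days per year, hours per day

for daily_take in (5_000, 500):
    print(f"${daily_take:,}/day -> ${daily_take * DAYS:,}/year, "
          f"${daily_take / HOURS:,.0f}/hour")

# What "the miracle of compound interest" would have added had he
# re-invested everything instead of skimming a fixed amount off the top:
bankroll = 6_250  # assumed stake: $500/day is 8% of $6,250
print(f"${bankroll:,} compounded at 8%/day for {DAYS} days: "
      f"${bankroll * (1 + EDGE) ** DAYS:,.0f}")
```

Run it and that last line comes out around $13 million, which is why capping himself at $500 a day felt so bitter: the parimutuel pool and the unsavory characters, not the math, set his ceiling.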

The engineer's expert system predated off-track betting, so I'm guessing he or some descendant is making great money today sitting on a bar stool in Vegas. The variables he used haven't changed, so the system should still be good. Which is to say the other bettors haven't gotten any smarter. Why should they? Our brains haven't evolved.

Lewis presents a very interesting illustration of decision-making tradeoffs based on an actual case. Henry Kissinger was trying to achieve peace between Israel and Syria, and the Israeli government asked Kahneman and Tversky to recommend possible decisions based on likely outcomes of Kissinger's work. For example, they estimated that a failure by Kissinger would increase the likelihood of a new war by about 10 percent. Kissinger fails and war is 10 percent more likely.

Lewis writes: “Foreign Minister Allon looked at the numbers and said, ‘Ten percent increase? That is a small difference.’ Danny was stunned: if a 10 percent increase in the chances of full-scale war with Syria wasn’t enough to interest Allon in Kissinger’s peace process, how much would it take to turn his head? That number represented the best estimate of the odds. Apparently, the foreign minister didn’t want to rely on the best estimates. He preferred his own internal probability calculator: his gut. ‘That was the moment I gave up on decision analysis,’ said Danny. ‘No one ever made a decision because of a number. They need a story.’ As Danny and Lanir wrote, decades later, after the U.S. Central Intelligence Agency asked them to describe their experience in decision analysis, the Israeli Foreign Ministry was ‘indifferent to the specific probabilities.’ What was the point of laying out the odds of a gamble if the person taking it either didn’t believe the numbers or didn’t want to know them? The trouble, Danny suspected, was that ‘the understanding of numbers is so weak that they don’t communicate anything. Everyone feels that those probabilities are not real—that they are just something on somebody’s mind.’”
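Danny's reaction is easier to feel with the arithmetic he was presumably doing in his head. Here's a minimal expected-value sketch; the 10-point shift is the estimate from the story, while the baseline probability and the cost figure are invented purely for illustration.

```python
# A minimal expected-value sketch of the exchange above. The 10-point
# increase comes from the story; the baseline probability and the
# cost of a full-scale war are invented purely for illustration.

P_WAR_BASELINE = 0.20                   # assumed chance of war anyway
P_WAR_IF_FAIL = P_WAR_BASELINE + 0.10   # the analysts' estimate
COST_OF_WAR = 1_000_000                 # pick any unit you like

for label, p in (("Kissinger succeeds", P_WAR_BASELINE),
                 ("Kissinger fails", P_WAR_IF_FAIL)):
    print(f"{label}: expected cost = {p * COST_OF_WAR:,.0f}")

# The difference is 0.10 * COST_OF_WAR: in expectation, a tenth of a
# full-scale war. "A small difference" only if you don't believe the
# number -- which was exactly Danny's complaint.
```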

Which brings us finally to the Post-Decision Age.

Thinking about decision analysis and how it simply isn’t compelling to decision makers, there’s one place where I believe that’s not true — Google. At Google it is all about the algorithm and the algorithms are (deliberately, I’m beginning to think) so complex that the whole issue of Kissinger’s failure leading to a 10 percent increase in peril is avoided. It’s avoided because the probability relationship is too complex to be stated in a single sentence and so nobody involved even bothers to decide whether the analysis is actionable or not: they just do it. At Google they do what the algorithm tells them to do. So the algorithm is, itself, in charge until enough time passes that a preponderance of data makes it clear the algorithm has failed. But even then they don’t reject the algorithmic approach, they just revise the algorithm.

This, I believe, is the trend. As humans we're pretty much all instinct shooters, but the optimization of complex systems requires sight shooters. If we can't become sight shooters ourselves, we use machines to do the work. And if the machines fail we don't reject them, we improve them.

On October 5, 1960, the U.S. nuclear command center NORAD received signals from its early warning radar in Thule, Greenland, indicating that a massive Soviet nuclear attack on the U.S. was underway—with a certainty of 99.9 percent. What the radar was actually seeing was the Moon rising over Norway. Luckily, nuclear armageddon was somehow averted but we didn’t throw out the computer, we taught it about the Moon.

The print version of this gig began for me 29 years ago in 1987. Reagan was President and you could still buy a new IBM PC-AT. And that fall Wall Street suffered its first Flash Crash. “Black Monday” is what they called October 19, 1987, when the Dow dropped 22 percent in one day. That was 508 points back then, equivalent to a 4,180-point drop tomorrow (22 percent of a Dow near 19,000). Are you ready for a drop like that? Nobody is.

The crash was primarily caused by program trading — computers deciding to sell stocks in order to minimize losses and conserve portfolio value. No humans were really involved until it came time to fix the mess. The problem was that every trading program was ignorant of the fact that there were other trading programs. Stocks would drop, programs would sell some shares, stocks would drop even more because of those sales and the sales of other programs, so the programs would sell even more. Wash, rinse, repeat. They were selling against each other and would have taken the market to near zero if humans hadn't intervened to stop trading. Changes were made to program trading afterward, but it wasn't eliminated. In fact program trading today supplies most of the volume on Wall Street.
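That wash-rinse-repeat loop is simple enough to fake in a few lines of Python. This is a toy model, not how 1987 program trading actually worked: the number of programs, the per-sale price impact, and the initial dip are all invented, and the 22 percent floor stands in for the humans who finally stopped trading.

```python
# Toy model of the 1987 feedback loop described above -- not how real
# program trading worked, just the wash-rinse-repeat dynamic. Every
# program sells into a falling market, and every sale pushes the price
# down for the programs still waiting to act.

N_PROGRAMS = 5   # trading programs, each ignorant of the others
IMPACT = 0.02    # assumed price impact of each program's selling
FLOOR = 78.0     # humans halt trading after a 22% drop

price = 100.0 * (1 - 0.03)  # a modest dip starts the cascade

round_no = 0
while price > FLOOR:
    round_no += 1
    for _ in range(N_PROGRAMS):  # each program reacts to the drop...
        price *= 1 - IMPACT      # ...and deepens it for the next one
    print(f"round {round_no}: price {price:6.2f}")

print(f"humans halt trading after round {round_no}, "
      f"{100 - price:.0f}% below the open")
```

Three rounds in, the toy market has shed more than a Black Monday-sized chunk. And the post-1987 fix was essentially that last line: circuit breakers that pause trading, not the removal of the programs.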

So we can make policies but we can’t implement them at any scale without help. And with the rise of machine learning, Big Data, and the relentless advance of Moore’s Law, computers have become so fast and so smart that we think they can sight shoot at instinct speeds. And maybe they can, but it’s for sure that normal humans no longer understand the underlying algorithms and may be unable to regain control.

We still make decisions. We decide what to wear and what to eat and maybe where to work or go to school, but most of the decisions that are made about us are made by machines. (Is the IRS going to audit you this year? Will your kid be accepted to Ohio State University?) And this trend has so deeply infected society that I think it can never change.

Let me give two backward examples — Apple and IBM. IBM has screwed itself by blatantly ignoring technological realities that are obvious (and machine-derived) at Amazon, Google, and Microsoft. IBM is Old School in its decision-making and is suffering dearly for it. Apple, on the other hand, appears to be paralyzed. Fortunately Cupertino is for now locked on a profitable path, but are those guys making decisions at all? Not really, not much, and it will eventually bite them.

Welcome to the Post-Decision Age. What do you think?