
“No, Mr. Bond, I expect you to die!”: probability, planning fallacy, and why your project is $2 billion over budget

Sherlock Holmes is ridiculous. Indiana Jones and James Bond would have long since died. And–believe it or not–the reason why is the same fundamental reason that all kinds of projects end up taking longer than we expect. Intrigued? Read on.

“No, Mr. Bond, I expect you to die!” — And die he ought to have

I am a fan of James Bond (as in an I’ve-collected-all-the-movies-available-on-cheap-VHS level of fan). But probability convinces me he should be dead. Like many fictional pop-culture heroes, he is thrust into situation after situation in which he barely escapes death. Of course, that’s part of the thrill. The close escape and its associated thrill is exemplified in one of the Indiana Jones films: fleeing a steadily collapsing stone chamber, Indiana snatches his signature fedora just before a stone wall slams down where his hand was moments earlier.

The problem? Each close call implies a nonzero probability that the outrageous escapade will fail—which may be negligible once, but not if compounded over and over. And it’s only because we’re really bad at thinking probabilistically that we can suspend our disbelief and enjoy such escapism.
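To put rough numbers on it, here is a minimal sketch in Python; the 95%-per-escape figure is my own assumption, not anything implied by the films:

    # Illustrative only: assume each close call independently has a 95% chance of ending well.
    per_escape = 0.95
    for n in (1, 5, 10, 20, 50):
        survival = per_escape ** n  # probability of surviving all n escapades
        print(f"{n:3d} close calls -> {survival:.1%} chance of still being alive")

At those quite generous odds, a hero who has survived twenty close calls had only about a one-in-three chance of making it this far, and after fifty the franchise is almost certainly over.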

An innumerate savannah

Humans didn’t evolve to be accountants. Nobel Prize-winning psychologist Daniel Kahneman showed that we are systematically biased when reasoning with numbers. Two examples follow. Note that in both cases, we do not immediately see the connection between the math and reality.

  • The representativeness heuristic. If we’re asked to judge whether somebody is more likely to be a librarian or a salesperson, we base our judgment almost solely on a description of the person and ignore the relative number of people in each profession. If a description seems representative of the stereotype of a librarian, we tend to ignore the fact that there are vastly more salespeople than librarians—even when the relevant numbers are prominently provided.
  • Overconfidence. If we’re asked to judge whether a series of propositions are true, like “Dolphins have gills: true or false?”, or to decide how confident we are that the Earth is between 10 million and 100 million miles from the Sun, we are overconfident. We may be right only 80% of the time we say we’re 95% confident.

The specific failure of quantitative reasoning at the core of this blog post is our failure to realize how unlikely it is for a long series of events to all come out successfully {*1}. The implications of this failure go well beyond fictional heroes, however. This simple bias can also explain why projects tend to run over schedule and over budget.

The planning fallacy costs humanity billions each year

When construction of Denver International Airport finally finished, it was two billion dollars over budget. The Sydney Opera House was completed ten years late and roughly fourteen times over budget. These may be extreme examples, but it is commonplace for projects to take more time and more money (often, many times the initial budget) than planned. The term “planning fallacy” is shorthand for this pervasive tendency to underestimate completion times and costs. But why do plans fail?

… and it’s (partly) because we’re bad at probability

Our trouble with understanding probability can help explain why {*2}. Before getting there, it needs to be pointed out that there is one and only one way for things to go according to plan, but many ways for things to be dragged out. That is the simple but inexorable logic that is invariably overlooked. Breaks that run long, sickness, lack of motivation, red tape: whatever form it takes, Murphy’s Law is pervasive. And yet we don’t always account for the inevitable wrench thrown in the works. Why not?

We must look deeper to get to the bottom of it. A major reason, I propose, is that we underestimate the cumulative likelihood of many things all coming out heads, figuratively speaking. (In the interest of fairness, there is a psychological component that may or may not be fully explainable by this deeper mechanism, but the planning fallacy per se is beyond the scope of this post.) To illustrate, assume that significant projects usually have a large number of stages, and for the sake of argument say that each stage has a 90% chance of succeeding. What is usually overlooked is that if there are many stages, you have to multiply 90% by itself many times over, and you may end up with a small chance of the project being seen through to the end as planned. Having to compound in this manner is what I mean by “cumulative likelihood”.
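Here is a minimal sketch of that compounding, using the same arbitrary 90%-per-stage figure from the paragraph above:

    # Hypothetical project: each stage independently has a 90% chance of going to plan.
    per_stage = 0.90
    for stages in (1, 5, 10, 20, 30):
        on_plan = per_stage ** stages  # chance that every single stage goes to plan
        print(f"{stages:2d} stages -> {on_plan:.1%} chance of finishing exactly as planned")

With twenty stages, the chance of everything going to plan is already down to roughly 12%, even though each individual stage looks comfortably likely to succeed.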

Quick recap

To summarize: the planning fallacy arises from many factors, one of which, I propose, is our failure to intuit probabilities in the specific case of compounding outcomes. At the beginning of this post, I illuminated another context in which we’re probably unaware of this cognitive bias: pop culture. In that case, however, it may not be such a bad thing if it allows us to enjoy thrilling heroics. There’s one more context, though, and that is the entire endeavor of rhetoric and argumentation itself.

Impressively long arguments suck (don’t trust them)

When we construct elaborate arguments, we don’t realize that more may be worse. Not all the time, of course, but when the “more” is more links in a chain of argument, then yes, there may be a hidden danger. If we are sometimes wrong about the validity of making a leap from A to B, and we then make the leap from B to C, C to D, … and Y to Z, then there are that many more chances to trip. We tend to consider a longer chain of reasoning more impressive, but I am saying that we should consider exactly the opposite. A longer chain just gives us more rope to hang ourselves with.
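To see how quickly a chain decays, here is a small sketch; the 95% per-link reliability is an assumption chosen purely for illustration:

    # Suppose we get each individual inference (A->B, B->C, ...) right 95% of the time.
    per_link = 0.95
    for links in (1, 3, 10, 25):  # going all the way from A to Z is 25 links
        whole_chain = per_link ** links  # probability that every link in the chain holds
        print(f"{links:2d} links -> {whole_chain:.1%} chance the whole chain holds")

At that per-link reliability, an argument running the full alphabet from A to Z holds up only about 28% of the time.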

Even I don’t think this should be applied without qualification, but I do think it follows undeniably that there is a hidden danger in adding more links to a chain of argument. Sherlock Holmes’s reasoning is a perfect example. I am not doubting the brilliance of that fictional character or any real-world analogues, but I am suggesting that the reason we never hear of such long chains of reasoning being used in real-world crime-solving is that they wouldn’t work, for the very reason I just outlined.

Conclusion #1: how to be the cleverest dick in the room when watching an adventure film

It’s a favorite technique of police dramas to have the protagonists solve the case in the nick of time, arriving just as the culprit is about to claim his next victim. What you can point out is that a close call implies a higher probability of failure.

In other words, by showing us how close a call was, the scriptwriters (or whoever) are showing us just how stupid the protagonist really is.

Conclusion #2: fighting the planning fallacy

I would just like to pass on the advice that by putting a bit of trust in more objective methods, we will come up with better projections than if we focus only on the individual case before us. One such “objective” method is to remember or find out how long similar projects have usually taken in the past–a figure that already has the bumps in the road baked in–and use that when we make plans which, if formulated badly, just might end up costing us an extra two billion dollars.
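As a minimal sketch of that advice (forecasters sometimes call it taking the “outside view”), here is one way to use past projects to adjust a naive estimate. The overrun ratios and the planned cost below are made-up numbers, purely for illustration:

    # Reference-class sketch: scale our optimistic "inside" estimate by how similar
    # past projects actually turned out relative to their original plans.
    past_overrun_ratios = [1.4, 2.1, 1.2, 1.8, 3.0]  # actual cost / planned cost, hypothetical data
    typical_overrun = sorted(past_overrun_ratios)[len(past_overrun_ratios) // 2]  # median ratio
    planned_cost = 100_000  # our naive estimate, in dollars
    adjusted = planned_cost * typical_overrun
    print(f"Planned: ${planned_cost:,}  ->  outside-view forecast: ${adjusted:,.0f}")

The point is not the specific numbers but the habit: let the track record of similar projects, rather than our optimism about this one, set the baseline.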

{footnotes}

*1 This doesn’t necessarily apply, at least not as strongly, if the events are statistically dependent, as they usually are, but that’s beyond the scope of this post, and the broader point still stands.

*2 A missing piece of the puzzle, if you’re interested in pursuing a tangent, is how to think about ‘being bad at probability’ and ‘failing to see the connection between numbers and reality’. Our beliefs fail to propagate – that is, when we learn something new, the implications of that new piece of knowledge fail to fully propagate throughout our entire belief network. Lesswrong.com introduced me to that idea.

*3 One more note: this post is edited and re-arranged material, originating as an essay I wrote for a rhetoric class last fall.

  1. Mark Hausam
    August 16, 2012 at 7:06 pm

    Good observations!

    “When we construct elaborate arguments, we don’t realize that more may be worse. Not all the time, of course, but when the “more” is more links in a chain of argument, then yes, there may be a hidden danger. If we are sometimes wrong about the validity of making a leap from A to B, and we then make the leap from B to C, C to D, … and Y to Z, then there are that many more chances to trip. We tend to consider a longer chain of reasoning more impressive, but I am saying that we should consider exactly the opposite. A longer chain just gives us more rope to hang ourselves with.”

    I agree with you here, so long as it is not implied that the fact that a chain of arguments is long inherently should lead us to think of the overall case as weaker. You rightly point out that the longer the chain, the more that can go wrong, and so the more careful we should be in evaluating the overall argument. However, there is nothing inherent in a chain of arguments being long that makes it less cogent and valid reasoning. What has to be done is the same as with a short chain of arguments–each step in the chain must be carefully and independently evaluated. After this is done sufficiently, and if all the links pass the test, it is entirely proper to declare the overall conclusion of the argument sound and valid, etc. If one or more steps fail the test, this may invalidate the overall argument, while other steps may still be correct.

    I am not saying your article missed this point. You were careful to be clear, and I think we are in agreement. I just thought this point deserves a bit more emphasis so as to avoid encouraging people to hastily abandon any reasoning that seems too long or complicated to them.

    • August 16, 2012 at 7:33 pm

      Thank you for emphasizing that.
      I agree it should be emphasized, because after all any body of knowledge – say, math or physics – depends on dozens of inferences or ‘inferential steps’ that may not be obvious to just anyone, who might be skeptical of your ‘special knowledge’ (As LessWrong points out, in the ancestral environment, there *were* no inferential steps: tinyurl.com/d92qbyb )

      On the other hand, I’m still not sure what to think….. You see, because I do not trust even the most careful, rational human-verifier 100% on any given step, the problem *will* remain. Somehow. To take it to the limit, I would *never* trust an argument with a million or more steps.
      Like I said, I’m still not sure what to think.

      My first thought would be to have multiple people independently and carefully verify the same long chain of reasoning, thus drastically reducing the probability of human-verification-error.
      My second thought is to have a machine do the verifying, requiring that the knowledge be formalized (i.e. put in a formal system, like math).

  2. August 16, 2012 at 8:52 pm

    This post reminds me of one of the posts on a Less Wrong sequence. The similarities are striking. Well written.

    • August 16, 2012 at 9:23 pm

      Thanks for reading Varun!

      I do like LessWrong and I can see what you mean by *some* similarities, such as discussing probability, the planning fallacy, and bias. And perhaps most importantly, writing in an engaging, *human* way. However, I haven’t read any Sequences for quite a while. If you read my other posts you’ll see it’s in my own voice.

      i.e. tell me what you think of a few of my other recent posts! and thanks!

