Tuesday, April 10, 2012
Why do we refuse to see the flaws in our beliefs?
He discusses Swiss scientist Daniel Bernoulli's contributions to utility theory, and how the theory has a major flaw: it lacks a reference point. For example, when asking someone to make a decision involving possible gains or losses (e.g., whether one would prefer $100 for sure or a 50/50 chance of winning $200), it ignores where the person stands at that point: what their current wealth level is.
After offering some examples of why this oversight is quite relevant, he writes "All this is rather obvious, isn’t it? One could easily imagine Bernoulli himself constructing similar examples and developing a more complex theory to accommodate them; for some reason, he did not. One could also imagine colleagues of his time disagreeing with him, or later scholars objecting as they read his essay; for some reason, they did not either. The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long."
Kahneman further acknowledges that "I can explain it only by a weakness of the scholarly mind that I have often observed in myself." He names this condition "theory-induced blindness," which means that "once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws. If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing. You give the theory the benefit of the doubt, trusting the community of experts who have accepted it."
"Many scholars have surely thought at one time or another of stories ... [that] did not jibe with utility theory. But they did not pursue the idea to the point of saying, 'This theory is seriously wrong because it ignores the fact that utility depends on the history of one’s wealth, not only on present wealth.' As the psychologist Daniel Gilbert observed, disbelieving is hard work."
When I read this part of the book, it resonated strongly with me: the notion of "theory-induced blindness" applies to various things we do in our part of the industry, which we accept as gospel even though they have serious flaws.
I have shared some of them here, as well as in The Spaulding Group's monthly newsletter. For example, the Global Investment Performance Standards (GIPS(R)) continue to require asset-weighted composite returns, and some actually champion the use of the obviously and seriously flawed aggregate method. And too many still see time-weighting as the only valid way to derive portfolio rates of return.
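To make the time-weighting point concrete, here is a minimal sketch with hypothetical numbers (they are my own illustration, not from the post or the Standards): the same portfolio can show a positive time-weighted return and a negative money-weighted return when a large contribution arrives just before a decline, because time-weighting deliberately ignores the size and timing of the investor's cash flows.

```python
# Hypothetical scenario: portfolio starts at $100 and grows 10% to $110;
# the investor then contributes $100 (total $210); the portfolio falls 5%
# to $199.50 by period end.
begin_value = 100.0
end_value = 199.50
cash_flow = 100.0   # contribution, assumed to arrive halfway through
weight = 0.5        # fraction of the period the contribution was invested

# Time-weighted return: chain-link the sub-period returns, which
# neutralizes the effect of the cash flow entirely.
twr = (1.10 * 0.95) - 1

# Money-weighted return, approximated here with the Modified Dietz
# formula: gain over average invested capital.
mwr = (end_value - begin_value - cash_flow) / (begin_value + cash_flow * weight)

print(f"Time-weighted return:  {twr:.2%}")   # 4.50%
print(f"Money-weighted return: {mwr:.2%}")   # -0.33%
```

The investor actually lost money on a dollar basis, yet the time-weighted figure is positive; whether that is the "right" answer depends on the question being asked, which is exactly why treating one method as the only legitimate one is questionable.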
It reminds me of the story of the economics professor and the student who were walking together on campus. The student noticed what he thought was a $20 bill on the ground, but the professor said it couldn't be a $20 bill, because if it were, someone would have picked it up already. Later, the student went back, retrieved it, and bought some beer.
If these methods were flawed, surely someone would have done something about it long ago, right? Those who made these decisions were bright and admired people, and they had to have sound reasons for what they did, right?
Isn't it time we stopped ignoring the flaws and took the blinders off? We should recognize when we are victims of theory-induced blindness, shouldn't we?
p.s., It occurred to me that an example might help. While driving my wife to her office this morning (we're going to see Mana in concert at Madison Square Garden tonight!), I spoke to her about this post, and used this example, taken from the book.
Mary and Bob both have $2 million today, so according to utility theory they should be equally happy. However, if you learn that last week Mary had $1 million (her wealth doubled) and Bob had $4 million (his wealth was cut in half), would you still think they feel the same? The absence of a reference point weakens utility theory's assessment.
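The contrast can be sketched in a few lines of code. The `utility` and `reference_value` functions below are my own illustrative choices (log utility, and a loss-averse value function over the *change* in wealth, loosely in the spirit of prospect theory), not formulas from the book:

```python
import math

def utility(wealth):
    # Bernoulli-style utility of total wealth (log utility):
    # depends only on the current state, not on how one got there.
    return math.log(wealth)

def reference_value(wealth, reference):
    # Hypothetical reference-dependent value of the CHANGE in wealth,
    # with losses weighted roughly twice as heavily as gains.
    change = (wealth - reference) / 1_000_000  # scale to $ millions
    if change >= 0:
        return math.log1p(change)
    return -2.0 * math.log1p(-change)

mary_now, mary_before = 2_000_000, 1_000_000   # wealth doubled
bob_now, bob_before = 2_000_000, 4_000_000     # wealth cut in half

# Utility theory: identical, since both hold $2 million today.
print(utility(mary_now) == utility(bob_now))   # True

# Reference-dependent view: Mary (a gain) feels far better than Bob (a loss).
print(reference_value(mary_now, mary_before) >
      reference_value(bob_now, bob_before))    # True
```

Both evaluations are applied to the same current wealth; only the second one sees any difference between Mary and Bob, because only it carries a reference point.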