Daniel Kahneman deserved to win the Nobel Prize in Economics for his contributions to “behavioral economics.” He and collaborator Amos Tversky (who died before he could receive the award) used psychological insights to explain “irrational” behavior that economists had pre-emptively (and unrealistically) dismissed. Their most important contributions concern “cognitive bias,” i.e., the ideas that we focus on some, but not all, of the costs and benefits of a decision and that we give more weight to potential losses than gains (“prospect theory”).
I read Kahneman’s 2011 book over several months because it was long (499 pages), thorough, and repetitive.
My top-line recommendation is that you read this insightful book, but I suggest you take a chapter per day (or week) to allow yourself time to digest — and experience — the ideas. (Alternatively, print this review and read one note per day!)
Here are many notes I took:
Kahneman suggests that we process decisions by instinct (System 1 thinking, or “guts”) or after consideration (System 2 thinking, or “brains”). The important point is that each system has its advantages, but trouble arises when we apply one in the wrong situation. It makes sense to order food that “feels right” but not to buy a new car on that basis. A car (or job or house) decision involves many factors that will interact and develop over years. We cannot predict all these factors, but we can give them appropriate weights if we’re patient about listing and considering them.
Salespeople will try to put you at ease (using guts) when they want you to trust them on a decision. It makes sense to either be paranoid or, better, take your time to use brains. That said, we make better gut decisions when we’re happy. When we’re sad or angry, we misread our feelings and make (hasty) mistakes.
Kahneman says we often fail to look beyond “what we see is all there is” when considering a situation. This leads to gut responses that may be too shallow to do us any good. (Nassim Taleb discusses these ideas in Fooled by Randomness.) This is why people wildly overestimate the risk of violent death: the media loves to cover exotic stories.
When judging candidates on looks (which is what MANY voters do), people vote for “competence” (strong, trustworthy) over “likability.”
People tend to think that beneficial technology has low risks and that low-risk technology brings more benefits. This may explain why most people don’t care about the risks of driving cars (far more dangerous than flying in airplanes) or using cell phones. It also suggests that policy changes (e.g., higher prices for water) will be more acceptable when they are small and reversible. After the sky does not fall, the “low risk” strategy can be expanded. That said, the measuring stick of risk can have a big impact on how people see it.
Bayesian reasoning: (1) anchor your judgement on the probability of an outcome given a plausible base rate, then (2) question the accuracy of your evidence for that probability, as outcomes appear. The idea, in other words, is to take a stand and then reconsider it as new data arrive.
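The two-step process above can be sketched with Bayes’ rule. The numbers here are my own illustrative assumptions, not examples from the book:

```python
# Bayesian updating: anchor on a base rate, then revise as evidence arrives.

def update(prior, likelihood_if_true, likelihood_if_false):
    """Return posterior P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Anchor: assume 10% of startups in some niche succeed (hypothetical base rate).
prior = 0.10

# Evidence: a glowing press review, which (hypothetically) appears for 60% of
# eventual successes but also for 30% of eventual failures.
posterior = update(prior, 0.60, 0.30)
print(round(posterior, 3))  # a modest revision upward, not certainty
```

The point of the sketch is that weak evidence moves you only modestly away from the base rate; it does not license abandoning the anchor.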
People will pay more for a “full set of perfect dishes” than for the SAME set with some damaged dishes — violating the “free disposal” assumption of economic theory, i.e., we can always dump excess. That’s why a house with newly painted empty rooms will sell for FAR MORE than one with old furniture and paint.
Stereotyping is bad from a social perspective, but we should not ignore the information included in group statistics. Looking from the other direction, people are far TOO willing to assume the group behaves as one individual. (When I was traveling, I learned “not to judge a person by their country nor a country by one person.”)
Passing the buck: People “feel relieved of responsibility when they know others have heard the same request for help.” This fact explains the importance of putting one person in charge, as well as the need to ask the person hearing a request to commit to an action and a deadline.
“Regression to the mean” happens, e.g., when a “hot streak” is replaced by average performance, but we usually explain it as burnout, too much concentration, etc. Try that with coin flips.
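The coin-flip version is easy to simulate. In this sketch (my own construction, not the book’s), we pick the “hottest” flippers in round one and watch their round-two average fall back toward chance:

```python
# Regression to the mean with fair coins: round-1 "hot streaks" are luck,
# so the same players regress toward 50% in round 2. No burnout required.
import random

random.seed(42)
n_players, n_flips = 1000, 20

round1 = [sum(random.random() < 0.5 for _ in range(n_flips)) for _ in range(n_players)]
round2 = [sum(random.random() < 0.5 for _ in range(n_flips)) for _ in range(n_players)]

# Select roughly the top decile of round-1 scores (the "hot streak" group).
cutoff = sorted(round1, reverse=True)[n_players // 10]
hot = [i for i, score in enumerate(round1) if score > cutoff]

avg_hot_r1 = sum(round1[i] for i in hot) / len(hot)
avg_hot_r2 = sum(round2[i] for i in hot) / len(hot)
print(avg_hot_r1, avg_hot_r2)  # round-2 average regresses toward 10 (50% of 20)
```

No flipper has any skill, yet the “hot” group looks like it collectively lost its touch.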
“Leaders who have been lucky are not punished for taking too much risk… they are credited with flair and foresight” [p204]. Two of three mutual funds underperform the market in any given year, but lucky managers (and their investors) cling to their “illusion of skill.”
Don’t forget that successful stock traders find undervalued companies, not good companies whose shares may already be overpriced.
Philip Tetlock interviewed 284 people “who made their living commenting or offering advice on economic and political trends.” Their predictions could have been beaten by dart-throwing monkeys — even regarding their specializations. When confronted with these results, they offered excuses; see “Bayesian” above.
“Errors of prediction are inevitable because the world is unpredictable” [p220].
Algorithms are statistically superior to experts when it comes to diagnosing medical, psychological, criminal, financial and other events in “uncertain, unpredictable” domains. (See my paper on real estate markets [pdf].)
Simpler statistics are often better. Forget multivariate regressions. Use simple weights. For example: Marital stability = f (frequency of lovemaking – frequency of quarrels).
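The formula above can be written as a deliberately crude predictor. The variable names and sample numbers are illustrative, not from the book:

```python
# "Simple weights" predictor per the review's example: equal, unit weights,
# no regression fitting, no interaction terms.
def marital_stability(lovemaking_per_week, quarrels_per_week):
    return lovemaking_per_week - quarrels_per_week

print(marital_stability(3, 1))  # positive score suggests stability
print(marital_stability(1, 4))  # negative score suggests trouble
```

The design choice is the point: fixing the weights at 1 and −1 removes the overfitting and false precision that multivariate regression invites.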
“Back-of-envelope is often better than an optimally weighted formula and certainly better than expert judgement” [p226].
Good (trustworthy) intuition comes from having enough time to understand the regularities in a “predictable environment,” e.g., sports competition. “Intuition cannot be trusted in the absence of stable regularities in the environment” [p241].
The “planning fallacy” leads one to believe the best-case prediction for an enterprise, even though enterprises rarely go down that path. Stop and reconsider, using less optimistic weights (read this book).
Overoptimism explains lawsuits, wars, scientific research and small business startups. Leaders tend to be overoptimistic, for better or worse. (Aside: I think men are more optimistic than women, which is why they discover and die more often.)
Want to plan ahead? “Imagine it’s one year in the future and the outcome of the plan was a complete disaster. Write a debrief on that disaster.” This is useful because there are more ways to fail than succeed.
Our attitudes towards wealth are affected by our reference point. Start poor, and it’s all up; start rich, and you may be disappointed. (If you own a house, decide if you use the purchase price or its “value” during the bubble.) You’re much happier going from $100 to $200 than $800 to $900.
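Diminishing sensitivity makes that last comparison easy to quantify. Here I use log utility as a stand-in (my assumption; it is not Kahneman’s exact value function) to show why the same $100 gain feels bigger from a lower reference point:

```python
import math

# Log utility as a rough proxy for diminishing sensitivity (an assumption,
# not the prospect-theory value function itself).
def felt_gain(start, end):
    return math.log(end) - math.log(start)

print(felt_gain(100, 200))  # doubling your wealth
print(felt_gain(800, 900))  # a much smaller proportional change
print(felt_gain(100, 200) > felt_gain(800, 900))  # True
```

Both moves add $100, but the first doubles the reference point while the second raises it by 12.5%.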
The asymmetry of losses/gains in prospect theory explains why it’s harder for one side to “give up” the exact same amount as the other side gains. This explains the durability of institutions — for better or worse — and why they rarely change without (1) outside pressure of bigger losses or (2) huge gains to compensate for losses. It also explains why it’s hard for invaders to win.
Economists often fail to account for reference points, and they dislike them for “messing up” their models. Economists whose models ignore context may misunderstand behavior.
We give priority to bad news, which is why winning $100 does not compensate for losing $100. Hence, “long-term success in a relationship depends on avoiding the negative more than seeking the positive” [p302].
People think it’s fairer to fire a $9/hr worker and hire a $7/hr worker than reduce the wages of the $9/hr worker. That may not be a good way to go.
“The sunk cost fallacy keeps people too long in poor jobs, unhappy marriages and unpromising research projects” [p345].
“The precautionary principle is costly, and when interpreted strictly it can be paralyzing.” It would have prevented “airplanes, air conditioning, antibiotics, automobiles…”
People may spend more time planning their vacation according to what they PLAN to remember than what they will experience. That may be because we remember high and low points but forget their duration.
“The easiest way to increase happiness is to control use of your time. Can you find more time to do the things you enjoy doing?” (I have the freedom to write this review, but it gets tedious after 3 hours…)
“Experienced happiness and life satisfaction are largely determined by the genetics of temperament,” but “the importance that people attached to income at age 18 anticipated their satisfaction with their income as adults” [pp400-401]. I am fortunate, I think, to have started life with low expectations. That makes it easier for me to make 1/3 the money in Amsterdam that I would in Riyadh. That’s because money is not as important to me as fun experiences.
That said, “the goals people set for themselves are so important to what they do and how they feel that… we cannot hold a concept of well-being that ignores what people want” [p402].
“Adaptation to a situation means thinking less and less about it” [p405].
[Paraphrased from p412]: Our research has not shown that people are irrational. It has clarified the shape of their rationality, which creates a dilemma: should we protect people against their mistakes or limit their freedom to make them? Seen from the other side, we may think it easier to protect people from the quirks of “guts” and laziness of “brains.” (Hence my support for a ban on advertising.)
“Brains” may help us rationalize “guts” but they can also stop foolish impulses — when we acknowledge the limits to our reason (and the information we draw upon to do so).
“Gut” feelings can guide us well if we can tell the difference between clear and complicated or obscured circumstances.
“An organization is a factory that manufactures judgements and decisions” [p417]. It’s important, therefore, to balance its “guts” and “brains” functions.
Bottom Line: FOUR STARS. Skip psychology and read this book if you want to understand yourself and others.