Review of Woo-Kyoung Ahn's book Thinking 101
I like reading books about cognitive biases periodically, even if it's mostly familiar material. I'm hopeful I'll eventually get better at detecting when they're clouding my judgment. (I don't think it's working yet.) This is a solid entry in the genre, though not my favorite.
Explanations. Ahn tries not just to describe the biases we have, but to explain why we have them. These explanations seem speculative, but they're fascinating:
- Illusion of fluency: Why do we overestimate our ability to do something after we've read about it or watched others do it? Because our brains need a way of "knowing whether [we] know something", and one heuristic we use for that is how familiar we are with it. But when (as in an example Ahn gives) we merely watch someone perform a dance, we become familiar with the dance without actually building the skill of performing it, so the heuristic misfires.
- Confirmation bias: Why do we seek evidence that would confirm what we already believe, when seeking evidence to disconfirm it is more helpful in actually finding the truth? It "might be a side effect of meeting our need to satisfice, stopping our search when it's good enough in a world that has boundless choices."
- Loss aversion: Why are we often more motivated to keep a thing once we have it, than we would be to acquire the same thing if we didn't? Possibly "our ancestors lived so close to the margins of survival, where losing something meant dying, so they had to prioritize the prevention of potential losses."
Strategies. Here are some of the book's suggestions for dealing with one's own biases. They seem sensible, though I was hoping for more.
- Confronting the illusion of fluency: Make yourself do the thing so you can assess your true competence level:
Some people think they're trying out skills when they're simply running the process in their heads and not using their physical muscles. When you imagine doing those dance steps or giving a presentation to your client, you are reinforcing the illusion. Everything flows smoothly in your mental simulation, feeding your overconfidence. You have to actually write down your presentation word for word and speak out loud using your tongue and vocal cords, or enact every movement of the dance using your arms, legs, and hips.
- Accounting for the planning fallacy: Encountering unforeseen obstacles is routine, so Ahn recommends assuming things will take you 50% longer than you expect. One strategy she warns is unreliable is making more detailed plans: she discusses one study in which "step-by-step plans exacerbated the effects of the planning fallacy" (participants were apparently overly optimistic about the individual steps, and thus became overconfident about the plan as a whole), but another in which detailed planning helped.
- Pitting confirmation bias against itself: Since we naturally seek evidence in favor of whatever hypothesis we're considering, it's helpful "to consider not just one but two mutually exclusive hypotheses and try to confirm both." Relatedly:
...ask a question framed in two opposite ways. For instance, in thinking through how happy you are with your social life, you can ask yourself whether you are happy or whether you are unhappy. These two questions inquire about the same thing, and should elicit the same response—like "I'm sort of happy"—no matter how the question is framed. Yet if you ask yourself whether you are unhappy, you are more likely to retrieve examples of unhappy thoughts, events, and behaviors. If you ask yourself whether you are happy, you are more likely to retrieve opposite examples.
- Using randomness to fight confirmation bias: Ahn mentions an app that software engineer Max Hawkins created to Uber him to random locations. Randomly doing something different from what you normally do increases the odds that you'll find evidence disconfirming your existing beliefs. I like Hawkins' statement about his practice of "Randomized Living" on his website: "I believe giving in to chance is a way to liberate yourself from personal and social programming that traps you in a narrow sense of self."
- Admitting we're bad at perspective-taking: Ahn cites multiple studies showing how astonishingly bad we are at correctly interpreting other people's words and accurately imagining what the world looks like from their point of view. We suck at it, we suck at it worse than we think we do, and it's not clear that there's any quick or easy way to get better at it. Thus, she advises:
Stop letting others guess what we think and just tell them. ... Likewise, stop trying to read people's minds and feelings. If you are a compassionate and accommodating person, it is particularly hard to resist the temptation to guess others' thoughts. But study after study has shown us how disastrous this can be. The only sure way to know what others know, believe, feel, or think is to ask them.
- Motivating yourself to wait for delayed gratification: Imagining a future period of your life in detail can give you more self-control. Evidence for this includes a study that asked people "to list events they had planned over the next seven months" and one in which women "listened to audio recordings of their own musings on good things that could happen to them in the future".
Examples. Ahn uses several politically charged examples. That makes sense; politics is perhaps the area of life where clear thinking is most important, since poor decisions hurt not just individuals but entire societies. But it's also risky. For most contentious issues, I don't think you can really cover all the relevant considerations in a short discussion. Pointing out that one side is committing a particular fallacy and declaring the other side the winner is overly simplistic, and problematic for two reasons. First, those who don't already agree with you will focus on the weaknesses or omissions in your discussion, and may dismiss your entire work as biased. Second, those who do already agree with you may develop the illusion that getting the right answers is easy, and subconsciously come to see cognitive biases primarily as tools for explaining why the other political party is so stupid, rather than as problems that almost certainly affect their own thinking too, in ways that are extremely difficult to detect and overcome.
In particular, consider Ahn's discussion of confirmation bias causing underrepresentation of women in science. She establishes the problem by citing a single anecdote about an award ceremony, not statistics. She makes broad statements like "society believes that men are better at science than women" and "[w]hen male students say something insightful during a seminar or in a class, they receive more compliments than female students who say similar things" without providing citations. Then, to establish that this is depriving society of scientific advances, she again uses just a single anecdote (that the first page of results for a search on "scientists who developed COVID-19 vaccine" mostly features women). The whole section comes across not as a serious investigation of a hypothesis, but as a recitation of the first few things that came to mind in support of a conclusion the author already believed. To be clear, I'm not saying she's wrong. But sloppy arguments are dangerous even when the conclusions are correct. If someone who is skeptical of gender discrimination in science reads this part of the book, I suspect it will make them more skeptical, by making it easier for them to assume that concerns about gender discrimination are typically rooted in lazy thinking.
An example I did like was her discussion of how job interviews encourage overreliance on a single sample:
...given that face-to-face interactions are vivid, salient, concrete, and memorable, interviewers think they are observing who the candidate truly is, rather than a biased portrayal of the person tinted by random factors. And this impression of a small sample of qualities on exhibit that particular day can make the decision-makers ignore the records that more accurately reflect the candidate's skills, demonstrated over many years. A person who looks amazing and brilliant during an interview may not be as awesome once they are hired. Given regression toward the mean, that is what we should, to some extent, expect. And a person who didn't perform brilliantly in an interview ... could turn out to be a big catch the company missed.
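The regression-toward-the-mean point is easy to see in a quick simulation. This is my own sketch, not anything from the book, and the model is all assumption: each candidate has a stable true skill, any single observation (an interview, one day on the job) adds Gaussian noise, and whoever tops a noisy interview was probably lucky as well as good.

```python
import random

random.seed(0)  # reproducible runs

N_CANDIDATES = 100
NOISE = 1.0  # day-to-day luck, on the same scale as the skill spread (assumption)

# Each candidate has a stable "true skill"; any single observation
# (an interview, one day on the job) is that skill plus random noise.
skills = [random.gauss(0, 1) for _ in range(N_CANDIDATES)]

def observe(skill):
    """One noisy sample of a candidate's performance."""
    return skill + random.gauss(0, NOISE)

# Hire whoever looked best in a single interview...
interview_scores = [observe(s) for s in skills]
hire = max(range(N_CANDIDATES), key=lambda i: interview_scores[i])

# ...then average many days of on-the-job performance, which converges
# back toward the hire's true skill.
on_the_job = sum(observe(skills[hire]) for _ in range(50)) / 50

print(f"interview score of hire: {interview_scores[hire]:+.2f}")
print(f"true skill of hire:      {skills[hire]:+.2f}")
print(f"avg later performance:   {on_the_job:+.2f}")
```

Run it a few times: the hire's later average almost always lands below their interview score, which is exactly the "may not be as awesome once they are hired" effect.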
Studies. Some interesting studies Ahn references:
- Kardas & O'Brien 2018: Repeatedly watching a video of Michael Jackson do the moonwalk increased people's belief that they could do it, but not their actual ability.
- Fisher et al 2015: People who had recently used Google to answer some questions had more confidence in their answers to other, unrelated questions than people who hadn't.
- Wason 1960: Peter Wason's 2-4-6 task neatly demonstrates confirmation bias. (If you're not familiar with it, click here to try it; there's also a small code sketch after this list, though it spoils the hidden rule.)
- Ahn et al 2019: College admissions officers valued the absence of B+'s more than the presence of A+'s when comparing students with identical GPAs.
- Fryer et al 2012: Promising teachers a $4000 bonus if their students' test performance improved was not effective. But giving them a $4000 bonus up-front which they'd have to pay back if student performance didn't improve had a significant effect. (Ahn isn't endorsing this as a policy recommendation, just using it as an illustration of negativity bias.)
- DeWall et al 2015: Acetaminophen removed people's tendency to assign a higher sale price to objects they own than to objects they do not.
- Kahan et al 2017: When interpreting made-up data about the effects of a skin cream, people who had demonstrated higher numeracy were better at making correct inferences from the data than others. However, when the same data was labeled as being about the effects of gun control, Ahn notes that "people with stronger quantitative reasoning abilities used them only when the data supported their existing views." (Another sketch after this list walks through the kind of 2x2 comparison the skin-cream problem involves.)
- Savitsky et al 2011: People were no better at interpreting the meaning of ambiguous sentences when they were spoken by a friend or spouse than when they were spoken by a stranger.
- Birch & Bloom 2007: A complicated experiment demonstrated that adults, not just toddlers, have difficulty setting aside facts they know (but that someone else doesn't) when predicting how that person will behave.
- Eyal et al 2018: 24 experiments showed that just mentally trying to view things from someone else's perspective doesn't help you reach accurate conclusions about them at all.
- Grosch & Neuringer 1981: Pigeons are more willing to choose a better delayed reward over a worse immediate reward if they have a superfluous task to distract themselves with during the waiting period.
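Since the 2-4-6 task is so central to the confirmation bias discussion, here's a minimal sketch of it in Python. It's mine, not the book's, and fair warning: the code necessarily hardcodes the hidden rule, so skip it if you want to try the task unspoiled.

```python
def follows_rule(a, b, c):
    """Wason's hidden rule (spoiler): any strictly increasing triple."""
    return a < b < c

# 2-4-6 fits, so most people hypothesize "even numbers ascending by 2"
# and then test only triples their hypothesis predicts will PASS:
for triple in [(2, 4, 6), (8, 10, 12), (20, 22, 24)]:
    print(triple, follows_rule(*triple))  # all True, so the hypothesis "survives"

# Disconfirming tests, i.e. triples the hypothesis predicts should FAIL,
# are far more informative:
print((1, 2, 3), follows_rule(1, 2, 3))  # True: "ascending by 2" was wrong
print((3, 2, 1), follows_rule(3, 2, 1))  # False: narrows down the real rule
```

The bias is that testing only (8, 10, 12)-style triples can confirm your hypothesis forever without ever revealing that the real rule is much broader.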
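And on the Kahan et al study: the skin-cream problem hinges on a 2x2 table in which the larger raw counts point the wrong way, so the correct inference requires comparing proportions. The numbers below are illustrative, structured like the study's materials rather than taken from them:

```python
# Hypothetical counts in the style of the skin-cream problem
# (illustrative numbers, not the study's actual data).
improved_with_cream, worsened_with_cream = 223, 75
improved_no_cream, worsened_no_cream = 107, 21

# Naive reading: 223 > 107, so the cream "obviously" works.
# Correct reading: compare improvement *rates* within each group.
rate_cream = improved_with_cream / (improved_with_cream + worsened_with_cream)
rate_no_cream = improved_no_cream / (improved_no_cream + worsened_no_cream)

print(f"improvement rate with cream:    {rate_cream:.0%}")    # ~75%
print(f"improvement rate without cream: {rate_no_cream:.0%}")  # ~84%
# The no-cream group did better, despite its smaller raw count of improvers.
```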