This book opens with a couple of statistics I found surprising and encouraging:
Annual US donations sum to $500 billion—about 2% of gross domestic product—and no less than 23% of Americans volunteer for a good cause1
Their source, the Giving USA report covering 2022, indicates that about $319 billion of this came from individuals (as opposed to foundations, bequests, and corporations), which I think is pretty cool.
I’m not quite as excited about where that $500 billion went. The largest subsector, receiving over 28% of the funds, was “religion”, which the managing editor of the report says “includes giving to congregations, religious media, and missions - it does not include religiously-motivated or inspired organizations.”2 By contrast, only about 10% went to “health” and less than 4% went to “environmental and animal organizations”. I don’t begrudge people their support of organizations they find spiritually meaningful, but I do wish human health and animal welfare causes were getting a larger slice of the pie.
Individual charities vary widely both in their ability to achieve their objectives and in the value of those objectives. The book says “most donations don’t go to the most effective charities—even though they can be at least 100 times more effective than the average charity…”3.
That 100x figure comes from a study that collected opinions from 45 “experts… in areas such as health economics, international development and charity measurement and evaluation”4. I imagined this study might work by selecting various charities, asking the experts to rate the effectiveness of each one, and then comparing the average ratings for each charity. If I understand correctly from chapter 2 and the paper itself, that’s not what they did; rather, they essentially asked each expert to provide an estimate of the gap between the world’s highest-effectiveness and average-effectiveness charities (whatever they may think those are). The study’s main point is that informed people think this gap is radically larger than uninformed people do. I would also be interested to know how much consensus there is among experts regarding the relative effectiveness of specific charities.
Because this gap between average and best is so enormous, the authors “believe that increasing the effectiveness of people’s help is more important than increasing the amount of resources (e.g., in the form of money or time) they allocate to others.”5
If someone switches to one of these highly effective approaches, their impact will increase hugely. By contrast, it’s hard to encourage people to increase the amounts that they give to others very substantially. For instance, convincing them to double their donations would typically be a tall order, and yet that would “only” double their impact—much less than we can achieve by increasing their effectiveness.6
I think the book’s target audience is primarily those of us who are already bought into effective altruism. So it’s less about arguing that people ought to base their donation decisions on effectiveness (though there’s some of that) and more about exploring the social and psychological factors that keep people from doing so—and how we can counter or work around those factors.
Most of this review will be a grab-bag of things I found interesting in the book, but I’ll start with what I felt most resistant toward. Chapter 4 (“Tough Prioritizing”) emphasizes the need to deprioritize less-effective charities, and is critical of splitting donations as opposed to going all-in on a highly effective charity (though a later chapter discusses how people’s desire to split donations can be incorporated into a scheme for encouraging more effective donations).
People don’t like to deprioritize charities that they feel are worthy of their help. And yet there is no other choice. If we want to prioritize the most effective charities, we have to deprioritize less effective charities. They are two sides of the same coin.7
Are they? I think there often is an “other choice”: increase your total donation budget. Financing a donation to a new charity doesn’t have to mean reducing your donations to other charities, unless you’re already giving the absolute maximum you can afford. Rather than alienating or offending people by telling them they ought to redirect their current giving away from ineffective charities, can we make effective charities seem sufficiently appealing that people will increase their total giving budget to make room for both? Or are most people already so close to their theoretical maximum charity budget that there’s just not enough juice left to squeeze, so to speak?
Above I quoted the introduction saying “it’s hard to encourage people to increase the amounts that they give…”, which does partly address this, but it doesn’t discuss any evidence on exactly how hard it is. I also quoted a bit about how “convincing them to double their donations … would ‘only’ double their impact”, but note that that doesn’t apply to what I’m suggesting here. If, for example, you’re giving $100 a year to an average-effectiveness charity, and I convince you to donate just a single extra dollar on top of that, but you give that dollar to a 100x-effectiveness charity, then you’ve doubled your impact despite only increasing your donations by 1%. Of course, it would be far better still for you to divert some of your original $100 to the more effective charity instead—I’m just saying that if you’re extremely resistant to doing that, the get-you-to-give-more-overall route might still be very promising.
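Here’s that arithmetic as a minimal sketch (the dollar amounts are mine, purely for illustration; the 100x multiplier is the expert estimate quoted earlier):

```python
# Illustrative numbers, not the book's: $100/year to an average-effectiveness
# (1x) charity, plus one extra dollar to a charity 100 times as effective.
baseline_donation = 100   # dollars/year to the average charity
extra_donation = 1        # the single additional dollar
multiplier = 100          # expert-estimated best-vs-average effectiveness gap

baseline_impact = baseline_donation * 1
total_impact = baseline_impact + extra_donation * multiplier

print(total_impact / baseline_impact)      # 2.0: impact doubles
print(extra_donation / baseline_donation)  # 0.01: giving rose only 1%
```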
I don’t think chapter 4 addresses that question, but chapter 6 briefly mentions a couple of studies addressing “whether effectiveness information increases total donations”8, which sounded partially relevant to the question I’m asking:
That second one mentions a study by David A. Reinstein which asks exactly the question I’m interested in, right in the title: “Does One Charitable Contribution Come at the Expense of Another?”. It’s pretty technical, though, and I’ll confess that the title is about as far as I got before my eyes glazed over, so I’m just going to quote from the summary of it in the “effect of effectiveness” paper:
Using a panel data set on charitable donations, Reinstein (2011) finds that larger donors have more “expenditure substitution” in charitable giving. He finds that a temporary shock such as a personal appeal that increases donations to one charity decreases donations to other charities for large donors but has little effect on other donation decisions by small donors. Reinstein suggests that small donors are responding primarily to temporary shocks or personal appeals, while large donors have other motives.11
I’m also not sure the chapter adequately discusses the “diversification” argument in favor of splitting donations. To me, the most persuasive reason for not sending 100% of my donations to the most-effective charity is the risk that the effectiveness estimates could be wildly incorrect. But the brief mention of this argument on page 68 more or less dismisses it without explanation.
Risk aversion is also discussed in chapter 5, but the focus there is on why you should maximize expected value given that you trust the probability estimates you’re using. My concern, by contrast, is the uncertainty we face regarding how accurate those probability estimates are. The following comments, from a section about the difficulty of comparing effectiveness across very different cause areas, indicate one way the authors might respond to my concern:
…the large differences in effectiveness between altruistic interventions… are to our advantage. If the differences had been small, then small errors in our estimates of effectiveness would have led us to get the relative effectiveness of different interventions wrong. Such errors would have led us to support less effective interventions over more effective interventions. But since the differences in effectiveness are in fact large, we can get the relative estimates right even if our estimates of the interventions’ absolute levels of effectiveness are somewhat off.12
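To put illustrative numbers on that claim (the figures below are mine, not the book’s): suppose charity A is truly 100 times as effective as charity B, and our estimates of each are off by a factor of three in the worst possible directions.

```python
# Hypothetical effectiveness values, chosen only to illustrate the point.
true_effectiveness = {"A": 100.0, "B": 1.0}

# Worst case: we underestimate A and overestimate B, each by a factor of 3.
estimated = {"A": true_effectiveness["A"] / 3,  # ~33.3
             "B": true_effectiveness["B"] * 3}  # 3.0

# The absolute estimates are badly wrong, but the ranking survives:
assert estimated["A"] > estimated["B"]
print(estimated["A"] / estimated["B"])  # ~11.1: A still looks far better
```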
But I would still worry that there could be fundamental problems in the process of estimating effectiveness that lead to the estimates being systematically and drastically wrong. I think one of the obstacles to effective giving that the book talks about—the “Overhead Myth”—could even be seen as an example of this happening in the past.13 People latched on to the idea that charities which minimize spending on overhead are better. The site CharityWatch, for example, prominently displays the overhead ratio on its page for any given charity. Yet “there is not much of a correlation at all between overhead and effectiveness”14 and in fact “the strong focus on overhead has several negative effects on the charity sector.”15 It sounds like trying to strictly maximize effectiveness would have been a bad idea for people operating under the mistaken belief that low overhead indicates effectiveness. EA-aligned charity evaluators know better than to make that particular mistake, and in general I trust them to produce high-quality estimates of effectiveness—but it still seems unreasonable to completely discount the possibility that their estimation processes are systematically skewed in dangerous ways by some as-yet-unrecognized mistaken beliefs.
Admittedly, I do believe the importance of considering your marginal impact—brought up for slightly different reasons on pages 69-70—is a point against doing this kind of uncertainty-motivated diversification. If the probably-less-effective charities are already well-funded by other people while the probably-more-effective charities are underfunded, it makes sense to try to correct that imbalance by pumping as much money as you can into the probably-more-effective ones. It doesn’t matter that there’s a risk all of your money will be wasted, since other people’s donations to other charities already serve as a sufficient hedge against that risk.
This is related to something I mentioned in my review of The Good It Promises, the Harm It Does: the prospect of all donors immediately diverting all their charitable giving to the top EA-recommended charities could be very scary. But that’s not a good argument against EA, because we’re nowhere near such a total shift happening, and having more donors focus entirely on such charities than currently do would probably be a good thing.
Chapters 1-5 discuss various psychological and social factors that lead people not to give effectively. What follows are just the ones that grabbed my attention for various reasons.
Obviously, we often prefer charities that resonate with our “personal connections and experiences”16.
Less obviously, this can be indirect: “People are more inclined to listen to fundraisers who have a personal connection with the cause that they champion.”17
Disasters that appear suddenly are more emotionally salient than persistent or recurring problems that are always with us.18
I think it would be interesting—and I’m sure things like this already exist—to have a news source that pushes us to maintain a quantified and holistic view of the overall state of the world. For example, a regular newsletter that presents statistics in a highly condensed form, largely the same statistics and format in each issue, trying to summarize material conditions around the world and how many people are currently affected by each of a wide range of problems.
…no more than 38% of surveyed American donors did any form of research before donating, and only 9% researched multiple different charities to compare them…19
One reason, the book suggests, is probably that people just assume there’s no drastic variation in effectiveness across charities:
Whereas ordinary people thought that the most effective charities are 1.5-2 times more effective than the average charity, the average expert estimate was 100 times (Caviola et al., 2020)!20
Simply telling them seems to help:
…we looked at the effects of informing people that the most effective charities are 100 times more effective than the average charity… We asked participants how they would distribute $100 between a highly effective charity and a charity with an average level of effectiveness. Among participants who were not informed about the expert-estimated differences between charities, only 37% fully prioritized the highly effective charity, whereas the remaining 63% split their donation across the two charities. By contrast, among participants who were informed about the expert-estimated effectiveness differences, 56% gave exclusively to the highly effective charity, and only 44% split their donation.21
Since it’s not considered obligatory to help in the first place, people who do decide to help are viewed as free to help in any way they choose.22
That seems natural; I think it would be pretty weird to argue that (a) you aren’t obligated to help and yet (b) if you do help you’re obligated to maximize effectiveness. But the book mentions some people who have indeed argued that: Theron Pummer (paper, book) and Joe Horton (paper).23
Anyway, I find it interesting that there is a perceived obligation when people are in certain roles:
…the norm that it is permissible to choose less effective ways of helping only applies to charitable donors, volunteers, and other people who aren’t seen as responsible for outcomes. We have different norms for people who are in a position of responsibility.24
The book references a 2018 paper by Berman et al. in support of this; the paper found that people considered it more important to allocate money on the basis of effectiveness when the person making the decision is the leader of an institution than when they are a donor.
I may be confused (and some stuff in the paper’s supplemental materials document for Study 4 makes this murkier), but it seems like on page 18 the book conflates two separate studies described in the paper. Study 4 appears to ask participants to assume they are the decision-maker—the donor or research center president—while Study 5 asks them to evaluate the donor or president’s decision. In Study 4 people mostly ignored effectiveness unless they were the president, as indicated in the book. But in Study 5 people seemed to evaluate both the donor and the president more positively when their decision prioritized effectiveness than when it did not, although effectiveness made a much larger difference to evaluations of the president than of the donor. There’s also a change in the options presented in the two studies—Study 4 includes “cancer” while Study 5 includes “elderly care” instead—and I’m curious whether that made a difference.
There is a range of studies on scope neglect, showing that our willingness to pay to solve a problem doesn’t increase in proportion to the problem’s size (Desvousges et al., 1993; Dickert et al., 2015; Slovic, 2007).25
The book notes that this is more true in “separate evaluation” situations (“different participants are given different opportunities to help, which they consider separately”26) than in “joint evaluation” situations (where “people can compare different donation opportunities”27). Unfortunately, “in the real world, most decisions to help are made in separate evaluation…”28
I did like this (speculative) attempt to put a heartwarming spin on scope neglect:
…the reason we suffer from scope neglect is not that we don’t care about large numbers of deaths. Rather, it’s because we already feel so strongly about an individual death. Our feelings may simply not get much stronger than that.29
The fact that our feelings don’t scale linearly may also help explain people’s tendency to split donations between more- and less-effective charities:
Though $100 is 10 times more than $10, the subjective utility—the positive feeling—that they derive from donating $100 may not be 10 times greater than the subjective utility they derive from donating $10. Instead, the difference may be much smaller. That would mean that people derive much more utility from the first few dollars they give to a particular charity than from dollars they add to an already significant donation.30
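A minimal sketch of that idea, assuming square-root subjective utility (my assumption; the book doesn’t commit to any functional form):

```python
import math

def warm_glow(amount):
    # Assumed concave "subjective utility" of a donation; sqrt is just one
    # convenient concave choice.
    return math.sqrt(amount)

# Splitting $100 across two charities "feels" better than going all-in...
print(warm_glow(50) + warm_glow(50))  # ~14.14
print(warm_glow(100))                 # 10.0

# ...even though the objective impact of splitting is far lower when one
# charity is 100x as effective as the other:
print(50 * 100 + 50 * 1)  # 50/50 split: 5,050 impact units
print(100 * 100)          # all-in on the effective charity: 10,000 units
```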
Related to scope neglect is the “identifiable victim effect”, although—this was news to me—“several studies on the identifiable victim effect have not replicated and… we need more high-quality research on this topic…”31
Charities face “little market pressure toward effectiveness and efficiency”, unlike for-profit businesses.32
One question here is, since some donors do know that some charities are drastically more effective than others, why don’t they spread that knowledge faster (which might put pressure on charities to be more effective, lest they lose donors)?
Donors certainly don’t have the ‘can’t wait to tell my friends’ attitude that consumers and investors often have. Why aren’t they more excited about opportunities to do 100 times more good?33
The main answer suggested by the book seems to be scope neglect (see above)—“we don’t feel these differences in impact.”34
The most striking example of altruistic nearsightedness is parochialism: our tendency to prioritize people from our community, town, or country over people who are farther away from us.35
Encouragingly, the book cites a study suggesting this is partially an issue of ignorance—that people are more willing to give to an international cause when they learn about its higher effectiveness.36 But an element of “pure parochialism”37 remains. Also, other factors like availability bias may be involved.38
A very cool study provided evidence that philosophical reflection can make people less parochial; allow me to quote from its abstract:
Participants who first engaged in veil-of-ignorance reasoning subsequently made more utilitarian choices in response to a classic philosophical dilemma, a medical dilemma, a real donation decision between a more vs. less effective charity, and a policy decision concerning the social dilemma of autonomous vehicles.39
The same authors later did another interesting study in that vein, titled “Veil-of-ignorance reasoning mitigates self-serving bias in resource allocation during the COVID-19 crisis”.
Most people are presentists: They prefer helping currently existing beneficiaries over beneficiaries who will live in the future.40
Two of the studies mentioned in relation to this were particularly interesting to me:
As with parochialism and presentism, the book blames our bias toward our own species on a combination of factual beliefs and pure preferences.
One study found that, within a given species, people “consistently prioritized the more mentally advanced animals, providing support for the view that people consider mental capacities morally relevant and that this could be part of the reason they prioritize humans over animals (Caviola, Schubert, et al., 2022).”42
Unfortunately, we sometimes revise our beliefs to fit our behavior, rather than the other way around:
Brock Bastian and his colleagues have shown that when meat-eaters are reminded that eating meat causes animals to suffer, they tend to deny that animals have minds (Bastian et al., 2012). This suggests that they engage in motivated reasoning: They perceive animals’ mental capacities to be weaker than they are because they don’t want to change their views of animals’ moral value.43
The book mentions several intriguing studies on speciesism; here’s a particularly striking claim:
…a 2019 paper… showed that speciesism is a temporally stable psychological construct with clear interpersonal differences and that it predicts a range of behaviors, including food and donation choices. They also found that speciesism correlates with other forms of prejudice, such as racism, sexism, and homophobia.44
Another study “found that children between the ages of 6 and 9 are much less likely to prioritize humans over animals than adults are”45; and another, “that when people are prompted to think more deliberately and less emotionally, pet speciesism grew weaker, whereas anthropocentric speciesism grew stronger.”46
…though the discourse about charity overhead tends to be overwhelmingly negative, money spent on overhead isn’t necessarily wasted. Charities often need to make investments that don’t go directly toward their programs to be effective. Thus, unbeknownst to many donors, increasing overhead sometimes even increases effectiveness.47
The book cites some evidence that people’s preference for low-overhead charities is partly due to the mistaken belief that overhead “is identical to, or a good proxy for, effectiveness”,48 but also partly a pure preference—an “overhead aversion”49. A couple speculative explanations for this are proposed, including:
…that people prefer low-overhead charities because they think they’re less likely to be wasteful and corrupt… They may hold a special aversion to waste and corruption, making them willing to pay a price in terms of reduced effectiveness to ensure that no part of their donations is lost in that way.50
This summary of an unpublished study is kind of maddening:
They informed participants that they could either donate directly to their favorite charity or support a company that fundraises for that charity. The fundraising company was said to employ professional fundraisers with a strong track record. If a participant gave the fundraising company $1, it would raise $10 for the charity. Participants could thus expect to have a 10 times larger impact if they allocated their money to the fundraising company rather than directly to their favorite charity. This logic was spelled out to the participants, and thus it was made clear that they would have a larger (albeit more indirect) impact if they chose to support the fundraising company. Despite that, Caviola and Lewis found that only half of the participants chose to support the fundraising company, while the other half preferred donating directly to their favorite charity.51
I would not be surprised to learn that I’m susceptible to this particular form of bias that they speculate about: “it’s possible…that people underestimate indirect impact precisely because they have an intrinsic aversion to it (i.e., that they engage in motivated reasoning).”52
To counter the idea that we can’t compare the effectiveness of charities working on very different causes, the book discusses QALYs (quality-adjusted life years) and WELLBYs (wellbeing-adjusted life years). This section seems more focused on convincing the reader that these are useful metrics than on investigating their acceptance or rejection from a psychological perspective.
Chapters 6-8 are about ways we might try to spread effective altruism, some of which I discuss below. I’ve skipped chapter 9, which contains advice for the reader on how to personally engage with effective altruism (I think most people in the EA community will already be familiar with most of that material).
For several of the obstacles it mentions, the book also describes studies where providing participants with a little extra information or guidance resulted in a non-trivial improvement in their decisions about (hypothetical) donations. Here’s a particularly encouraging result:
In the experimental condition, we let participants read a short paragraph explaining the concept of charity effectiveness and the fact that some charities are much more effective than others. We also told them about the charity evaluator GiveWell and added a link to GiveWell’s website that lists some of the world’s most effective charities. However, participants were free to ignore this information, and we didn’t force them to go to the website. Nevertheless, when we asked these participants where they would donate, we found that this information made a major difference. Whereas no one chose one of the most effective charities in the control condition, 41% (84/207) of the participants who had been thus informed did. This suggests that surprisingly many people would want to give to effective charities if they only knew about them.53
One of the book’s authors co-founded a site called Giving Multiplier, where you make a donation that is split between two charities. One charity you choose with total freedom; the other must come from a short list of charities deemed effective. You choose what percentage of your donation goes to each. The incentive to use the site is that your donation will be matched, and the larger the percentage you devote to the effective charity, the larger the match. They report:
Over $1.8 million has been allocated to the recommended effective charities, and of those, it is estimated (based on donor surveys) that nearly $1.6m would not have been donated to effective charities if it weren’t for Giving Multiplier.54
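To make the incentive structure concrete, here’s a hypothetical match schedule. The function and the 50% cap below are my inventions for illustration; Giving Multiplier’s actual formula may well differ:

```python
def match_amount(donation, effective_share, max_match_rate=0.5):
    # Hypothetical linear schedule: the match grows with the share of the
    # donation directed to the recommended effective charity. (The site's
    # real formula isn't specified in the book.)
    return donation * max_match_rate * effective_share

# A $100 donation split 90/10 in favor of the effective charity earns a
# much larger match than a 10/90 split:
print(match_amount(100, 0.9))  # 45.0
print(match_amount(100, 0.1))  # 5.0
```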
The book’s authors also cowrote an EA Forum post titled “What psychological traits predict interest in effective altruism?”; that research is the main focus of chapter 7.
One result I found really interesting is that the following two traits—roughly, impartiality between yourself and others, and impartiality between socially close and distant others—can be collapsed into a single trait, which the authors call expansive altruism:
Though these two features are conceptually distinct, they correlate so strongly that they should be seen as a single factor from a psychological point of view, according to our research. People who are more impartial with respect to the relationship between themselves and others (i.e., who are less selfish) are also more impartial with respect to different groups of others, whether they’re close or distant.57
They also describe a separate trait they call effectiveness-focus: “the inclination to make tough trade-offs and deprioritize less effective ways of helping, even if they are close to your heart”58
They emphasize that expansive altruism and effectiveness-focus are “only correlated weakly” with each other.59 It’s the people who score high on both that will be most drawn to EA.
The point of all this is: “…outreach efforts could specifically be targeted at those who are more open to effective altruism.”60 The chapter doesn’t discuss concretely what such outreach might look like, though.
One of the other traits they mention as possibly relevant is called “actively open-minded thinking”. I find this really interesting:
Actively open-minded thinking is… a measure of how people think that one ought to think, not a measure of how they actually think. Nevertheless, it has been shown to predict relevant behavior.61
They reference a paper called “The role of actively open-minded thinking in information acquisition, accuracy, and calibration”. It’s kind of wild that just asking someone whether they agree with statements like “Allowing oneself to be convinced by an opposing argument is a sign of good character”62 could “predict performance in an estimation task”63.
Can we convince more people of EA principles using arguments? Chapter 8 discusses a few studies that give at least some hope:
In sum:
The contemporary literature on the effects of moral arguments on charitable giving and related forms of behavior is relatively small. The evidence from the studies we have covered is mixed…
In any event, so far we’ve not found evidence to suggest that giving reason-based arguments would by itself sway people to help effectively on a large scale. Another reason to be skeptical… is that if such arguments made a huge difference, one would have expected more people to have become effective altruists by now.69
This is followed by a very inconclusive discussion of the possibility of changing social norms in an EA-friendly direction even if winning most people over via argument is impossible.
One observation I found thought-provoking:
Several of the successful movements we’ve discussed—such as those fighting for same-sex marriage and racial equality—have very salient injustices to point to. … By contrast, many of the groups that effective altruists support—like the global poor, animals, and future generations—are distant and less salient. That probably makes a powerful norm cascade in support of them less likely.70
This got me thinking about how, in regard to issues of global health and poverty, EA advocates (or at least, I myself) tend to avoid even framing them as matters of justice. For example, when I talk about a person’s preference for saving one member of his own community/nation/etc rather than a much larger number of lives elsewhere, I’m drawn to terms like cognitive bias; I frame his preference as a sort of intellectual foible that has regrettable consequences, but isn’t a stain on his moral character. But if I met someone who preferred saving one member of his own race rather than a much larger number of members of a different race (all other factors being equal), my instinct would be to just condemn him for being a racist. (This also relates to the topic of supererogation discussed earlier: intuitively, it seems like even in a situation where you have no obligation to help anyone at all, if you do choose to help you’re obligated not to let racism influence your choice of who to help.) You could imagine taking a similar approach to the parochially-biased man: instead of using cold terms like cognitive bias, you could call him a bigot for valuing outgroup-members’ lives less than ingroup-members’. Would such an accusation of bigotry be justified? Would voicing it help or hinder the goal of spreading a social norm against parochialism? Are those two questions (whether it’s justified and what its impact would be) closely connected or independent? Would the answers be different under different social conditions?