Nussbaum’s rejection of utilitarianism only occupies one chapter of her book Justice for Animals, but my comments on that chapter got too long to fit in my main review of the book, so I’ve split them into this separate post.
Nussbaum summarizes one version of utilitarianism as the view that we should “maximize the net balance of pleasure over pain in the universe,”1 which is a good enough definition for the purposes of this post. In what follows, I discuss six objections to utilitarianism that seem to be present in the chapter.
Nussbaum suggests pleasure is “a feeling that is closely linked to activity, so closely that we can’t pry it apart from the activity and measure it on its own,”2 and that “you can’t get [the pleasure] without pursuing [the] activity.”3
Sure, there are many pleasurable feelings that we don’t currently know how to induce in ourselves except by engaging in some particular activity, but that’s not strong evidence that there is no other way. (By analogy, for much of history we didn’t know how to produce babies without having sex; but that was not strong evidence that artificial insemination is impossible.) There are reasons to believe sufficiently advanced technology would allow separating our feelings from our activities.
But for the sake of argument, let’s assume some pleasures are genuinely inextricable from their corresponding activities. Nussbaum says this implies that “the entire project of maximizing net pleasure is in trouble from the start.”4
But she doesn’t explain why, and I don’t see it. If pleasure is inextricable from certain activities, wouldn’t that just mean a utilitarian should try to maximize the chances that those activities will occur? Isn’t that already the form taken by most concrete policy changes supported by utilitarians?
For example, many utilitarians are concerned about hens kept in cages too small for wing-flapping, and therefore support cage-free egg production. A policy of banning cages does not force the hens to feel whatever it is they feel when they flap their wings; it just creates an environment where the hens are more likely to engage in the activity that leads to that feeling. If we had a simpler or more reliable way of producing the feeling—a magic wand that made all the world’s hens happy in their cages—we utilitarians might prefer that option instead of banning cages. But such shortcuts are rarely available.
Perhaps Nussbaum brought up “activity” only to illustrate that pleasures have qualitative differences:
The pleasure of eating a delicious meal seems very different from the pleasure of holding a beloved child, and both are different from the pleasure of learning and studying—and so forth.5
One way to spin this into an argument that “the entire project of maximizing net pleasure is in trouble from the start”6 would be to claim there’s no way of weighing one kind of pleasure against another. (I don’t know if this is what she had in mind.) If so, then “maximizing net pleasure” would be impossible because “net pleasure” would be an undefined quantity.
But it’s not plausible that pleasures are totally incommensurable. Maybe you don’t know how to compare “eating a delicious meal” with “holding a beloved child”, but you can probably recognize that the delicious meal beats an energy bar, even if both are enjoyable. You probably also have an intuition about whether you’d prefer a lifetime of eating mediocre meals alongside your adoring children, or a lifetime of eating alone at the finest restaurants. What would bring you the most enjoyment at any given moment depends on lots of factors—your personal preferences, your mood, the novelty or familiarity of the experience, etc.—but this doesn’t prevent you from recognizing that you’re enjoying yourself more in some moments and less in others.
Nussbaum also brings up the Experience Machine thought experiment. Utilitarianism says you should get in the machine, and Nussbaum thinks this is a failure to respect people’s and animals’ preference for “being the author of their own actions”7 as opposed to passively receiving pleasure.
I’m content to bite the bullet on this. As a determinist I don’t think anyone is the ultimate author of their own actions anyway; having your choices determined by a machine isn’t intrinsically worse than having them determined by the laws of physics combined with the configuration of matter in your brain.
I know lots of people are less sanguine than me about getting in the Experience Machine, but I was surprised by Nussbaum’s conviction that putting animals into it would be wrong. She says, “Most animals like doing things; being the author of their actions matters to them.”8 How could we know that? How could we distinguish animals liking the feeling that they are doing things (which the Experience Machine would give them) from animals liking actually doing things? Is there any reason to think they’re even capable of understanding the difference and forming a preference? (The willingness of rats to seek direct brain stimulation9 might be evidence that they aren’t so picky about where their pleasure comes from.)
See also: Rawlette’s thoughtful defense of plugging into the Experience Machine in The Feeling of Value. (Personally, my biggest qualm about the machine is not about agency, but about the authenticity of relationships. For example, is the feeling of having a great conversation with a friend still as valuable if there is not actually anyone experiencing the other side of the conversation?)
Nussbaum is concerned that utilitarianism would justify the mistreatment of animals, “from factory farming to the fur industry,”10 on the grounds of the pleasure that mistreatment brings to humans. But I’ve never heard a utilitarian make such arguments, and contemporary utilitarians have been very prominently associated with veganism. Once you hear about the level of cruelty that factory farming involves, it’s hard to claim that the joy we get from consuming the products is even remotely comparable to the suffering inflicted on the animals. (If you had to trade places with a typical broiler chicken for four weeks before you were allowed to eat a chicken dinner, KFC would be bankrupt in no time!)
Nussbaum worries that utilitarianism would fail to condemn oppression when the oppressed have adapted to their conditions. Examples: “Women in sexist societies often learn not to want things that society denies them….”11 and “animals raised in zoos from birth may not feel pain and dissatisfaction about their lack of free movement or social company, since they have never experienced these things…”12 But I think utilitarianism does condemn such oppression, for at least three reasons.
Most importantly, I’m skeptical that adaptation is thorough enough and universal enough to eliminate the suffering from such situations. Some women may be totally content to live under sexist constraints, but for many those constraints are a source of tremendous suffering. This can be true even in cases where they wouldn’t consciously attribute their suffering to those constraints—we humans often feel stress and other negative emotions without understanding why. Why wouldn’t animals, too?
Second, forcing people or animals to adapt often requires hurting them. Nussbaum mentions that the victims in both her examples “are rewarded … for docility and punished for protest and aggression”13, but fails to point out that such punishments must be factored into utilitarian calculations. Of course, the rewards have to be factored in, too. But even if those rewards bring enough pleasure to outweigh the suffering caused by the punishment, utilitarianism would still condemn this arrangement unless there were no other way to achieve at least as much net pleasure. For example, suppose I kick you in the shin and then give you a million dollars. Most people would say the money more than compensates for the temporary pain. But if I also had the option of just giving you the money without assaulting you, I can hardly claim that utilitarianism endorses the assault!
Finally, utilitarianism also cares about maximizing happiness, not just eliminating suffering. Even if we (implausibly) assume that the victims in these examples truly aren’t suffering, they’re clearly still missing out on experiences that could bring them great joy, and utilitarianism cannot condone such deprivations unless they are necessary for achieving some even greater good.
This is one of the most well-known objections to utilitarianism, and the one I’m most sympathetic to:
The distribution of pleasure and pain is not taken into account. Good aggregate results can be produced in a variety of ways, some involving great misery for those at the bottom of the social scale.14
One manifestation of this is that utilitarianism…
…could justify bringing into the world creatures, of whatever species, whose lives are extremely miserable, just so long as those lives exhibit a slim net balance of pleasure over pain.15
I think there are three separate concerns implied here.
It might seem like utilitarianism permits you to do something of strongly negative value as long as you offset it by doing something of even higher positive value. This is a misunderstanding. Utilitarianism requires you to consider all the options available to you and choose the best one. Doing something harmful to achieve something good is only acceptable if there’s no less harmful way to achieve the good thing, and if there’s no alternate course of action that would have an even higher net value.
When we imagine a life with “a slim net balance of pleasure over pain”16, we might question whether it’s actually worth living, and doubt that pleasure can really outweigh pain. Keep in mind, though, that such a “net balance” doesn’t just mean having more happy moments than sad moments; intensity has to be taken into account, and the intensity of typical real-world suffering may be much greater than the intensity of typical real-world happiness. A pig that has a great meal one day and gets boiled alive the next day surely does not have a net balance of pleasure; utilitarianism would not say the meal justifies the excruciating death, even if the meal lasted as long as (or longer than) the death.
To accurately imagine a “net balance of pleasure”, you need to imagine a bucket of pleasures and pains about which you could honestly say, yes, if that were my life, it would be worth living. That might be a much higher bar than it sounds at first. Some pains are extremely difficult to offset: think about what bucket of good experiences you’d need to have before you’d say, yeah, I’d be willing to be boiled alive for that. (If you really think nothing could possibly offset it, maybe you’re a negative utilitarian.)
We’re naturally skeptical that one person’s bliss could compensate for another’s suffering. The realities of human psychology provide some justification for this skepticism. For one thing, the same bad experiences can cause much more suffering when they feel like an inescapable, recurring aspect of one’s life than when they are exceptional, fleeting events. Also, people are biased toward overvaluing their own pleasure and underestimating the severity of others’ suffering. Utilitarianism requires us to take those factors into account, so in practice it does care a great deal about distribution.
But it’s true that in at least some circumstances utilitarianism will tell you to place burdens on one person so that another person may benefit. This is disturbing, and it’s a good reason to retain some skepticism about utilitarianism. However, the opposite view—that you can never justify a burden on one person via a benefit to another—would also be disturbing (see my review of Anarchy, State, and Utopia). I think the whole idea of morality in general depends on an implicit acceptance that one person’s interests can sometimes be sacrificed for the sake of another’s: any time you choose not to do something you feel like doing, on the grounds that it would hurt (or violate the rights of) someone else, you’re showing that you do believe the interests of separate people can be weighed against one another. Otherwise, you’d just do whatever was in your own interests all the time.