What We Owe the Future changed my thinking on two things.
1. The contingency of moral values. Previously, I believed the most important driver of moral progress in society was economic and technological progress: people become more willing to do the right thing as the sacrifices they have to make to do so become smaller. MacAskill examines the abolition of slavery in the British Empire as a test case, and notes that this economic explanation is not the dominant view among historians of abolition. I was surprised to learn of the economic cost Britain accepted:
In the years leading up to abolition, British colonies produced more sugar than the rest of the world combined, and Britain consumed the most sugar of any country. When slavery was abolished, the shelf price of sugar increased by about 50 percent, costing the British public £21 million over seven years--about 5 percent of British expenditure at the time.
The British government paid off British slave owners in order to pass the 1833 Slavery Abolition Act, which gradually freed the enslaved across most of the British Empire. This cost the British government £20 million, amounting to 40 percent of the Treasury's annual expenditure at the time.
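As a back-of-envelope check (my own arithmetic, not from the book), the two quoted percentages each imply a figure for Britain's annual government expenditure, and it's reassuring that they roughly agree:

```python
# Back-of-envelope check of the figures MacAskill quotes (my own arithmetic,
# not from the book). Each quoted percentage implies an estimate of Britain's
# annual government expenditure at the time.

# £21 million in extra sugar costs over seven years, said to be about 5%
# of British expenditure over that period.
sugar_cost_total = 21_000_000                       # pounds, over seven years
implied_annual_from_sugar = (sugar_cost_total / 0.05) / 7

# £20 million compensation to slave owners, said to be 40% of the
# Treasury's annual expenditure.
compensation = 20_000_000                           # pounds, one-off
implied_annual_from_compensation = compensation / 0.40

print(f"Implied annual expenditure (sugar figure):        £{implied_annual_from_sugar:,.0f}")
print(f"Implied annual expenditure (compensation figure): £{implied_annual_from_compensation:,.0f}")
```

Both figures come out in the £50-60 million per year range, so the two percentages are at least mutually consistent.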
MacAskill presents the case that abolition depended on a drastic change in moral beliefs, which in turn depended on both the long-term efforts of an initially small group of activists and a great deal of good luck. Relatively small deviations from our actual history could have resulted in slavery continuing to be viewed as morally acceptable now and indefinitely into the future. (And even if abolition were more or less inevitable, how long it took to happen might have been very contingent - perhaps slavery could have persisted for centuries longer.)
So, explicitly trying to change society's values can have enormous payoffs. Such efforts may also be less susceptible to one of the main objections that has generally made me skeptical of attempts to improve the long-term future - the unpredictability of long chains of cause and effect:
...from a longtermist perspective, [values changes] are particularly significant compared to other sorts of changes we might make because their effects are unusually predictable.
If you promote a particular means of achieving your goals, like a particular policy, you run the risk that the policy might not be very good at achieving your goal in the future, especially if the world in the future is very different from today, with a very different political, cultural, and technological environment. You might also lose out on the knowledge that we will gain in the future, which might change whether we even think that this policy is a good idea. In contrast, if you can ensure that people in the future adopt a particular goal, then you can trust them to pursue whatever strategies make the most sense, in whatever environment they are in and with whatever additional information they have.
2. The risk of technological stagnation. This is a concern I hadn't really been exposed to before:
...as we make technological progress, we pick the low-hanging fruit, and further progress inherently becomes harder and harder. So far, we've dealt with that by throwing more and more people at the problem. Compared to a few centuries ago, there are many, many, many more researchers, engineers, and inventors. But this trend is set to end: we simply can't keep increasing the share of the labour force put towards research and development, and the size of the global labour force is projected to peak and then start exponentially declining by the end of this century. In this situation, our best models of economic growth predict the pace of innovation will fall to zero and the level of technological advancement will plateau.
Based on a 2020 University of Washington study, MacAskill thinks population will actually decline, and notes that "[f]or twenty-three countries, including Thailand, Spain, and Japan, populations are projected to more than halve by 2100; China's population is projected to decline to 730 million over that time, down from over 1.4 billion currently." This seems pretty scary to me: intuitively, if there are fewer people to divide humanity's labor among, then each person has to take on more work and/or lower-priority work just won't get done. I find it easy to imagine society's willingness to fund speculative research declining in such a scenario.
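The growth-model prediction in the quoted passage can be illustrated with a toy semi-endogenous growth model in the spirit of Jones-style "ideas production" - a sketch of my own with invented parameters, not anything from the book. New ideas arrive at a rate proportional to the research workforce, but each idea gets harder to find as the technology level rises; if the workforce shrinks, growth decays toward zero:

```python
# Toy semi-endogenous growth model (illustrative sketch; parameters invented).
# Ideas production: dA/dt = delta * L * A**phi, with phi < 1, so further
# progress gets harder as the technology level A rises ("fishing out").

def simulate(years, L0, labour_growth, delta=0.02, phi=0.5, A0=1.0):
    """Return final technology level A and the annual growth rate each year."""
    A, L = A0, L0
    growth_rates = []
    for _ in range(years):
        dA = delta * L * A**phi      # new ideas this year
        growth_rates.append(dA / A)
        A += dA
        L *= 1 + labour_growth       # research workforce grows or shrinks
    return A, growth_rates

# Growing research workforce (+1%/yr): growth settles at a positive rate.
_, g_grow = simulate(years=200, L0=1.0, labour_growth=0.01)
# Shrinking workforce (-1%/yr): the growth rate decays toward zero.
_, g_shrink = simulate(years=200, L0=1.0, labour_growth=-0.01)

print(f"final annual growth, growing workforce:   {g_grow[-1]:.3%}")
print(f"final annual growth, shrinking workforce: {g_shrink[-1]:.3%}")
```

In this sketch the growing-workforce scenario sustains roughly 2% annual technological growth indefinitely, while the shrinking-workforce scenario's growth rate falls by an order of magnitude over two centuries and keeps heading toward zero - the plateau the quote describes.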
Why should we care? Well, I like technology. But MacAskill gives some less biased reasons to worry, including the possibility that we are nearing an especially dangerous time to stagnate:
We are becoming capable of bioengineering pathogens, and in the worst case engineered pandemics could wipe us all out. And over the next century, in which technological progress will likely still continue, there's a good chance we will develop further, extremely potent means of destruction.
If we stagnate and stay stuck at an unsustainable level of technological advancement, we would remain in a risky period. Every year, we'd roll the dice on whether an engineered pandemic or some other cataclysm would occur, causing catastrophe or extinction. Sooner or later, one would. To safeguard civilisation, we need to get beyond this unsustainable state and develop technologies to defend against these risks.
Toward the end of the book, MacAskill suggests that one good way to help the future is simply to have children. It seems to me that if his argument is correct, trying to increase fertility rates around the world could be an important line of research for longtermists to pursue. Speaking personally, the expense and (especially for young children) work effort involved in parenting scares me away from it - it feels like I'd have to accept both a dramatically higher stress level for many years and a drastic reduction in the time I invest in other pursuits that I care about. If this is a common attitude, but declining fertility rates are a threat to society as a whole, then we should be looking for social innovations that make parenting significantly less burdensome.