Freddie deBoer has recently published an attack on effective altruism, along with the utilitarianism which he believes undergirds it. Effective altruism (“EA”) is a movement that seeks to optimize the positive impact you can make via your career, donations, and other projects. EA has close cultural ties to the rationalist movement, and close ties to utilitarianism—though notably there are effective altruists who are not utilitarians.
DeBoer writes dismissively about EA for several reasons. First, he thinks effective altruists tend to fixate on weird questions like insect sentience and whether we should eliminate all predators, and that the EA community wrongly cultivates these nerdy interests. Perhaps more importantly, he argues that there’s nothing special about trying to do good effectively; after all, isn’t everyone else also trying to do this? To the extent that EA is special, deBoer contends that it’s not in its goal of effectiveness, but in the value system which motivates it, namely, utilitarianism.
In the comments, Scott Alexander offers an excellent reply to deBoer’s first criticism:
--EA: Does 99 boring things and 1 crazy thing
--FDB: Ignores all the boring things because they're boring (totally fair! we all do this!), then asks "Why is EA only doing crazy things and never the boring things?"
Quick example: GiveWell continues to direct ~$500 million/year, mostly to deworming, malaria nets, and cash grants to poor Africans.
GiveWell is a prominent EA nonprofit which works to identify charities that achieve a great deal of good at a relatively low cost. The analysis that GiveWell does is both valuable and unique. Many people are interested in helping the global poor, and want to know what approaches will be effective. Will their money go farther in building wells or hospitals, in purchasing medicine or mosquito netting? It would be useful for somebody to run the numbers on these questions, and GiveWell does this in a pretty rigorous and transparent way.
This strikes some as cold-hearted; don’t we owe more to the people close to us than we do to people far away? But it doesn’t need to be an either-or proposition. If you recognize any real moral obligation to help the global poor, you will recognize an obligation to do so effectively. Instead of sending your money to whichever organization sends you brochures with the saddest photos on them, you will try to understand which organizations are well run. And I don’t just mean the organizations that have minimal overhead, but organizations that spend their money where it will make a big impact. For whatever chunk of money you have earmarked to be spent in this way, GiveWell has you covered: the organizations they recommend provide deworming services, malaria nets and medicine, cash incentives for routine childhood vaccination, vitamin A supplements, and cash transfers; the organizations in question are efficiently run, and for a given amount of money are able to save—or significantly improve—an impressive number of lives.
DeBoer is right to attack utilitarianism—which is a mistaken philosophy—but he’s wrong to think that the EA movement’s close relationship with utilitarianism is proof that EA is just applied utilitarianism, or that it has nothing to offer to the rest of us. The fact that many of the folks in the EA movement are confused about certain things doesn’t mean they’re wrong about whether buying mosquito netting or building wells will save more lives per dollar. There are plenty of questions that effective altruism isn’t well-equipped to answer: whom should I marry? what is the greatest Nicolas Cage film? what sort of assistance do I owe to the homeless person on my block? The fact that GiveWell can’t help me with these questions doesn’t mean they have nothing to offer on the much narrower and more quantifiable question “how can my money go the farthest in saving lives?”
DeBoer argues that EA isn’t unique in its goal of doing good effectively, but in its values, namely utilitarianism. But this doesn’t seem right. What’s unique about the EA movement, particularly in organizations like GiveWell, is the set of tools and methods it uses to assess the question of how we can do good effectively. And when you focus on a certain narrow subset of goods—saving lives and dramatically improving health outcomes in poor parts of the world—it doesn’t really matter whether the people doing the analysis are utilitarians or not; it’s just good that someone is doing it, and doing it rigorously. If you’ve decided to spend a certain amount of money providing medical care for impoverished people you’ve never met, it really is worth it to figure out which of the available interventions will save the most lives.
You don’t need to be a utilitarian to think that this question is worth answering, and that these charities are worth supporting. It’s true that we have obligations to particular people in our lives—to our friends and family, to the poor people in our neighborhood and city—and that health and life are not the only goods that we need to pursue; there are also cultural, intellectual, aesthetic and spiritual goods that we should be pursuing and sharing. (I’m not a utilitarian!) But simply by virtue of being very poor, poor people who live far away from us can have real moral claims on us, at least when we have a real capacity to help them. And to the extent that we have a duty to help them we have a duty to help them effectively. Buying vitamin A supplements for poor children may not be your only moral obligation, but by doing so you can fulfill some of your real duties towards the poorest of the poor.
Christianity is decidedly not a utilitarian religion, but the Gospel emphasizes that our obligation to the poorest of the poor is very real. Consider the parable of the sheep and the goats (Mat 25:31-46): ‘Amen, I say to you, what you did not do for one of these least ones, you did not do for me.’ Or the parable of Lazarus and the rich man (Luke 16:19-31):
There was a rich man who dressed in purple garments and fine linen and dined sumptuously each day. And lying at his door was a poor man named Lazarus, covered with sores, who would gladly have eaten his fill of the scraps that fell from the rich man’s table. Dogs even used to come and lick his sores.
When the poor man died, he was carried away by angels to the bosom of Abraham. The rich man also died and was buried, and from the netherworld, where he was in torment, he raised his eyes and saw Abraham far off and Lazarus at his side. And he cried out, ‘Father Abraham, have pity on me. Send Lazarus to dip the tip of his finger in water and cool my tongue, for I am suffering torment in these flames.’
Abraham replied, ‘My child, remember that you received what was good during your lifetime while Lazarus likewise received what was bad; but now he is comforted here, whereas you are tormented. Moreover, between us and you a great chasm is established to prevent anyone from crossing who might wish to go from our side to yours or from your side to ours.’
He said, ‘Then I beg you, father, send him to my father’s house, for I have five brothers, so that he may warn them, lest they too come to this place of torment.’
But Abraham replied, ‘They have Moses and the prophets. Let them listen to them.’
He said, ‘Oh no, father Abraham, but if someone from the dead goes to them, they will repent.’
Then Abraham said, ‘If they will not listen to Moses and the prophets, neither will they be persuaded if someone should rise from the dead.’
It’s true that we have a particular obligation to the impoverished people we directly encounter, but we also have an obligation not to hide from or avoid dealing with forms of poverty that are farther away.
There are obviously many things that are wrong with utilitarianism; the rationalist movement which gave rise to the EA movement also has its limitations. But the rationalist movement really has some things going for it; even if it doesn’t offer a lot of guidance on what we should believe, it offers some great warnings about how our reasoning can go wrong. For instance, in his canonical post “Reversed Stupidity Is Not Intelligence,” Eliezer Yudkowsky points out that the mere fact that a person is crazy is not evidence that a given belief of theirs is false. If this were the case, it would mean that crazy people are much better than everyone else at identifying the truth, even if they do it backwards. In reality, crazy people simply hold a mixture of true and false beliefs: if a crazy person tells me that today is Tuesday, this isn’t evidence that it’s some other day of the week.
Utilitarians are “crazy,” insofar as utilitarianism is false; but the fact that people who are crazy in this way get excited about EA doesn’t prove that EA is wrong or that GiveWell is a bad organization. It simply doesn’t tell us anything one way or the other. Instead, we need to assess GiveWell on its own merits. While not perfect, it is an excellent organization for helping people see how they can effectively fulfill their very real obligation to the poor. And it’s hard to see how that’s a bad thing.