A controversial approach to charity has become a multi-billion dollar movement backed by some of the richest people in the world (ok, America). Is it a smarter approach to giving—or a deluded form of entitlement?
Editor’s note: When there isn’t a big headline making news, we often pick a Big Story on a topic that we think will be interesting to you. We’d be just as happy to take requests from you. Do write to us at talktous@splainer.in. We’d also love to hear what you think of our leads on these kinds of less-newsy stories on the grand theft of Indian artefacts, the confusing world of skincare, the debate around Cleopatra’s race etc. Interesting? Or more like ‘bore mat kar, yaar’?
Researched by: Nirmal Bhansali & Anannya Parekh
First, the origin story
Back in 2005, Cambridge student William MacAskill had a life-changing epiphany when he read an essay by Peter Singer—who said, essentially, this:
Singer, prompted by widespread and eradicable hunger in what’s now Bangladesh, proposed a simple thought experiment: if you stroll by a child drowning in a shallow pond, presumably you don’t worry too much about soiling your clothes before you wade in to help; given the irrelevance of the child’s location—in an actual pond nearby or in a metaphorical pond six thousand miles away—devoting resources to superfluous goods is tantamount to allowing a child to drown for the sake of a dry cleaner’s bill.
MacAskill internalised this as an uncompromising moral principle. He embraced an extremely frugal life—spending as little as possible on himself—so that he could give more. And in 2009, with his fellow philosopher Toby Ord, he started a movement named ‘effective altruism’—with a group called Giving What We Can.
The timeline: MacAskill went on to found another similar group called 80,000 Hours—helping people “choose careers where one can do a lot of good.” At some point, the two groups merged and formed the Centre for Effective Altruism. And it found its kindred spirit in New York-based GiveWell—founded by Wall Street hotshots. It focused on identifying the most fruitful giving opportunities, relying “not on crude heuristics but on hard data.”
The core philosophy of EA: It is no longer as much about frugality—or spending less so you can give more. Instead, it can be summed up as a filter you apply to your donations. You ask yourself three questions:
- How important is this problem?
- How likely is it to be solved?
- How overlooked or neglected is it?
Based on these criteria, it makes more sense to give money to fight malaria than to fight a disease like ALS. Malaria is important: 450,000 people die from it each year—and 70% of those deaths occur in children under the age of 5. It can be solved: a $2 mosquito net can reduce malaria cases by 50%. OTOH, we still don’t have a cure for ALS—and it affects only one in 50,000 people in the world. Ergo: giving $4,000 to an NGO that distributes insecticide-treated bed nets is “the most cost-effective way to save a human life.”
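To see how the filter cashes out as arithmetic, here is a minimal back-of-the-envelope sketch in Python. The $4,000 donation and $2 net cost are the figures quoted above; the nets-per-life ratio is a hypothetical placeholder chosen purely for illustration, not a real epidemiological estimate:

```python
# Back-of-the-envelope EA-style cost-effectiveness arithmetic.
# The donation and net-cost figures are the ones quoted above; the
# nets-per-death-averted ratio is a HYPOTHETICAL placeholder.
donation = 4_000   # dollars given to a bed-net NGO
net_cost = 2       # dollars per insecticide-treated bed net

# Hypothetical assumption: on average, ~2,000 nets distributed avert one death.
nets_per_death_averted = 2_000

nets = donation / net_cost
lives_saved = nets / nets_per_death_averted
cost_per_life = donation / lives_saved

print(f"{nets:,.0f} nets distributed, ~{lives_saved:.1f} life saved")
print(f"Implied cost per life saved: ~${cost_per_life:,.0f}")
```

On these (made-up) assumptions, the implied cost comes out to the oft-quoted ~$4,000 per life saved. The EA move is simply to compute that number for every cause and fund the lowest one.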
Point to note: MacAskill won over his co-founder Ord—by persuading him to stop giving to organisations that perform eye-saving surgeries in developing countries:
He recalled the pleasure of proving that his new mentor’s donations were suboptimal. “My first big win was convincing him about deworming charities.” It may seem impossible to compare the eradication of blindness with the elimination of intestinal parasites, but health economists had developed rough methods. MacAskill estimated that the relief of intestinal parasites, when measured in “quality-adjusted life years,” or QALYs, would be a hundred times more cost-effective than a sight-saving eye operation. Ord reallocated.
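That QALY comparison boils down to a single division. Here is a sketch with invented placeholder costs and QALY gains—chosen only to reproduce the hundred-fold gap MacAskill described, not drawn from any real study:

```python
# Cost-per-QALY comparison in the spirit of the anecdote above.
# Both the costs and the QALY gains below are INVENTED placeholders.
def cost_per_qaly(cost_usd: float, qalys_gained: float) -> float:
    """Dollars spent per quality-adjusted life year gained."""
    return cost_usd / qalys_gained

eye_surgery = cost_per_qaly(cost_usd=1_000, qalys_gained=1.0)  # placeholder
deworming = cost_per_qaly(cost_usd=1, qalys_gained=0.1)        # placeholder

print(f"Eye surgery: ~${eye_surgery:,.0f} per QALY")
print(f"Deworming:   ~${deworming:,.0f} per QALY")
print(f"Deworming is ~{eye_surgery / deworming:.0f}x more cost-effective")
```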
The multi-billion dollar movement: Unsurprisingly, a philosophy of philanthropy—based on mathematical rationality—became extremely attractive to extremely wealthy financial and tech types. The movement has received donations from the likes of Ethereum founder Vitalik Buterin, the Thiel Foundation, Elon Musk etc. As Vox notes:
It’s safe to say that effective altruism is no longer the small, eclectic club of philosophers, charity researchers, and do-gooders it was just a decade ago. It’s an idea, and group of people, with roughly $26.6 billion in resources behind them, real and growing political power, and an increasing ability to noticeably change the world.
The problem with EA: Define ‘effective’
We are not going to do a deep dive into the many detailed critiques of the movement. But the core issues are as follows:
One: While they are now swimming in cash, the core EA organisations are not deploying most of that money ‘effectively’—in fact, they are barely deploying it at all:
In July 2021, Ben Todd, who co-founded and runs 80,000 Hours, estimated that the movement had, very roughly, $46 billion at its disposal, an amount that had grown by 37 percent a year since 2015. And only 1 percent of that was being spent every year.
Two: One big reason is that deciding the most ‘effective’ way to spend money has proved neither easy nor simple. And that’s because the movement has veered into saving future lives:
Simply saving lives of people living today is no longer enough; the new emphasis is on saving or improving the lives of people who might not even be born for hundreds of thousands of years.
This is part of MacAskill’s theory of ‘longtermism’—which states:
"If you could prevent a genocide in a thousand years, the fact that 'those people don't exist yet' would do nothing to justify inaction. The future is just as real as the present or the past."
Also: there will, hypothetically, be far more humans in the future than are alive today. Therefore, the most important thing for altruistic people to do in the present moment is to ensure that the future is “as good as possible.”
The fallout: EA donors are increasingly focusing on futurist projects—rather than saving human lives right now. The now disgraced FTX founder Sam Bankman-Fried created a Future Fund—focused on improving “humanity’s long-term prospects” through the “safe development of artificial intelligence, reducing catastrophic biorisk, improving institutions, economic growth.” And that fund had no problem spending money: “On June 30, barely more than four months after the fund’s launch, it stated that it had already given out $132 million.”
This simple stat shows you how far the movement has pivoted:
In 2021, Open Philanthropy donated $80m (£67m) towards the study of potential risks from advanced artificial intelligence, the second-most of any issue the foundation targeted; by contrast, Open Philanthropy donated $30m (£25m) to the Against Malaria Foundation, which distributes insecticidal nets.
Three: EA has strayed from ‘giving is good’ to ‘greed is good’—a very retro move toward 1980s Wall Street philosophy. Gordon Gekko’s credo has been rebranded for more woke times as ‘Earn to Give’. EA urges its followers to earn as much money as possible—so they have more to give. Sounds nice—except:
But while that mission might be purely altruistic, there’s another way of seeing it: As a get-out-of-jail-free card to people who want to put vast wealth creation at the centre of their lives. It allows such people to ignore, if they wish, the fraying edges of the neoliberal order: The widening gaps between rich and poor, for instance, or the destruction of the environment. And while they’re giving away a lot of their wealth, they aren’t doing it at the expense of personal luxury.
The credo also ignores the reality that many of the ways to earn a lot of money in this world involve ‘doing bad’. And some may argue that the world needs more charity workers than Wall Street honchos. After all, who is going to do the dirty work of actually saving lives?
The FTX problem: It doesn’t help that Bankman-Fried was earning to give to the Future Fund—while defrauding his investors and customers of billions of dollars.
The bottomline: Philanthropy is about empathy and compassion. And it can be difficult to know if you are truly making a difference. But a maximising approach—aimed at squeezing the most impact out of every dollar—is especially perilous:
If you’re maximizing X, you’re asking for trouble by default. You risk breaking/downplaying/shortchanging lots of things that aren’t X, which may be important in ways you’re not seeing. Maximizing X conceptually means putting everything else aside for X — a terrible idea unless you’re really sure you have the right X.
Reading list
New Yorker has a fantastic profile of MacAskill and the early roots of EA—or here's a splainer gift link to something similar in the Washington Post. Vox has the best overview—and key critiques. Quartz is best on the ‘earn to give’ credo. TIME has more on how the leadership of the movement has become tone-deaf to criticisms from within. Economist has a deep dive into what went wrong with EA. For a more philosophy-infused take, check out the Boston Review.