It’s unfortunate that the fiery collapse of the crypto exchange FTX is giving many people their first look at one of the most promising charitable movements of the last decade. It’s called effective altruism, or EA, and in recent years, disgraced FTX founder Sam Bankman-Fried, known as SBF, has arguably been the most famous — or maybe just the richest — effective altruist in the world.
Now that Bankman-Fried is hiding in the Bahamas as charges of deception, fraud and outright theft pile up, many are speculating that he’s going to take the movement down with him. New York Magazine writer Simon van Zuylen-Wood wonders if EA is now “defective.” New York Times columnist Ross Douthat joked on Twitter that it’s “over.” And Vox journalist Kelsey Piper’s interview with SBF suggests that all his charitable work was ultimately just a scam.
But here’s the problem with that speculation. Bankman-Fried is not an effective altruist.
I’ve been teaching and practicing EA for years, and over the past couple of weeks, I’ve noticed that there’s a fair amount of misunderstanding around the term. One reporter said that effective altruism is about seeking “the greatest good for the greatest number.” It’s not. That’s utilitarianism. A columnist at news site CoinDesk wrote, “Effective altruists believe that making a lot of money to influence the world is a noble goal as long as you’re very, very smart.” Wrong again. David Wallace-Wells of The New York Times got closer, calling it “a movement devoted to doing maximal good in the world.” Better, but a little gnomic.
So let me clear the air. EA boils down to two central claims. Here they are:
1. There is ameliorable suffering in the world. If you have disposable income — as many in developed countries do — you should use it to stop that suffering. The foundation for this argument is Peter Singer’s famous drowning child thought experiment.
2. If you go ahead and donate money, do so effectively. That is, give to causes that work to eliminate as much suffering as possible. I put this argument in really simple terms for my students: Please don’t give to Alex Rodriguez, also known as A-Rod. Like many other athletes, the former Yankees star began a charity at the height of his fame, and donations flooded in. But later reporting by the Boston Globe revealed that the foundation gave barely 1% of received funds to listed charities and was promptly stripped of its tax-exempt status. A-Rod’s foundation is the epitome of ineffective giving, and donors might as well have dumped their money into the East River.
So where should we donate? EA sites such as GiveWell make it really easy to find charities that do vastly more good than A-Rod’s — and they provide tons of great research to back up their recommendations.
I’ve come to believe that these two claims are not only compelling; they’re also true. And their strength is in their simplicity.
Understandably, there are many healthy debates about what effective giving might look like, and these debates give rise to some fascinating but less plausible second-order arguments. However, I’ve come to see these arguments as distractions that do more harm than good. I call them “Crazy EA,” not because they’re so wrong, but because when I share them with friends and students, they inevitably say, “Whoa, that’s crazy.”
Let me give you some examples: Crazy EA says that the noblest thing you can do is work for Goldman Sachs. Crazy EA says that you should probably donate one of your kidneys — right now. Crazy EA argues that the suffering of a pig is as bad as the suffering of a human child. And Crazy EA is really worried about an artificial intelligence uprising.
Very smart people, notably Singer himself, William MacAskill and Nick Bostrom, support some of these outlandish-seeming claims in very compelling ways. But as I argued in an October op-ed in the Chicago Tribune, when they do, Crazy EA just becomes EA. Which is exactly what it did with Bankman-Fried.
According to EA lore, Bankman-Fried came out of college looking to give to animal charities. After an encounter with MacAskill, he decided that the best way to support such causes was not to work for an animal charity but to make piles of money that he could then donate. (Effective altruists call this “earning to give.”) That sent him to the world of crypto, where he amassed a huge but unstable fortune that he splashily promised to give away. But as his engagement with EA developed, he became less interested in animal causes and more interested in “longtermism,” an idiosyncratic offshoot of EA that tries to stave off future threats to humanity — such as an AI uprising. Indeed, recent reporting in The New York Times suggests that he and his colleagues had given upward of half a billion dollars to avert a computer takeover. Crazy, right?
In sum, Bankman-Fried is a messy heap of the most questionable EA assertions. If and when he goes down, I hope he takes some of them with him.
But his fall shouldn’t discredit effective altruists. Because he’s not one.
____
ABOUT THE WRITER
Joshua Pederson is an associate professor of humanities at Boston University and the author of “Sin Sick: Moral Injury in War and Literature.”