Salon
Phil Torres

Elon Musk sees our future in the stars

Elon Musk (Brendan Smialowski/AFP via Getty Images)

Elon Musk, the richest person on the planet, has apparently struck a deal to buy Twitter, by all accounts "one of the world's most influential platforms." Many people are trying to understand why: what exactly is motivating Elon Musk? Is it just a matter of (his hypocritical notion of) free speech? Are there deeper reasons at play here? In truth, virtually no one in the popular press has gotten the right answer. I will try to provide that here.

Let's begin with an uncontroversial observation: Elon Musk does not care much about others, you and me, or even his employees. As his brother Kimbal Musk told Time magazine, "his gift is not empathy with people," after which the article notes that "during the COVID-19 pandemic, [Musk] made statements downplaying the virus, [broke] local health regulations to keep his factories running, and amplified skepticism about vaccine safety."

Nonetheless, Elon Musk sees himself as a leading philanthropist. "SpaceX, Tesla, Neuralink, The Boring Company are philanthropy," he insists. "If you say philanthropy is love of humanity, they are philanthropy." How so?

The only answer that makes sense comes from a worldview that I have elsewhere described as "one of the most influential ideologies that few people outside of elite universities and Silicon Valley have ever heard about." I am referring to longtermism. This originated in Silicon Valley and at the elite British universities of Oxford and Cambridge, and has a large following within the so-called LessWrong or Rationalist community, whose most high-profile member is Peter Thiel, the billionaire entrepreneur and Trump supporter.

"Longtermists" like Nick Bostrom imagine a future in which trillions of human beings lead "happy lives" inside vast computer simulations powered by the energy output of stars.

In brief, the longtermists claim that if humanity can survive the next few centuries and successfully colonize outer space, the number of people who could exist in the future is absolutely enormous. According to the "father of longtermism," Nick Bostrom, there could be something like 10^58 human beings in the future, although most of them would be living "happy lives" inside vast computer simulations powered by nanotechnological systems designed to capture all or most of the energy output of stars. (Why Bostrom feels confident that all these people would be "happy" in their simulated lives is not clear. Maybe they would take digital Prozac or something?) Other longtermists, such as Hilary Greaves and Will MacAskill, calculate that there could be 10^45 happy people in computer simulations within our Milky Way galaxy alone. That's a whole lot of people, and longtermists think you should be very impressed.

But here's the point these people are making, in terms of present-day social policy: Let's say you can do something today that positively affects just one part in 10^45 of the 10^58 people who will be "living" at some point in the distant future. (Written out, that fraction is a decimal point followed by 44 zeros and then a 1.) Mathematically, that means you'd affect 10 trillion people. Now consider that there are roughly 8 billion people on the planet today. So the question is: If you want to do "the most good," should you focus on helping people who are alive right now, or on the vast numbers of possible people living in computer simulations in the far future? The longtermist answer is, of course, that you should focus on these far-future digital beings. As longtermist Benjamin Todd writes:

Since the future is big, there could be far more people in the future than in the present generation. This means that if you want to help people in general, your key concern shouldn't be to help the present generation, but to ensure that the future goes well in the long-term.
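
Since the whole argument turns on this one piece of arithmetic, here is a minimal sketch of the calculation in Python. The figures are the ones quoted above (Bostrom's 10^58 possible future people, an effect of one part in 10^45, roughly 8 billion people alive today); the variable names are mine, purely for illustration:

```python
# Illustrative sketch of the longtermist arithmetic described above.
# All figures come from the passage; nothing here is an endorsement of them.

future_people = 10**58          # Bostrom's estimate of possible future people
fraction_affected = 10**-45     # an effect on just one part in 10^45 of them
present_people = 8 * 10**9      # roughly the current world population

affected = future_people * fraction_affected
print(f"Future people affected: {affected:.0e}")      # ~1e+13, i.e. 10 trillion
print(f"Multiple of everyone alive today: {affected / present_people:,.0f}")  # ~1,250
```

On this logic, even a vanishingly small influence on the far future outweighs helping every single person alive today a thousand times over, which is exactly the move Todd's quote makes.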

So why is Musk spending $44 billion or so to buy Twitter, after dangling and then withdrawing the $6.6 billion needed "to feed more than 40 million people across 43 countries that are 'on the brink of famine'"? Perhaps you can glimpse the answer: If you think that "the future is big," in Todd's words, and that huge numbers of future people in vast computer simulations will come into existence over the next billion years, then you should focus on them rather than those alive today. As Greaves and MacAskill argue, when assessing whether current actions are good or bad, we should focus not on their immediate effects, but on their effects a century or millennium into the future!


This doesn't mean we should entirely neglect current problems, as the longtermists would certainly tell us, but in their view we should help contemporary people only insofar as doing so will ensure that these future people will exist. This is not unlike the logic that leads corporations to care about their employees' mental health. For corporations, people are not valuable as ends in themselves. Instead, good mental health matters because it is conducive to maximizing profit, since healthy people tend to be more productive. Corporations care about people insofar as doing so benefits them.

For longtermists, morality and economics are almost indistinguishable: Both are numbers games that aim to maximize something. In the case of businesses, you want to maximize profit, while in the case of morality, you want to maximize "happy people." It's basically ethics as capitalism.

Musk has explicitly said that buying Twitter is about "the future of civilization." That points to his peculiar conception of philanthropy, and to the idea that no matter how obnoxious, puerile, inappropriate or petty his behavior — no matter how destructive or embarrassing his actions may be in the present — by aiming to influence the long-term future, he stands a chance of being remembered by all those happy people in future computer simulations as having done more good, overall, than any other single person in human history. Step aside, Mahatma Gandhi, Mother Teresa and Martin Luther King Jr.

If you wonder why Musk wants to colonize Mars, this framework offers an answer: Because Mars is a planetary stepping-stone to the rest of the universe. Why does he want to plug our brains into computers via neural chips? Because this could "jump-start the next stage of human evolution." Why does he want to fix climate change? Is it because of all the harm it's causing (and will cause) for poor people in the Global South? Is it because of the injustice and inequality made worse by the climate crisis? Apparently not: It's because Musk doesn't want to risk a "runaway" climate change scenario that could snuff out human life before we've had a chance to colonize Mars, spread to the rest of the universe, and fulfill our "vast and glorious" potential — to quote longtermist Toby Ord. Earlier this year, Musk declared that "we should be much more worried about population collapse" than overpopulation. Why? Because "if there aren't enough people for Earth, then there definitely won't be enough for Mars."

There is a reason that Musk sits on the scientific advisory board of the grandiosely named Future of Life Institute (FLI), to which he has donated millions of dollars. It's the same reason he has donated similar sums to Bostrom's Future of Humanity Institute at Oxford and to the Centre for the Study of Existential Risk at Cambridge, where he also holds an advisory role; the same reason he likes to talk about us living in a computer simulation and about superintelligent machines posing a "fundamental existential risk for human civilization."

By definition, an existential risk is any event that would prevent humanity from completely subjugating nature and maximizing economic productivity, both of which are seen as important by longtermists because they would enable us to develop advanced technologies and colonize space so that we can create as many happy people in simulations as physically possible. (Again, this is capitalism on steroids.) Bostrom, whom Elon Musk admires, introduced this term in the early 2000s, and it has become one of the central research topics of the "Effective Altruism" movement, which currently boasts some $46.1 billion in committed funding and has representatives in high-level U.S. government positions (such as Jason Matheny). Reducing "existential risk" is one of the main objectives of longtermists, many of whom are also Effective Altruists.

From this perspective, the best way to be philanthropic is to not worry so much about the lives of present-day humans, except — once again — insofar as doing so will help us realize this techno-utopian future among the stars. Bostrom has described the worst atrocities in human history, including World War II and the Holocaust, as "mere ripples on the surface of the great sea of life. They haven't significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species." 

More recently, Bostrom has said that "unrestricted altruism is not so common that we can afford to fritter it away on a plethora of feel-good projects of suboptimal efficacy," such as helping the poor, solving world hunger, promoting LGBTQ rights and women's equality, fighting racism, eliminating factory farming and so on. He continued: "If benefiting humanity by increasing existential safety achieves expected good on a scale many orders of magnitude greater than that of alternative contributions, we would do well to focus on this most efficient philanthropy" [emphasis added]. In a 2019 paper, he suggested that we should seriously consider implementing a centralized, invasive, global surveillance system to protect human civilization from terrorists.

Indeed, another leading longtermist and Effective Altruist, Nick Beckstead, argued in his dissertation, which is much cited by other longtermists, that since the future could be so large, and since people in rich countries are better positioned to influence the long-term future than people in poor countries, it makes sense to prioritize the lives of the former over the lives of the latter. In his words:

saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. [Consequently,] it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.

When one examines Elon Musk's behavior through the lens of longtermism, his decisions and actions make perfect sense. Sure, he makes misogynistic jokes, falsely accuses people of pedophilia, rails against pronouns and trans people, and spreads COVID misinformation. Yes, he exchanged messages with Jeffrey Epstein after Epstein pleaded guilty to sex trafficking minors, joked that he thought Bernie Sanders was dead, mocked support for the Ukrainian people and so on. (See here for a nauseating list.)

But the future may very well be disproportionately shaped by Musk's decisions — which are made unilaterally, with zero democratic influence — and since the future could be enormous if we colonize space, all the good that will come to exist (in the reckoning of longtermists) will dwarf all the bad that he may have done during his lifetime. The ends justify the means, in this calculus, and when the ends are literally astronomical value in some techno-utopian future world full of 10^58 happy people living in computer simulations powered by all the stars in the Virgo Supercluster, you can be the worst person in the world during your lifetime and still become the best person who ever existed in the grand scheme of things.

Elon Musk wants power. This is obvious. He's an egomaniac. But he also subscribes, so far as I can tell, to a big-picture view of humanity's spacefaring future and a morality-as-economics framework that explains, better than any of the alternatives, his actions. As I have noted elsewhere:

[Longtermism is] akin to a secular religion built around the worship of "future value," complete with its own "secularised doctrine of salvation," as the Future of Humanity Institute historian Thomas Moynihan approvingly writes in his book "X-Risk." The popularity of this religion among wealthy people in the West — especially the socioeconomic elite — makes sense because it tells them exactly what they want to hear: not only are you ethically excused from worrying too much about sub-existential threats like non-runaway climate change and global poverty, but you are actually a morally better person for focusing instead on more important things — risk that could permanently destroy "our potential" as a species of Earth-originating intelligent life.

It is deeply troubling that a single human being has so much power to determine the future course of human civilization on Earth. Oligarchy and democracy are incompatible, and we increasingly live in a world controlled in every important way by unaccountable, irresponsible, avaricious multi-billionaires. Even more worrisome than Elon Musk wanting to buy Twitter is his motivation: the longtermist vision of value, morality and the future. Indeed, whether or not the deal actually goes through — and there are hints that it might not — you should expect more power-grabs like this to come, not just from Musk but others under the spell of this intoxicating new secular religion.
