Buddhistdoor View: Hope, Now and in the Future: A Buddhist Look at Doomerism and Longtermism 

Without technology humanity has no future, but we have to be careful that we don’t become so mechanized that we lose our human feelings.

— His Holiness the Dalai Lama

Every society believes that it is living in the worst of times. Our globalized community in 2022, amid an ongoing pandemic and seemingly unprecedented environmental and political crises, is no exception. Recent years have been particularly turbulent. Social media, which amplifies headline-grabbing stories of fear and danger, and “doomscrolling”—compulsive consumption of the bad news that appears on our social media feeds—do not help. This continuous, unmoderated exposure to negative news can produce a psychological sense of pessimism that has come to be known as “doomerism.”

This dark perspective (those affected are known as “doomers,” a spin on the “Zoomer” and “Boomer” generational monikers) argues that there is no point in trying to better oneself or the world—the problems facing humanity have become insurmountable to the point that giving up is wisdom and attempting to make a positive difference is delusional.

People of all ages understandably feel a range of emotions, from rage to sadness and helplessness, at witnessing the unfolding crises before us, especially the climate crisis. Empathy and a willingness for society to take their worry and despair seriously are critical, yet it is relatively straightforward to build a case against doomerism. The exercise of mindfulness here is critical. As highly biased creatures, we need to manage our biases by acknowledging how unmindfully consuming media can shift our perceptions. Failing to do so “gives our panic-prone primate brains more reasons to feel stressed and more examples of the present to compare with our highly edited version of the past. When we are mindful of our thought patterns, we can take control of them, and give ourselves a reality check.” (National Geographic)

Indeed, “the worst year to be alive” has been a long and unwinnable debate for decades, if not longer, and 2022 is unlikely to be the victor. There is even a year that some consider, at least in Europe, to have been the worst: 536 CE, when a mysterious “fog” enveloped Europe, the Middle East, and parts of Asia for some 18 months: “Temperatures in the summer of 536 fell 1.5–2.5 degrees Celsius, initiating the coldest decade in the past 2,300 years. Snow fell that summer in China; crops failed; people starved,” and the collapse of the Eastern Roman Empire, a superpower in its day, was hastened. (National Geographic)

Beyond our everyday blues and depressions, however, doomerism is able to maintain a hold over people’s mindsets and neuroses because, on the whole, things for the planet are genuinely bad. We see it in our increasingly severe environmental problems, in the geopolitical fissures that are increasing tensions between superpowers, and in the general decay of happiness and fulfillment seen in societies across the world.

Furthermore, doomerism can become almost its own ideology, or morph into a different, more sophisticated kind with similar premises but very different conclusions. One such philosophy, rooted in doomerism, has been gaining traction among some of the wealthiest and most powerful people in the world, including influential authors and think-tanks, and is making waves in the Western philosophical community. It is also becoming increasingly well-funded. The somewhat clumsy label for this futurist philosophy is “longtermism.”

In its most positive expression, longtermism takes our sense of doom seriously and argues that because humanity has barely begun to evolve, technologically, socially, and spiritually, we should strive to prepare the distant future for these eventual scenarios. Those yet unborn are a key moral priority—the people to come in the successive centuries and even millennia in which it is hoped that humanity will continue to thrive. Such a reach into the distant future requires that we examine seriously what we can do now to ensure their prosperity—for example, as philosophy professor William MacAskill writes, by exploring how AI might guide humanity and neutralize bad political actors. Addressing the many potential causes of human extinction, including pandemics, is also given considerable philosophical, practical, and technological attention. After all, “If our actions this century cause us to go extinct, there is little uncertainty about the impact on future generations: our actions will have robbed them of their chance to live altogether. The potential for a flourishing future would be entirely lost.” (BBC)

Elon Musk. From teslarati.com

Longtermism must be taken seriously, if only because it has powerful supporters and is becoming more influential than the crude, unmindful ideology of doomerism could ever hope to be. Billionaire Elon Musk has publicly expressed support for the writings of Nick Bostrom, one of the chief thinkers of longtermism and a founder of the Future of Humanity Institute (FHI). Among many other well-funded longtermist groups, the FHI was cofounded by multimillionaire tech entrepreneur Jaan Tallinn, who, according to anti-longtermism writer Émile P. Torres, “doesn’t believe that climate change poses an ‘existential risk’ to humanity because of his adherence to the longtermist ideology.” (Aeon) Viewed through the longtermist lens, climate change becomes a relative blip in terms of its threat level to human life.

This claim, among many others made about climate change, must give one pause. While it might seem common sense to view the future as a matter of moral priority, the assertions made by longtermism’s supporters can be tantamount to abandoning or jeopardizing the good that we are morally called upon to do in the present in order to protect some distant, notional future centuries down the line. Furthermore, a reliance on technology as the means to manifest that distant good is actually not as original as some longtermists seem to think, and has been in contention among ethicists for decades. There are echoes here of the darker side of utilitarianism (more colloquially, “the end justifies the means”). As Torres warns:

Not only could its ‘fanatical’ emphasis on fulfilling our longterm potential lead people to, eg, neglect non-existential climate change, prioritise the rich over the poor and perhaps even ‘justify’ pre-emptive violence and atrocities for the ‘greater cosmic good’ but it also contains within it the very tendencies – Baconianism, capitalism and value-neutrality – that have driven humanity inches away from the precipice of destruction. . . .

Longtermism tells us to maximise economic productivity, our control over nature, our presence in the Universe, the number of (simulated) people who exist in the future, the total amount of impersonal ‘value’ and so on. But to maximise, we must develop increasingly powerful – and dangerous – technologies; failing to do this would itself be an existential catastrophe.

(Aeon)

In an ironic twist, the demands of longtermism—at least on the grand, planetary scale that its supporters envision—can likely be achieved only by highly centralized bodies of authority with the technological clout and capacity for mass mobilization. It demands leaders who retain an almost Victorian or Marxist faith in the ability of governments to shape the future of nations and people at a basic, materialist level. This faith has arguably long been abandoned by the political administrations of the liberal and postmodern West, yet it is retained and applied regularly by authoritarian governments such as the Communist Party of China. The values that the Chinese government has recently attempted to promote—namely the country’s three traditional spiritual traditions of Confucianism, Daoism, and Buddhism—take on an even more momentous significance in this context. Where is the spiritual focus in the longtermist body of thought?

Although longtermism is a complex philosophy that has emerged only relatively recently, it looks further ahead than most: the colonization of other planets, transhumanism, and the development of AI are all consistent themes of its advocates and detractors. We therefore arrive at a crossroads in how we understand ourselves as self-reflective beings and the only species capable of spiritual awakening.


For now, we could perhaps say that doomerism and certain manifestations of longtermism fail to account for the hope and good found in the present moment, the immediate now that is our clearest reality. The future, in the Buddhist vision, is karmically ordered. If we do things that are unwholesome in the present, we are very unlikely to enjoy wholesome karmic fruition. Furthermore, without the profound insight of an arhat, buddha, or bodhisattva, we simply cannot see into our future lives. We therefore cannot, and should not, presume to be capable of predicting how humanity will look in a distant tomorrow.

Certain trains of thought in longtermism seem to trivialize the individual human experience: the stories of hope, love, joy, despair, failure, and tragedy that constitute the tapestry of our time on Earth and that make our lives worth living and worthy of reflection. In other words, in its rush to save humanity, longtermism runs the risk of denying the very essence of humanity.

In the race to avert self-destruction, we as humans should not lose our outlets for self-understanding. And as teachers of wisdom such as the Dalai Lama have so insightfully conveyed, our human feelings, our consciences, are the only real tools we have for looking inward and examining ourselves. When considering the future, we cannot avoid contemplating the place of hope. In doomerism, hope is trashed as a moral ill or delusion. Longtermism stretches out hope toward the infinite horizon, placing faith in a faraway future that none of us will see. This sounds noble, but risks distorting the purpose of hope and our attention to the present. Roshi Joan Halifax expresses it wonderfully:

Wise hope is not seeing things unrealistically but rather seeing things as they are, including the truth of suffering—both its existence and our capacity to transform it. It’s when we realize we don’t know what will happen that this kind of hope comes alive; in that spaciousness of uncertainty is the very space we need to act.

(Lion’s Roar)

What, then, is “wise hope” for a hypothetical longtermist who might have Buddhist sympathies? Can they strike a balance between drawing up the calculations that may determine the fates of entire populations and the transition of human beings into post-human beings? When does technology redefine what it means to be Homo sapiens, if at all? Is the future that no one can see, but that certain billionaires, scientists, and thinkers claim to be able to predict, truly more important than our crises in the present moment?

Regardless of how this debate unfolds in the context of these great questions, the late master Thich Nhat Hanh’s words from Being Peace (1987) take on an even deeper meaning as our human race contemplates the possibility of its self-inflicted destruction:

We tend to postpone being alive to the future, the distant future, we don’t know when. Now is not the moment to be alive. We may never be alive at all in our entire life. . . . This is the only moment that is real. To be here and now, and enjoy the present moment is our most wonderful task.

A final rumination: is the present moment the only time we are truly alive, as Thay said? Or are the longtermists right to factor in a future that stretches across centuries, encompassing billions of not-yet-here lives? Amid Roshi Halifax’s “spaciousness of uncertainty,” how should we meet the judgment, the call, to save the world?

See more

Why every year—but especially 2020—feels like the worst ever (National Geographic)
Why 536 was ‘the worst year to be alive’ (National Geographic)
Against longtermism (Aeon)
Yes, We Can Have Hope (Lion’s Roar)

Related features from BDG

Technology, Mind, and Dharma
What Plants Can Teach Us about Living During a Pandemic
The Deep, Dark, Dangerous Depths of Wrong View
Impermanence Is in Sight
Every Choice Is a Mistake

Related columns from BDG

Mindful Technology by Paola Di Maio
Bodhisattva 4.0 by John Harvey Negru

BDG Special issue 2022: Buddhism in a Divided World
