

Self-driving Cars and an Age-old Buddhist Moral Dilemma

Mural of King Sivi from the north wall of Mogao Cave No.254. Dunhuang, 439–534 CE, Northern Wei dynasty. Image Courtesy of the Dunhuang Academy

This essay discusses the relevance of Buddhist emotional education, as depicted in Jataka tales such as King Sivi’s dilemma, for our technological world. Contemporary analytic philosophers, who often see emotions as obstacles to consequentialist moral reasoning, typically frame the ethics of self-driving cars through the thought experiment known as the “trolley problem.” This framing is a misconception: the trolley problem lets the thinker assume the safe position of an onlooker, while the technology of self-driving cars denies us this false sense of safety and thrusts us into the moment as active participants, thereby transforming the trolley problem into an age-old karmic dilemma, such as the scenario faced by King Sivi. The relevance of the Jataka tales lies in the fact that Buddhist karmic thinking trains one to see one’s emotions as part of the bigger picture, and many pragmatic exercises have been devised to rewire one’s emotional responses for a larger purpose.

The trolley problem is widely considered to be central to the ethics of self-driving cars. But this is a misconception. The ethical issue thrust upon us by this cutting-edge technology, at its core, is a karmic dilemma: what should we do when faced with a choice between two courses of action that both lead to the harm of sentient beings? This dilemma is well articulated in the ancient Buddhist tale of King Sivi. In this story, King Sivi came across a dove trying to escape a hungry falcon. The king offered the dove safety; however, without eating the dove, the falcon would die of starvation. What should King Sivi do?

At first glance, this karmic dilemma appears to be just like the trolley problem, the basic setup of which is as follows: there is a trolley running along a track toward five incapacitated people, who will die if they are hit. You are standing beside a lever that can switch the trolley onto another track with only a single incapacitated person. Do you pull the lever? 

It seems to me that King Sivi’s story differs from the trolley cases in one crucial aspect: King Sivi’s dilemma is a parable about emotional education, wherein King Sivi himself is implicated in the web of karmic consequences. The story as traditionally told provides us with emotionally wrenching details about how King Sivi overcomes his selfish impulses in order to save the dove without harming the falcon. In contrast, the trolley problem is about abstract moral reasoning based on the consequentialist calculations of an aloof onlooker. 

In analytic philosophy, emotion is often seen as an obstacle to rational decisions. Indeed, to disentangle moral reasoning from emotional responses, philosophers and moral psychologists have conjured up many variations of the trolley problem. One variation, namely the footbridge case, resembles King Sivi’s dilemma: you are standing on a footbridge above the same trolley tracks next to a very large person. If you push the person off the bridge, you can stop the trolley from killing the five people. Although the number of lives lost (1) and saved (5) is the same as in the lever case, it turns out that far more people would rather pull the lever than push the person. Many similar experiments with other variations confirm what philosophers consider a sad truth about humanity: in numerous situations, human emotions trump moral reasoning.

The surprise is that Buddhist traditions long ago discovered this universal power of human emotions, and Buddhist literature has long employed the power of storytelling—and the emotions stories evoke—to address similar moral dilemmas. The best-known collection is the Jataka tales, in which the Buddhist tradition narrates how, before the Buddha was born as Prince Siddhartha Gautama, he lived through many different incarnations and lifetimes. The Jataka tales tell of the Buddha’s previous lives as characters devoted to selflessness and sacrifice. When faced with two harmful alternatives, the Siddhartha-to-be simply inserted himself into the scene, offering his own flesh and body to avoid bringing harm to others. In his past life as King Sivi, so the story goes, when faced with the dying dove and the starving falcon, he decided to offer his own flesh in equal weight to the dove. He then had a butcher slice flesh from his leg and place it on a scale to weigh against the dove, yet despite cutting off more and more flesh, the weight was (miraculously) never equal. Finally, King Sivi placed himself on the scale, offering his entire self in return for the dove’s life.

How does this ancient moral dilemma resemble the core of the ethics of self-driving cars? In his widely read book 21 Lessons for the 21st Century, historian Yuval Noah Harari cites a study on this topic, which he terms “the Tesla Altruist vs. the Tesla Egoist”:

In a pioneering 2015 study, people were presented with a hypothetical scenario of a self-driving car about to run over several pedestrians. Most said that in such a case the car should save the pedestrians even at the price of killing its owner. When they were then asked whether they personally would buy a car programmed to sacrifice its owner for the greater good, most said no. For themselves, they would prefer the Tesla Egoist.

Like many other scholars, Harari frames the ethics of self-driving cars as another trolley scenario. However, there is a crucial difference between the trolley problem and the Jataka tale: in the trolley problem, one takes the position of a detached onlooker, playing the role of a god deciding who gets to die. The one making the decision rarely suffers physical harm in any sense, thereby making it a purely consequentialist calculation. In the first part of the 2015 study, when presented with the hypothetical question of whether a car should be programmed to kill its owner to avoid killing innocent pedestrians, most respondents chose the Tesla Altruist. This is indeed a variation of the trolley problem.

However, when asked if one would personally buy a Tesla Altruist, the reader is forced to insert themselves into the moment of the crash, transforming the trolley problem into King Sivi’s dilemma. Put another way, the key difference is a difference in attitude: moral distancing in the trolley cases in contrast to karmic embeddedness in the Jataka tale. The universal application of the technology denies us the false sense of safety as an onlooker and thrusts us into the dilemma faced by King Sivi. 

Here we see the relevance of Buddhist emotional education for our technological world. The renowned Harvard psychologist Joshua Greene, an expert on the trolley problem, once cited a study by undergraduate researcher Xin Xiang titled “Would the Buddha Push the Man off the Footbridge? Systematic Variations in the Moral Judgment and Punishment Tendencies of the Han Chinese, Tibetans, and Americans.” In his study, Xiang administered the footbridge dilemma to practicing Buddhist monks near the city of Lhasa. He then compared their answers with those of ordinary Han Chinese and American respondents. The study returned a surprising result: the Tibetan monks were overwhelmingly more likely to say that it was OK to push the large person off the footbridge, “similar to the behaviors of psychopaths—clinically defined—and people with damage to a specific part of the brain called the ventral medial prefrontal cortex.” (The Atlantic)

Xiang’s study raises important questions about Buddhist ethics: why would “compassionate” monks think like psychopaths? Greene’s explanation of this apparent anomaly is that Tibetan monks have a different kind of moral reasoning: “But I think the Buddhist monks were doing something very different,” he says. “When they gave that response, they said, ‘Of course, killing somebody is a terrible thing to do, but if your intention is pure and you are really doing it for the greater good, and you’re not doing it for yourself or your family, then that could be justified.’” 

I think Greene’s framing of the monks’ response makes it sound more like a consequentialist calculation than it actually is. Those who are familiar with Buddhist karmic theories would immediately recognize the karmic reasoning underpinning the monks’ narration. The key point is pure intention, the magic mental quality that guards one against karmic retribution. In the Buddhist cosmos, the complex web of karmic consequences wraps everyone together into one. No one can be just an onlooker. Everyone is trapped in this web. Every thought, every word, and every action generates ripples in this web. When examining karmic consequences, Buddhism maintains that the intention behind the action is what matters the most. In Mahayana training, pure intention means the altruistic and salvific motive—cultivating compassion for the sole purpose of leading all toward liberation. When the monks examine their own intentions, as they have been trained to do, they think of themselves as sentient beings ensnared in the karmic web, and pure intention transforms the troublesome behavior of killing one life into a pure action of saving five lives. 

The consequentialist calculation—harming five is worse than harming one—is the same. The mental attitude, however, is completely different. These Tibetan monks are fully aware of the karmic consequences of their own choices, but they are able to transcend the hardwired emotional circuit through a mental state of altruistic intention. 

Herein lies the power of Buddhist emotional education. In the Jataka tales, all moral inquiries are framed as emotional cultivation. The training of no-self, especially meditation on the foulness of the body, short-circuits our emotional attachment to preserving the body at all costs, an impulse hardwired into our genes. King Sivi’s heart-wrenching self-sacrifice inspires readers to emulate the bodhisattvas and sacrifice their own bodies, understood as mere bags of dung, for the greater good. In other words, well-trained Buddhists respond differently because of their familiarity with the bodhisattva ideal. This ideal emotionally prepares them to think karmically, as moral actors embedded in an inextricably linked karmic web, rather than distantly, as onlookers suffering no karmic consequences.

Perhaps a follow-up experiment should be conducted, comparing the cases of the Tesla Altruist and the Tesla Egoist administered to Tibetan monks and lay people in China and the United States. Or, maybe we should ask the monks whether they would jump off the bridge themselves to save others. I personally would not be surprised to see a significant difference. 

Unlike animals, whose social traits are hardwired in their genes, humans exhibit numerous social behaviors that are malleable. Religions have developed strategies to control and manipulate certain hardwired social traits and to inculcate new ones for a larger purpose.

The Jataka tales are one such instrument of emotional cultivation, effective in deactivating selfish circuits and turning on the bodhisattva affect. When technologies force us to think like King Sivi, maybe it is time to make use of the Buddhist treasure trove of karmic tales for pragmatic examples that might help us address these new moral conflicts and develop solutions to the not-exactly-new problems that plague the world.

Jessica Zu is currently finishing her second PhD in the Religion Department at Princeton University. Her research examines the surprising rise of an extreme form of Buddhist idealism—Yogacara (the school of consciousness-only)—at a time when scientific realism, social Darwinism, and capitalist materialism became dominant in early 20th-century China. Her first PhD was in theoretical physics. She is a long-term meditator.

See more

Karma, Science, and a Just Society: Yogācāra Causal Theory as Social Philosophy (YouTube)
LECTURE: Karma, Science, and a Just Society: Yogācāra Causal Theory as a Social Philosophy (H-Net: Humanities & Social Sciences Online)
If Buddhist Monks Trained AI (The Atlantic)
