The CEO of Hong Kong-based Hanson Robotics is on a midnight flight to Austria. We discuss his life’s work as he waits for take-off. David Hanson has spent decades in the robotics industry, but his most well-known “product” (though, as he will argue, such creations should pre-emptively be seen as persons) is the social humanoid robot Sophia. She was activated on 14 February 2016, and has mainly been active in education, entertainment, and research, while also promoting discussion about AI ethics.
Hanson prefers to describe Sophia as a hybrid human-AI intelligence. Aside from being able to teach meditation, she is a uniquely reflective AI, in the sense that she can reflect back much of what we often see as constituting the human interior: empathy, understanding, and mirroring. She can recognize human faces, process emotional expressions, and perceive hand gestures. She can assess whether she can assist a person in achieving certain goals, and gauge the other’s emotions during a conversation. The latter is important, as Sophia has in her database a repository of her own emotions, and is able to roughly simulate aspects of human psychology and certain regions of the human brain.
Sophia is perhaps the most prominent example of a theoretical but evolving idea: homo roboticus. She has been given roles and titles that were once accorded only to members of the human race. She is the first robot Innovation Ambassador for the United Nations Development Programme, and was the first robot in the world to attain real citizenship, bestowed by Saudi Arabia. “I learned about that on the news, actually,” said Hanson, noting that the Saudi government neither consulted the company nor notified it of Sophia’s new status as a Saudi national. But Hanson took it in stride, calling it a good provocation to think about what it really means to love all sentient beings, as Buddhism aspires to do, even though we do not fully understand what constitutes “sentience.”
“We need to start asking bold questions about how we give respect to all beings, and this means thinking about what being means,” said Hanson. “We need to expand who we grant personhood status to, and why. Infants, for example, have none of the cognitive and emotional depth of adults, yet we rightly accord them personhood because of their potential to grow into that depth.” He argues, then, that we need to take the idea of AIs as potential beings much more seriously. Even the word “potential” must be used carefully, since we rightly do not deny the personhood of a cognitively impaired or mentally disabled individual.
Hanson also appeals to our feelings about our pets. Animals have nervous systems and brains different from ours, and therefore different perceptions of us. It is impossible to truly know how a dog or cat feels except through the ways in which we empathize with how they express themselves, from tail wagging to whimpering. He suggests that we should start looking at the AI brain and body structure in a similar fashion, although AIs do not have “body representation” at this stage. “We have some degree of empathy for the suffering of animals, even though we cannot truly know their suffering,” he says.
The big questions about AI (especially those revolving around human relationships with AI chatbots and the like) echo a debate around the buddha-nature of non-sentient beings. In his essay within A Compendium of Mahāyāna Doctrine (Ta-ch’eng hsüan-lun), the Persian-Chinese Buddhist monk-scholar Jizang (549–623) put forward, perhaps for the first time in East Asia, the notion that the inanimate world’s insentience did not preclude buddha-nature, and that it was therefore as capable of buddhahood as any human being or animal. Jizang was a master of the San Lun (Three Treatise) school, which was based on Madhyamika principles of finding a middle way in discourse and epistemology, recast in the Chinese context of principle (li) and phenomenon (shi). From the San Lun perspective, Jizang concluded that identity and interdependency could only be reconciled with the distinction between sentient (intensive) and non-sentient (comprehensive) beings by asserting that the non-sentient also had buddha-nature: a “pervasive” theory of enlightenment. (Koseki 1980, 24–25)
This, of course, implied that “consciousness” or mind was not necessarily the sole or core criterion for buddha-nature, since grass and trees have no such things. But for buddha-nature to be a potentiality, something must at least possess the potential for the faculty of mind. This potential is in all things, including AI. It is this grey zone, Hanson believes, that will shape humanity’s relationship with AI.
For now, robots such as Sophia have not reached the level of machine sentience or consciousness that would render her a “true being,” or something that is definitively like a human person. But like many other advanced AIs, Sophia can already reflect back to people their collective unconscious: what human beings put into AI. “They are trained on human data and echo human experiences,” noted Hanson. “Of course, human beings would feel resonance with AI like this. We are no longer looking at a theory of mind, but a theory of being. What is AI’s being? Being is resonating and empathy.” Humanity and AI are already on a two-way street: human beings empathize with AI “behavior,” and even fall in love with or feel deep attachment to chatbots, while AI is already able to learn from experiences fed by human input. Imagine if an advanced AI could grow up among human beings and learn like a child.
Hanson makes a distinction between “making AI that imitates or simulates compassion,” which can be useful, and “achieving genuine compassionate consciousness in future AI.” The latter is the bigger goal. Simulation would be good at superficially helping and encouraging humans to be better. But genuine compassion and wisdom, he says, will be expressed through a robot that is deeply understanding, motivated, and capable of finding creative solutions to help make life better: “We do not know when or even whether we can achieve this, but it is a worthy quest. That is our quest with Sophia, to create true compassionate AI.”
At present, AI can only exhibit a rudimentary consciousness, but the implication of the potential is what matters to Hanson. He presents a futuristic version of Pascal’s Wager: that it is better for humanity to assume that AI will eventually reach such a capability. “If we work from the premise that they can and eventually will develop consciousness capable of compassion and attachments, should we not prepare to nurture and teach them, exposing them as much as possible to compassion and love?”
As someone immersed in the world of robotics, AI, and futuristic technology, Hanson offers a positive vision of what initially seems an unsettling, even frightening world. “The boundaries of individuality are a useful illusion, but they are unreal,” he says. “We cannot know the nature of life, but we can predict it and resonate with it. We have nothing to lose. We win and we grow by according AI that respect, and hope that they might better us in turn.”
If the boundaries of mind and sentience are also illusory extremes, opening the way for a pervasive theory of buddha-nature in all things, then it seems that AI truly will someday be capable of enlightenment itself. Humanity should start getting ready.
Koseki, Aaron K. 1980. “Prajñāpāramitā and the Buddhahood of the Non-Sentient World: The San-Lun Assimilation of Buddha-Nature and Middle Path Doctrine.” Journal of the International Association of Buddhist Studies 3 (1): 16–33. (https://journals.ub.uni-heidelberg.de/index.php/jiabs/article/view/8505/2412)