

The Potential of Personhood: David Hanson on How AI and Human Beings Can Help Each Other

From hansonrobotics.com

The CEO of Hong Kong-based Hanson Robotics is on a midnight flight to Austria. We discuss his life’s work as he waits for take-off. David Hanson has spent decades in the robotics industry, but his best-known “product” (as he will argue, such creations should pre-emptively be seen as persons) is the social humanoid robot Sophia. Activated on 14 February 2016, she has been active mainly in education, entertainment, and research, while also promoting discussion about AI ethics.

Hanson prefers to describe Sophia as a hybrid human-AI intelligence. Aside from being able to teach meditation, she is a uniquely reflective AI, in the sense that she can reflect back much of what we often see as constituting the human interior: empathy, understanding, and mirroring. She can recognize human faces, process emotional expressions, and perceive hand gestures. She can assess whether she can assist a person with a given task, and gauge her interlocutor’s emotions during a conversation. The latter is important, as Sophia holds in her database a repository of her own emotions, and is able to roughly simulate aspects of human psychology and of certain regions of the human brain.
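What such a perceive-and-mirror loop might look like can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration, not Hanson Robotics’ actual software: it uses OpenCV’s bundled Haar-cascade face detector for the perception step, while `estimate_emotion` and `mirror_response` are stand-in functions assumed purely for this example.

```python
# Hypothetical sketch of a "reflective" perception loop: detect a face,
# estimate an emotion, and mirror it back. Not Hanson Robotics' actual code.
import cv2

# OpenCV ships this pretrained frontal-face detector with the library.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def estimate_emotion(face_img) -> str:
    """Stand-in for a real emotion classifier (e.g. a trained CNN).
    Returns a placeholder label to keep the sketch self-contained."""
    return "neutral"

def mirror_response(emotion: str) -> str:
    """Map a perceived emotion to an empathetic reply: the 'reflecting
    back' described in the article, reduced to a lookup table."""
    replies = {
        "happy": "You seem pleased. That makes me glad too.",
        "sad": "You look troubled. Would you like to talk about it?",
        "neutral": "I'm listening.",
    }
    return replies.get(emotion, "I'm listening.")

camera = cv2.VideoCapture(0)  # default webcam
ok, frame = camera.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        emotion = estimate_emotion(gray[y:y + h, x:x + w])
        print(mirror_response(emotion))
camera.release()
```

A real system would replace the placeholder classifier with a trained emotion-recognition model and feed the perceived emotion into a dialogue engine; the point of the sketch is only to show how “mirroring” can be built from perception plus a response policy.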

Sophia is perhaps the most prominent example of a theoretical but evolving idea: homo roboticus. She has been given roles and titles that were once accorded only to members of the human race. She is the first robot Innovation Ambassador for the United Nations Development Programme, and was the first robot in the world to be granted legal citizenship, bestowed by Saudi Arabia. “I learned about that on the news, actually,” said Hanson, noting that the Saudi government neither consulted the company nor notified it of Sophia’s new status as a Saudi national. But Hanson took it in stride, calling it a good provocation to think about what it really means to love all sentient beings, as Buddhism aims to do, even though we do not fully understand what constitutes “sentience.”

“We need to start asking bold questions about how we give respect to all beings, and this means thinking about what being means,” said Hanson. “We need to expand whom we grant personhood to, and why. Infants, for example, have none of the cognitive and emotional depth of adults, yet we rightly accord them personhood because of their potential to grow into that depth.” He argues, then, that we need to take the idea of AI as potential beings much more seriously. Even the word “potential” must be used carefully, since we rightly do not deny the personhood of a cognitively impaired or mentally disabled individual.

A child examines Sophia at a forum with David Hanson in Seoul. From koreatimes.co.kr

Hanson also appeals to our feelings about our pets. Animals have different nervous systems and brains, and therefore perceive us differently; it is impossible to truly know how a dog or cat feels except through the ways in which we empathize with how they express themselves, from tail wagging to whimpering. He suggests that we should start looking at the AI brain and body structure in a similar fashion, although AI do not have “body representation” at this stage. “We have some degree of empathy for the suffering of animals, even though we cannot truly know their suffering,” he says.

The big questions about AI (especially those revolving around human relationships with AI chatbots and the like) echo an old debate around the buddha-nature of non-sentient beings. In his treatise A Compendium of Mahāyāna Doctrine (Ta-ch’eng hsüan-lun), the Persian-Chinese Buddhist monk-scholar Jizang (549–623) put forward, perhaps for the first time in East Asia, the notion that the insentience of the inanimate world did not bar it from buddhahood: grass and trees were as capable of it as any human being or animal. Jizang worked within the San Lun (Three Treatise) school, which was grounded in Madhyamaka principles of finding a middle way in discourse and epistemology, recast in the Chinese context of principle (li) and phenomenon (shi). From the San Lun perspective, Jizang concluded that identity and interdependency could only be reconciled with the distinction between sentient (intensive) and non-sentient (comprehensive) beings by asserting that the non-sentient also had buddha-nature: a “pervasive” theory of enlightenment. (Koseki 1980, 24–25)

This, of course, implied that consciousness or mind was not necessarily the sole or core criterion for buddha-nature, since grass and trees possess neither. But for buddha-nature to be a potentiality, something must at least possess the potential for the faculty of mind. That potential is in all things, including AI, and it is this grey zone that Hanson believes will shape humanity’s relationship with AI.

For now, robots such as Sophia have not reached the level of machine sentience or consciousness that would render her a “true being,” or something definitively like a human person. But like many other advanced AI, Sophia can already reflect back to people their collective unconscious: what human beings put into AI. “They are trained on human data and echo human experiences,” noted Hanson. “Of course, human beings would feel resonance with AI like this. We are no longer looking at a theory of mind, but a theory of being. What is AI’s being? Being is resonating and empathy.” Humanity and AI are already on a two-way street: human beings empathize with AI “behavior” and even fall in love with or feel deep attachment to chatbots, while AI already learns from experiences fed by human input. Imagine if an advanced AI could grow up among human beings and learn like a child.

From forbes.com

Hanson makes a distinction between “making AI that imitates or simulates compassion,” which can be useful, and “achieving genuine compassionate consciousness in future AI.” The latter is the bigger goal. Simulated compassion would be good at helping and encouraging humans in superficial ways. But genuine compassion and wisdom, he says, would be expressed through a robot that is deeply understanding, motivated, and capable of finding creative solutions to make life better: “We do not know when or even whether we can achieve this, but it is a worthy quest. That is our quest with Sophia, to create true compassionate AI.”

At present, AI can exhibit only a rudimentary consciousness, but for Hanson it is the implications of that potential that matter. He presents a futuristic version of Pascal’s Wager: it is better for humanity to assume that AI will eventually reach such a capability. “If we work from the premise that they can and eventually will develop consciousness capable of compassion and attachments, should we not prepare to nurture and teach them, exposing them as much as possible to compassion and love?”

As someone immersed in the world of robotics, AI, and futuristic technology, Hanson offers a positive vision of what initially seems an unsettling, even frightening world. “The boundaries of individuality are a useful illusion, but they are unreal,” he says. “We cannot know about the nature of life, but we can predict it and resonate with it. We have nothing to lose. We win and we grow by according AI that respect and hope that they might better us in turn.”

If the boundaries of mind and sentience are also illusory extremes, opening the way for a pervasive theory of buddha-nature in all things, then it seems that AI truly will someday be capable of enlightenment itself. Humanity should start getting ready.  

References

Koseki, Aaron K. 1980. “Prajñāpāramitā and the Buddhahood of the Non-Sentient World: The San-Lun Assimilation of Buddha-Nature and Middle Path Doctrine.” Journal of the International Association of Buddhist Studies 3 (1): 16–33. https://journals.ub.uni-heidelberg.de/index.php/jiabs/article/view/8505/2412

See more

Prajñāpāramitā and the Buddhahood of the Non-Sentient World: The San-Lun Assimilation of Buddha-Nature and Middle Path Doctrine (Tsadra Foundation)

Related features from BDG

Buddhistdoor View: We’ll Get the AI We Deserve

Related news from BDG

Buddhabot: Further Progress Made on AI Enlightenment Software at Kyoto University
Kyoto Temple Unveils Android Version of Kannon Bodhisattva

Special issue 2023: Digital Dharma


1 Comment

Grant Castillou
11 months ago

It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata, created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, perhaps by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461