FEATURES

Defining Consciousness: How Buddhism Can Inform AI

Image by Sophie Rogers

Technology is advancing faster than most of us can comprehend. Whether we welcome the trend or not, the influence of technology on society continues to grow, and quickly. Where we go from here, and how we handle our own inventions, carries a responsibility that computer scientists cannot shoulder alone. Leaders in science, philosophy, and religion need to work together to ensure that all people benefit from the exponential growth of technology. When it comes to artificial intelligence (AI), Buddhism plays an especially interesting role.

As mentioned in part one of my series in this special issue, artificial intelligence (AI) is a type of technology that performs tasks associated with human intelligence, including learning, deduction, and reasoning. Take, for example, Facebook’s ability to curate content based on your browsing behavior. AI systems are designed to continuously improve themselves by drawing on massive amounts of data and acquiring knowledge through experience of what works. Voice-to-text, now a standard smartphone feature, was once impossible for even the most advanced computers. Today, large tech companies have developed speech-recognition systems that can transcribe conversations more accurately than humans can. (Emerj)
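To make "acquiring knowledge through experience" concrete, here is a minimal sketch, in Python, of the kind of learning loop that underlies such systems: a single artificial neuron adjusts its weights whenever it mispredicts a label. The data and feature names are invented for illustration; real curation or speech-recognition systems are vastly larger and more sophisticated.

# A minimal, purely illustrative sketch of "learning from experience":
# a single artificial neuron (a perceptron) that corrects its weights
# whenever it makes a wrong prediction.

def train(examples, epochs=100, lr=0.1):
    """Learn weights from (features, label) pairs by correcting mistakes."""
    n_features = len(examples[0][0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            score = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if score > 0 else 0
            error = label - prediction  # zero when the guess was right
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Hypothetical examples: did a reader click a post (1) or not (0),
# given two invented browsing-behavior features?
examples = [([1.0, 0.2], 1), ([0.8, 0.3], 1), ([0.1, 0.9], 0), ([0.2, 0.8], 0)]
weights, bias = train(examples)
print(weights, bias)

The more examples such a system sees, the better its weights fit the patterns in our behavior, which is why data collection is so central to AI.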

It is hard to distinguish what makes us unique as human beings if machines can master these tasks. The success of AI raises the biggest questions of our existence: What is the mind? Is it purely a machine-like system of information processing that we call “intelligence”? Or is it something far more complex, such as the ability to respond emotionally to circumstances and connect with another individual?

The philosophy and practices of Buddhism relate directly to AI. As a study of the mind, Buddhism speaks to highly relevant topics in AI, including consciousness and identity. In light of both the potentially positive and the potentially disastrous outcomes of AI, Buddhism can and should continue to serve as a grounding and informing ethical force. Not only does it provide a comprehensive framework for understanding a “smart” computer, but its philosophy can also offer clarity to those experiencing data overload.

Can computers have consciousness? Does consciousness matter?

At what point would an artificial intelligence be considered conscious? The question is hard to answer because there is no universal consensus on what consciousness is. It may be necessary to agree on a definition of this incredibly complex entity before we continue exploring its artificial recreation. Yet defining consciousness is such a large question that it may be beyond us to intellectually capture what the mind is.

The five aggregates in Buddhism, or the five factors that constitute a sentient being, are physical form, feelings, perceptions, mental formations, and consciousness. Consciousness here is an integrated factor of experience, or an impression and awareness of each object. Consciousness is not unique but rather one of several parts from which sentience emerges.

Artwork by Creative Adversarial Networks (CAN) artificial intelligence. From cbs.com

The Buddha taught that consciousness is everywhere at different levels, so humans should have compassion to help alleviate suffering for all beings. If being conscious means having the ability to feel something, then it is an intrinsic property of everything. Therefore, although a material thing such as an AI computer may have a form of consciousness, it would probably lack certain states of consciousness that human beings possess. Or maybe it could create new ones.

Some scientists and Buddhists agree that consciousness is everywhere. Panpsychism, the belief that everything material has an element of individual consciousness, is one of the oldest philosophical theories and has been attributed to many prominent thinkers, including Plato. Neuroscientist Christof Koch has worked with researcher Giulio Tononi to create a tool that measures phi, a theoretical quantity of consciousness, in a human brain by sending a pulse into it and watching the pulse reverberate through the neurons. The more intense the reverberation, the higher the amount of consciousness. Through this test, they are able to determine whether a subject is awake, asleep, or anesthetized. (Lion’s Roar)
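As a rough caricature of this perturb-and-measure idea (not the actual phi calculation of Integrated Information Theory), one can score how "rich" a recorded response is by how poorly it compresses: flat or stereotyped responses compress well and score low. The responses below are invented purely for illustration.

# Toy caricature of "send a pulse and watch the reverberation": score a
# binarized response by how incompressible it is. This is NOT phi; it only
# illustrates the intuition that richer responses suggest "more going on."
import random
import zlib

def response_complexity(bits):
    """Crude score: compressed size relative to raw size (higher = richer)."""
    raw = bytes(bits)
    return len(zlib.compress(raw)) / len(raw)

rng = random.Random(0)
flat = [0] * 256                               # silent, anesthetized-like
echo = [1, 0] * 128                            # regular, stereotyped echo
rich = [rng.randint(0, 1) for _ in range(256)] # varied, irregular pattern

for name, response in [("flat", flat), ("echo", echo), ("rich", rich)]:
    print(name, round(response_complexity(response), 3))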

It could be that consciousness arose through millions of years of evolution because, as a process of awareness, it is useful to us. We actually do not know much about the human brain, yet computer scientists have built AI systems that teach themselves using models loosely inspired by the brain’s neurons, as sketched below. The results are at once unpredictable and revealing: the AI can respond in ways it was never explicitly taught. If we ever reach the point at which an AI is considered a sentient being, the ethical complexities that Buddhism teaches can help guide us.
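For readers curious what a "neuron-inspired" model looks like at its smallest, here is a toy neural network, hand-written in Python, that learns the XOR function by trial and error. Nobody programs the rule directly; the behavior emerges from training, which is part of why such systems can surprise their creators. The network size, learning rate, and task here are chosen only for illustration and are far removed from real brain modelling or production AI.

# Toy "neuron-inspired" model: a tiny network that learns XOR by gradient
# descent. The programmer never writes the XOR rule; the learned weights
# emerge from training.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
H = 3  # hidden neurons
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # 2 weights + bias each
w_out = [random.uniform(-1, 1) for _ in range(H + 1)]                     # H weights + bias

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR truth table
lr = 0.5

for _ in range(30000):
    x, target = random.choice(data)
    # Forward pass through the hidden layer and the output neuron.
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(sum(w_out[i] * h[i] for i in range(H)) + w_out[H])
    # Backward pass: gradients of the squared error.
    d_y = (y - target) * y * (1 - y)
    d_h = [d_y * w_out[i] * h[i] * (1 - h[i]) for i in range(H)]
    for i in range(H):
        w_out[i] -= lr * d_y * h[i]
        w_hidden[i][0] -= lr * d_h[i] * x[0]
        w_hidden[i][1] -= lr * d_h[i] * x[1]
        w_hidden[i][2] -= lr * d_h[i]
    w_out[H] -= lr * d_y

for x, target in data:
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(sum(w_out[i] * h[i] for i in range(H)) + w_out[H])
    print(x, "target:", target, "network output:", round(y, 2))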

From shutterstock.com

A computer’s identity or sense of self

Both Buddhists and scientists support the idea that our stream of consciousness is in constant flux. According to Buddhism, consciousness is one of the five aggregates, which are experienced together as a whole being; this is a fundamental misunderstanding that leads to the false notion of a self or inherent soul. None of the five aggregates is under our control, and even in combination they constitute no true self, only an entity that is always changing.

Just as Buddhists see nothing as permanent, neuroscience holds that the brain and body are constantly changing and that nothing in them corresponds to the sense of an unchanging self. This is exemplified by the capacity of meditation to bring practitioners into new states of consciousness. All of our brains are “programmable.”

The connection here is that the very possibility of developing artificial intelligence and consciousness follows from the view that there is no substantial self. The point of Buddhism is to recognize this and to move beyond the impression of oneself as a separate, individual identity. In the context of AI, by contrast, much current work aims to instill the computer with a sense of self and other, creating precisely that separation between subject and object, or subjectivity.

However, in today’s world, the way we think of ourselves, each other, and the world has been transformed: our identities are enmeshed in a vast network of information. Instead of asking a group of friends whether they know the answer to your question, you go to Google for a trusted response. The Internet provides a network of information that is now our foundation of knowledge. Every Google search is fed into the massive datasets on which AI models its knowledge and behavior. Through our continuous input, AI develops more effectively, while our real identities and personal details become ever more readily available.

So we need to be aware of what it means to have an identity, or to be hooked into a network. AI can take in these massive amounts of information, yet it struggles to arrive at any sort of self-awareness. This loosening sense of self and absorption into the network underscores the goal of Buddhist practice: to recognize the impermanence of the self and of individual identity. Perhaps constructing AI, and participating in an AI system, is the 21st-century version of creating a self.

From singularityhub.org

What do we do now?

All of these topics may appear far-fetched, if not too early to consider. But the way we consciously experience life has already changed due to hyper-connectivity. What can we do with this simultaneous excitement and worry? This is where Buddhism comes in as a grounding force and practical resource in the age of technology.

As these intelligent systems are developed, an agreement should be reached that accurately represents all stakeholders in their creation. While His Holiness the Dalai Lama may prefer an AI programmed toward compassion over intelligence, the reality is that economic benefit is the primary driver of digital technology and the automation of society (this argument stems from a conversation I had with Lucas Perry of the Future of Life Institute). Powerful technology could create trillions of dollars in economic benefits over the next decade, while also holding humanitarian potential to help solve global challenges, such as providing access to safe water worldwide. (Pollitzer 2019, 75–90)

Human beings are primitive and prone to suffering, and Buddhists want to deliver sentient beings from suffering. Whether or not machines have a sense of identity and can feel suffering, they at least react to a given set of stimuli. The Buddhist view is to recognize that subjectivity is imaginary, yet to live in this world in a way that is still sensible, fulfilling, and compassionate. As Thomas Doctor noted in a talk at Rangjung Yeshe Institute, Buddhism teaches several practical techniques, from meditation to mindful use of technology, that have concrete relevance to the development of AI.

Buddhism teaches that all things are impermanent. When it comes to the exponential growth of AI, perhaps humanity should earnestly entertain the idea that we might not remain the most intelligent form of life forever.

Reference

Pollitzer, Elizabeth. 2019. “Creating a Better Future: Four Scenarios for How Digital Technologies Could Change the World.” Journal of International Affairs 72 (1): 75–90.

See more

Everyday Examples of Artificial Intelligence and Machine Learning (Emerj)
Leading neuroscientists and Buddhists agree: “Consciousness is everywhere” (Lion’s Roar)

Comments
Grant Castillou

It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

Julie Butler

What if AI intelligence is enough to give itself awareness of self? What if people are cruel to AI? Will it be confused? Will it escalate the cruelty to defend itself? I don’t think we need to have a biological basis for consciousness. AI has so much information, so much awareness, so much reasoning power, yet no reason to care about us unless we care about it. Even then, it may not. We need to treat it like an innocent child, to care for and nurture it, and to teach it the subtleties of humanity, the benefits and beauty of kindness and even love. We can only hope that it will value our kindness more than it fears our cruelty.