

In a World of Human Ignorance, Can Artificial Intelligence Help?

Image created by DALL-E 2 with prompt by the author

I think it’s really important that we explain, to educate people that this is a tool and not a creature.

(Sam Altman, OpenAI CEO on ChatGPT)

In a recent conversation with the MIT-based computer scientist and podcaster Lex Fridman, OpenAI CEO Sam Altman covered a number of topics, including his view on the potential sentience of current and future artificial intelligence (AI) systems. While Altman has at times expressed deep concern about AI’s potential to do harm in the world, in this interview he was clear that what AI does will ultimately depend on what humans choose to use it for, not on whether it might come to think for itself in the way humans and animals do, adding: “I think it is dangerous to project ‘creatureness’ onto a tool.” (YouTube)

As we continue to watch the growth of AI, it is becoming increasingly clear that this is just one danger among many. As a Buddhist deeply concerned with understanding the nature of things, and as a person deeply concerned with suffering across sentient life, I have sought to understand AI’s potential for good and for harm. Suffice it to say, I have only scratched the surface. But what I have found both deflates the “hype” claims about AI’s potential and soothes any killer-robot worries that might take flight in my imagination.

The “creatureness” that Altman refers to is likely another way of defining sentience or consciousness. These are distinct terms, but closely related in modern philosophical conversations. In fact, Robert Van Gulick, writing in the Stanford Encyclopedia of Philosophy, offers sentience as one of the definitions of consciousness. Other definitions include wakefulness, specifying that a conscious being is only truly conscious when awake and using that awareness, and what it is like, following the famous 1974 paper by Thomas Nagel in which we confront the limitations of our own understanding and imagination in trying to imagine what it is like to be a bat.

In Buddhist thought, consciousness is most commonly found in the list of the five aggregates (Skt. skandhas). These are described by Amber Carpenter:

‘Form’ (rūpa), is the physical; ‘feeling’ (vedanā) is that portion of experience that can be pleasant and painful; perception or cognition (saṁjñā) is that part of experience that can be true or false; the saṁskāras are a capacious category, including most importantly volitions and various emotions; finally, consciousness (vijñāna) is awareness of [the] object, the union of content with the mental activity that has content.

(Indian Buddhist Philosophy 29)

These are best thought of as ways of experiencing rather than as ontological foundations.

Is consciousness special?

The question of how to determine who or what is conscious has dogged Western philosophers for some time. Rene Descartes (1596–1650) is often cited as the first thinker to apply rigorous philosophical method to the question. A devout Catholic, Descartes determined that self-awareness was key to consciousness and believed that non-human animals fell short of this marker. Christianity had long offered a clear demarcation between humans, with souls, and animals, without, and Descartes did not break with this aspect of the tradition.

Buddhists, however, have generally attributed consciousness to animals. They, too, are capable—in more limited ways—of thinking and carrying out intentional actions (karma) that will impact their lives and future rebirths.

With the rise of the sciences and materialist philosophies, consciousness has once again been called into question. If we accept the premise that we are ultimately entirely physical in nature, then how does consciousness arise? Why does it arise in us and not in rocks? Most answers have hinged on the sheer complexity of our brains and bodies, holding that out of this complexity consciousness has “arisen.” However, there are multiple competing theories of how this works, as well as materialists who claim that consciousness may be beyond our ability to fully understand.

Driving the complexity a bit further is the realization that humans often project intelligence or agency onto a world where it likely doesn’t exist. We develop feelings, for instance, for characters in books, knowing that they are fictional. And when Tom Hanks’ character in Cast Away developed a bond with a volleyball, we empathized, knowing that we too might well do the same. In a study dating to the 1940s, researchers showed an animation of two triangles and a circle moving around the shape of a house. When describing what they saw, nearly all participants made up “a social plot in which the big triangle is seen as an aggressor. Studies have shown that the movements of the shapes cause automatic animistic perceptions.” (Carnegie Mellon)

What is AI?

Defining AI is just as contentious as defining consciousness. On the one hand, AI can be anything that solves a complex problem. The auto-suggestions you receive when you’re typing search terms into Google are generated by a form of AI. More complex forms of AI play chess, give driving directions on your phone, and provide advertisements that match your interests and behaviors.
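
To see how low that bar can be, consider a minimal sketch of an auto-suggest feature, written here in Python purely for illustration (the stored queries and counts below are invented, and a real search engine is of course far more sophisticated). A few lines that rank past searches by a typed prefix already count, under the broad definition, as AI:

# A toy auto-suggest: rank stored queries that begin with what the user
# has typed, most frequent first. The queries and counts are invented.
past_queries = {
    "weather today": 120,
    "weather tomorrow": 85,
    "what is ai": 60,
    "what is consciousness": 40,
}

def suggest(prefix, limit=3):
    matches = [q for q in past_queries if q.startswith(prefix.lower())]
    return sorted(matches, key=lambda q: past_queries[q], reverse=True)[:limit]

print(suggest("wha"))  # ['what is ai', 'what is consciousness']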

All of these rely on increasingly complex programmed algorithms. The newest forms of AI—the ones generating the most excitement—use what are called “neural networks.” Philosophers point out that this name may itself be misleading, as these networks mimic only simplified aspects of biological neurons. Nonetheless, they are capable of changing their responses over time, mimicking the process of learning in humans.
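
To see how modest this “learning” really is, here is a minimal sketch in Python (purely illustrative, not how any production system is built): a single artificial neuron that nudges three numbers up and down until it reproduces a trivial pattern. The “knowledge” it acquires is nothing more than those three adjusted numbers.

# A single artificial "neuron" learning the logical OR pattern.
# Real neural networks stack millions of such units, but the basic
# operation is the same kind of repeated arithmetic adjustment.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0   # the network's entire "memory"
learning_rate = 0.1

for _ in range(50):                      # repeat over the data many times
    for (x1, x2), target in examples:
        prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - prediction
        # Nudge each weight in the direction that reduces the error.
        w1 += learning_rate * error * x1
        w2 += learning_rate * error * x2
        bias += learning_rate * error

print(w1, w2, bias)   # three numbers; no creature in sight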

Again, we should be careful about the terms we use, as “learning” is arguably something that requires consciousness. A human learns. A dog learns. Perhaps even a goldfish learns. But the ball in a pinball machine doesn’t learn to hit the correct spots—everything it does depends on prior inputs from the player and on its interactions with the machine in which it functions.

Critics, such as the former Google researcher Timnit Gebru, point out that this is precisely where problems arise in large language models (LLMs)—AI models such as ChatGPT. Gebru noted that the training data fed to these large AI programs was biased and at times hateful, leading to the concern that the software would replicate these biases and this hateful speech in its output. In 2016, a Twitter bot developed by Microsoft quickly began to send racist tweets after it was programmed to learn from fellow users of the platform. It was taken down by its developers within 24 hours of launch. (The New York Times)
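
Gebru’s concern can be made concrete with a toy example. The sketch below is a crude illustration in Python with invented sentences; real LLMs are incomparably larger and more sophisticated, but the underlying point holds: whatever patterns the training text contains, flattering or hateful, are exactly what the model hands back.

# A toy next-word predictor (a bigram model) built from invented text.
# It can only echo patterns present in whatever it was trained on.
from collections import defaultdict, Counter
import random

training_text = (
    "the tool is useful . the tool is useful . the tool is dangerous ."
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def generate(start, length=6):
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # Sample the next word in proportion to how often it appeared.
        word = random.choices(list(options), weights=list(options.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))   # mostly "the tool is useful ...", because that is what it saw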

Liberation or entrenched power?

This raises a second, related problem. Proponents of the newest forms of AI claim that these systems will have amazing, borderline-miraculous powers. They certainly have great capability. But, as investor guru Warren Buffett said at Berkshire Hathaway’s recent annual meeting: “With AI . . . it can change everything in the world, except how men think and behave. That’s a big step to take.” (EFT Central)

Social activist Naomi Klein elaborates:

There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life.

(The Guardian)

Klein notes that this is not how AI is being launched. Instead, large for-profit corporations are being allowed to copy enormous amounts of text and images created by humans—without permission or attribution—in order to produce their own mimicked results.

Many of the grand promises, it is argued, are simply driving hype. This hype, in turn, increases the valuations of companies making AI. Even the negative hype, following the old dictum, “There’s no such thing as bad press,” can have the effect of drawing greater attention to the corporations.

As American cognitive scientist Gary Marcus noted last year, we’re no longer hearing from academic AI researchers. We’re hearing more and more from corporate CEOs: “And corporations, unlike universities, have no incentive to play fair. Rather than submitting their splashy new papers to academic scrutiny, they have taken to publication by press release, seducing journalists and sidestepping the peer-review process. We know only what the companies want us to know.” (Scientific American)

This is dangerous. But it also follows a path that has become increasingly common for new technology in recent years. Sometimes this has led to outright fraud, such as the scandals surrounding Elizabeth Holmes’ company Theranos, or Sam Bankman-Fried’s FTX. Sometimes it has simply led to much-vaunted hype that didn’t pan out, as in the cases of Google’s augmented-reality glasses, Meta’s metaverse virtual reality, or non-fungible tokens (NFTs).

A tentative solution

Living in a time of accelerated technological progress is exciting. In so many ways we are fortunate, and much of this new technology can and will reduce human suffering when used wisely. Wisdom arises not from knowing or using the latest technology best, but from deliberating on, analyzing, and practicing traditions that represent thousands of years and millions of human lives, each generation refining, altering, and passing on its own best accomplishments. For Buddhists, this means asking serious questions about the promises and potential pitfalls of AI in our ethical, meditative, and philosophical lives.

My friend Douglass Smith has some intriguing videos on his YouTube channel exploring AI and aspects of Buddhist thought and practice. I encourage you to check them out. Here is one exploring key values we might want in future AI and how we might help get there.

While those of us with humanities backgrounds might take a number of different approaches to AI and other new technologies, it is essential that our voices be part of the conversation. As Leon Wieseltier, editor of the humanistic journal Liberties, writes:

There is no time in our history in which the humanities, philosophy, ethics and art are more urgently necessary than in this time of technology’s triumph. Because we need to be able to think in nontechnological terms if we’re going to figure out the good and the evil in all the technological innovations. Given society’s craven worship of technology, are we going to trust the engineers and the capitalists to tell us what is right and wrong?

(The New York Times)

In the spirit of non-technological thinking, I’ll borrow a conclusion often utilized by fellow BDG columnist mettamorphsis: a song.

There is much more to say, from the different Buddhist schools’ interpretations of consciousness to what might come next in the evolution of machine-based creativity. We know that the future is wide open. But we also know that limits and shortcomings tend to arise, even for the greatest inventions and innovations.

Amana microwave advertisement from the 1970s. From cambridge.org

I’ll end with one more video, this one from Adam Conover, a philosophy graduate, interviewing Emily Bender, a linguist from the University of Washington, and Timnit Gebru, former Google researcher and founder of DAIR, the Distributed AI Research Institute.

References

Carpenter, Amber. 2014. Indian Buddhist Philosophy. New York: Routledge.

See more

Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367 (YouTube)
Doug’s Dharma (YouTube)
Don’t Kill ‘Frankenstein’ With Real Frankensteins at Large (The New York Times)
Microsoft Created a Twitter Bot to Learn From Users. It Quickly Became a Racist Jerk. (The New York Times)
Consciousness (Stanford Encyclopedia of Philosophy)
Psychology: Perceiving Humanlikeness (Carnegie Mellon)
Notable Takeaways from Berkshire Hathaway’s Annual Meeting (EFT Central)
AI machines aren’t ‘hallucinating’. But their makers are (The Guardian)
Artificial General Intelligence Is Not as Imminent as You Might Think (Scientific American)
Statement from the listed authors of Stochastic Parrots on the “AI pause” letter (DAIR)

Related features from BDG

Dharma, Perfect Knowledge, and Artificial Intelligence
Does Artificial Intelligence Have Buddha-nature?
Scaling Intelligence in an AI-dominated Future
ChatGPT + Socially Engaged Buddhism, Part I
The Rise of Artificial Intelligence and What it Means for Our Jobs

More from Western Dharma by Justin Whitaker

BDG Special Issue: Digital Dharma


1 Comment

Grant Castillou (10 months ago):

It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461