Along with promises of a technologically exciting future, there are some potentially frightening aspects of recent developments in AI technology. One widely cited example is that of “deep fakes,” which are created with multimedia software using a popular machine-learning technique known as generative adversarial networks (GANs) to distort what people do or say in a realistic manner. GANs demonstrate how digital technologies can be used to manipulate or completely distort reality.
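As a rough illustration of the adversarial idea behind GANs, here is a minimal, hypothetical sketch in plain Python. The one-dimensional “data,” the toy generator and discriminator, and their parameters are simplified stand-ins of my own (not a real deep-fake system): the generator produces fakes, the discriminator scores real versus fake, and each side's loss rewards defeating the other.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def discriminator(x, w):
    """Toy discriminator: logistic score that sample x is 'real'."""
    return sigmoid(w * x)

def generator(z, theta):
    """Toy generator: shifts noise toward the real data."""
    return z + theta

# "Real" data and generated fakes (the distributions are illustrative).
real = [random.gauss(4.0, 1.0) for _ in range(256)]
fake = [generator(random.gauss(0.0, 1.0), theta=0.0) for _ in range(256)]

w = 1.0  # fixed discriminator parameter, for the sketch only

# Discriminator loss: reward high scores on real data, low scores on fakes.
d_loss = -sum(math.log(discriminator(x, w)) for x in real) / len(real) \
         - sum(math.log(1.0 - discriminator(x, w)) for x in fake) / len(fake)

# Generator loss: the generator "wins" when its fakes score as real.
g_loss = -sum(math.log(discriminator(x, w)) for x in fake) / len(fake)
```

In an actual GAN, both sets of parameters are updated by gradient descent in alternation; the falsification power comes from the generator gradually learning to produce fakes the discriminator can no longer distinguish from real data.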
This ability to distort what people say has long existed in lower-tech realms, shaping much of history: gossip, false information, the deliberate seeding of misinformation, and out-of-context propaganda do not require advanced technology to influence and shape politics or economics. It has always been relatively easy to spoof an email address, enabling criminals to send fake emails on behalf of legitimate account owners.
Obviously, the more advanced the technology, the more easily, quickly, and convincingly evidence can be falsified and deployed, increasing the speed and impact of misinformation. Falsification is an art in itself, and an increasing number of technologists are becoming its masters. Meanwhile, owing to a reproducibility crisis in machine learning, certain machine behaviors are increasingly difficult to replicate and demonstrate.
Most facts in politics, history, and science as portrayed by the media are false to some degree, sometimes simply because they are incomplete or imperfectly reported. On a more subtle level, AI may one day facilitate deeper distortions. For example, work is being done on mimicking, and one day perhaps manipulating, how the human mind connects logical axioms to behavior observed in the world.
In logic, axioms are statements assumed to be true; they are central to information processing and to reaching valid conclusions. Axioms, truth values, and logic are the backbone of all computer programs, including AI. The ability to capture “truth” and to represent it accurately within a program is central to computer science and engineering.
Among the concerns emerging from current developments in technology are: 1) axioms are not actually fact-checked; and 2) anything labeled as an axiom can be automatically treated by the system as a true fact, which may not be the case. A classic example used in teaching first-order logic involves a chain of reasoning built entirely on the statement “all birds fly.” (University of Texas) This statement is actually not true.
After years of using this example in classrooms and in scholarly discourse, one of the information scientists involved discovered, entirely by chance, that not only is it untrue that all birds fly, but also that some animals that do fly are not birds, such as bats. Entire reasoning chains can reach wrong conclusions because of a single unchecked and easily falsified generalization.
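This failure mode can be made concrete. Below is a minimal sketch of forward-chaining inference, where the facts, the encoding, and the penguin example are my own illustrations rather than anything from the cited classroom material. The unchecked axiom “all birds fly” yields a conclusion that is formally valid yet factually false:

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules of the form 'antecedent(x) implies
    consequent(x)' until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            for pred, subj in list(facts):
                if pred == antecedent and (consequent, subj) not in facts:
                    facts.add((consequent, subj))
                    changed = True
    return facts

rules = [("bird", "flies")]  # the unchecked axiom: all birds fly
facts = {("bird", "tweety"), ("bird", "pingu")}  # pingu is a penguin

derived = forward_chain(facts, rules)
# The chain is mechanically valid, yet ("flies", "pingu") is false
# in the real world: the error lives entirely in the unchecked premise.
```

The inference engine itself is blameless; no amount of deductive rigor can repair a false axiom, which is exactly why fact-checking the premises matters.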
A lack of fact-checking in reasoning is fatal in human decision-making as well as in AI. How many AI programs are being built that rely on unchecked facts? Fact-checking is non-trivial, cannot easily be automated, and can only be done well by expert humans—and even then with some margin of error (because humans err).
These days, any clever kid can program and potentially deploy AI, including malware. Yet ensuring the accuracy of facts in the age of the reproducibility crisis is a near-impossible task, even for the most knowledgeable of experts. Those caught up in the AI frenzy are completely overlooking integrity, the validity of truth, and fact-checking.
There is a trend toward decoupling logical assertions and representation.* This decoupling allows AI to be developed with creative abilities. From a computational and scientific viewpoint, an intelligent algorithm capable of generating original, unique faces of people who do not exist in the real world, or of designing an original artistic pattern, is indeed amazing.
Given humanity's inability to solve real-world problems, ranging from petty crime to disease, humanitarian and environmental disasters, disinformation, and bad politics, shouldn't AI development prioritize addressing and reducing such problems? AI systems should be designed to enhance, facilitate, promote, and leverage human intelligence, rather than simply mimic its most salient features, for those features include degrees of fallibility and error, a recognition of mortality, and even shutdown, as in cases of traumatic shock or psychological or psychiatric disorder.
Vast populations of gifted, intelligent individuals either do not have the resources necessary for an education, or do gain advanced degrees yet are unable to find employment in a world that capitalizes on superficiality and ignorance.** Why are we rushing to develop AI when so much human intelligence is wasted? Probably because it is scientifically exciting, and nowadays technically feasible on a wide scale.
Now we can turn to Buddhism. Many interesting and valuable papers and articles have been published on the topic of Buddhism and AI. Let us look at preliminary Buddhist definitions of key terms, including intelligence, truth, and logic.
Intelligence: What is human intelligence? Most animals are intelligent, capable of amazing cognitive feats, and in some cases, alongside their animal instincts, animals are also capable of compassion. What distinguishes humans from animals is speech (logos), which is closely related to higher cognition and intellectual, discriminative, and communication ability. The concept of buddhi has vast meaning and a multitude of interpretations, including intellect, understanding, intelligence, and talent. For a better and more useful understanding of intelligence for the purpose of AI development, we might benefit from taking into account the point of view of ancient spiritual traditions.
Truth: In classical Western logic (a branch of both mathematics and philosophy), truth is necessary for reasoning. In Buddhism, adherence to truth is one of the precepts (part of the Noble Eightfold Path). As to the question of what truth is, even the best-qualified experts cannot answer with a single, narrow definition. Truth can be a lot of things, as His Holiness Chamgon Kenting Tai Situpa told an audience at Stanford University in 2016.***
Logic: Buddhist logic is based on ancient Indian logic, of which there is a vast and profound body of knowledge. Its existence should be known, even if only cursorily, by all computer scientists. Although Buddhist logic shares the premise-consequence constructs of Western first-order logic, it may at times come across as questionable by Western standards. A simple example is the episode from one of the previous lives of the Buddha in which he (as a bodhisattva) gave up his body to feed a starving tigress. In Western logical terms, this may seem like a very poor decision (don’t try this at home!).****
Despite having embraced Buddhism many years ago, I have struggled to come to terms with the soundness of the bodhisattva’s decision to become animal fodder. Would a wise and compassionate practitioner not be able to better serve the animal realm by staying alive and perhaps becoming devoted to feeding wild animals and the poor for the rest of his or her life?
Although many parables in Buddhism are intended to be symbolic or metaphorical, some Buddhist logic may seem counterintuitive at times (or even dumb!), and perhaps it should be taken with a pinch of salt. I guess the lesson for AI is that there may be different behavioral paths and different logical choices, some of them inscrutable to the average unenlightened mind, such as practicing extreme, unconditional generosity, that may yield unexpected outcomes, such as becoming a Buddha. In principle, that would mean being a better person, with potential perks such as supercognition and the extraordinary traits referred to as siddhis.
In sum, humans could greatly benefit from advanced technologies that help us develop our vast potential, beyond pure mechanical reasoning and the automation of vision and speech functions. We should distinguish between AI that pretends to be human (trying to make robots look and behave more like us, even though humans remain rugged and unevolved in the spiritual sense) and AI that extends and supports the development and cultivation of human intelligence and human potential, with the understanding that humans are the animals with the highest spiritual potential. That potential is still largely underdeveloped and little understood outside of the Buddhadharma.
** Rising number of jobless PhD scholars causes concern (University World News)
*** “Truth in a Multi-Religious World” (Stanford University)
**** “The Precious Discourse on the Blessed One’s Extensive Wisdom That Leads to Infinite Certainty” (84000: Translating the Words of the Buddha)