

The Mathematics of Altruism

It may come as a surprise to some that mathematics can venture into the world of ethics and lend support to fundamental moral principles such as kindness and forgiveness – the components of altruistic behavior. How can reason establish such categorically remote principles? Results from a variety of ingeniously designed experiments, which analyze the dynamics of interacting groups of individuals and their behavioral strategies, yield conclusions that have long been professed by most ancient moral systems, including those inherent in religious teachings.

By way of example, I'll use the results of a series of experiments performed by the political scientist Robert Axelrod and subsequently published with the evolutionary biologist W. D. Hamilton in the journal Science in 1981. But before we can appreciate the results, I'll need to outline the model on which the experiments were based. The model is commonly referred to as the Prisoner's Dilemma, and it exposes the dynamics of mutual cooperation and cheating.

In the original version the dilemma has the following form: two prisoners, A and B, are being independently interrogated for a crime they have allegedly committed. If both prisoners refuse to speak (cooperate), no evidence can be brought against them and they both receive a mild term, say six months. If both blame each other (defect), they each get two years as punishment for their inconsistent testimonies. If, however, A remains silent while B blames A, then B goes free whilst A receives five years. The punishments and rewards are symmetric if A blames B and B remains silent.

We can easily reformulate the problem with a point system: if both cooperate (remain silent), they receive 3 points each; if both defect (blame each other), they receive 1 point each; and if one defects while the other cooperates, the untrustworthy partner in crime receives 5 points and the cooperator receives nothing.
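The point system is compact enough to write down explicitly. Here is a minimal sketch in Python (the table follows the article; the names and move encoding are my own):

```python
# Payoff table for the Prisoner's Dilemma, from one player's perspective.
# Keys are (my_move, opponent_move); "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation: both stay silent
    ("C", "D"): 0,  # I cooperate, the other defects: I get nothing
    ("D", "C"): 5,  # I defect against a cooperator: the biggest prize
    ("D", "D"): 1,  # mutual defection: both blame each other
}
```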

What shall one do? This is the dilemma: to cooperate, or to defect? If we treat the payoff as a random variable and assume the other player is equally likely to cooperate or defect, then calculating the expected value of each strategy makes it clear that defecting is the rational way to go, since its expected payoff is twice that of cooperating (see below).

In each pair below, the first letter denotes A's move and the second B's; on the right we consider only A's payoff values. The scenario is symmetric, so one perspective suffices.

If A cooperates:

CC: 3   A and B both cooperate (A gets 3 points)

CD: 0   A cooperates, B defects (A gets 0 points)

Expected value of A's payoff = (1/2)*3 + (1/2)*0 = 3/2

If A defects:

DD: 1   A and B both defect (A gets 1 point)

DC: 5   A defects, B cooperates (A gets 5 points)

Expected value of A's payoff = (1/2)*1 + (1/2)*5 = 1/2 + 5/2 = 3
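The same arithmetic can be verified in a few lines. This sketch assumes, as above, that the opponent is equally likely to cooperate or defect; the payoff table is repeated so the snippet stands alone:

```python
# Expected payoff of each move, assuming the opponent cooperates
# with probability 1/2 (the assumption behind the figures above).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def expected_payoff(my_move, p_opponent_cooperates=0.5):
    p = p_opponent_cooperates
    return p * PAYOFF[(my_move, "C")] + (1 - p) * PAYOFF[(my_move, "D")]

print(expected_payoff("C"))  # 1.5 -- cooperating
print(expected_payoff("D"))  # 3.0 -- defecting pays twice as much
```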

Looking at the expected values of the two alternatives, defecting is more (doubly!) lucrative and hence naturally tempting. A rational, self-interested player, according to a standard view, should prefer a higher expected payoff to a lower one (Stanford Encyclopedia of Philosophy). But is that of benefit, in the long run, to a whole population of defectors? This question was investigated by Robert Axelrod. He announced a competition inviting strategies of interaction for the Prisoner's Dilemma scenario; the one that accrued the most points would be declared the winner. Two tournaments were run, involving over 60 computer programs, which were set to play against each other over a large number of rounds. Repeated rounds were necessary to simulate the temporal character of evolution, from which generations emerge.
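Axelrod's original entries are not reproduced here, but the round-robin setup itself is easy to sketch. In the Python toy below, the particular strategies, the round count, and the scoring details are illustrative assumptions of mine, not the tournament's actual field:

```python
import itertools

# Payoffs as (player A, player B) points for each pair of moves.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(opp_history):
    return "D"

def tit_for_tat(opp_history):
    return "C" if not opp_history else opp_history[-1]

def grudger(opp_history):
    # Kind but unforgiving: cooperates until wronged once, then always defects.
    return "D" if "D" in opp_history else "C"

def play_match(strat_a, strat_b, rounds=200):
    """Play an iterated Prisoner's Dilemma; return both players' totals."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_b)  # each strategy sees only the opponent's past
        move_b = strat_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tournament(strategies):
    """Round-robin: every strategy plays every other; return total scores."""
    totals = {s.__name__: 0 for s in strategies}
    for a, b in itertools.combinations(strategies, 2):
        score_a, score_b = play_match(a, b)
        totals[a.__name__] += score_a
        totals[b.__name__] += score_b
    return totals

# The defector finishes last:
# {'always_defect': 408, 'tit_for_tat': 799, 'grudger': 799}
print(tournament([always_defect, tit_for_tat, grudger]))
```

Even in this tiny field the pure defector finishes last, while the two kind strategies tie at the top; in Axelrod's far larger field, one kind strategy won outright, as described below.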

For the purpose of the analysis, some terms were defined: a kind program was one that never defected first, and a forgiving program was one which, having previously been wronged by some opponent, held no grudge indefinitely. The results were quite surprising – it turned out that in this jungle of cutthroat interactions, the top-scoring 15 of the 60-odd programs were kind and forgiving, whereas the nasty and opportunistic defectors, unwilling to cooperate (and holding grudges), turned out to be the losers.

Incidentally, the winner was a program named Tit for Tat, submitted by the psychologist and game theorist Professor Anatol Rapoport. Tit for Tat was a kind program that never defected first; however, if it was wronged by another, it would remember that encounter and refuse to cooperate the next time it met the previous defector. But it would cooperate again if the other was willing to change its ways, so in this sense Tit for Tat was also forgiving.
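The whole strategy fits in a few lines. Here is a minimal sketch of the rule just described (the function name and 'C'/'D' encoding are my own):

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first move; thereafter copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

print(tit_for_tat([]))               # 'C' -- kind: never defects first
print(tit_for_tat(["C", "D"]))       # 'D' -- retaliates after being wronged
print(tit_for_tat(["C", "D", "C"]))  # 'C' -- forgives once the other reforms
```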

The key result that follows from these data is that defectors can succeed only temporarily, while taking advantage of a population of kind individuals. Once the defectors spread, however, they start losing to each other's wretched ways. A stable population is one consisting of kind and forgiving members. It shouldn't come as a surprise that cooperative tendencies would be selected for within a group in the long run, since this is simply the members' way of optimizing the energy and resources they extract from their environment. This is what mathematics tells us – in the long run, it is simply beneficial to be kind and forgiving.
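That claim about stability can be illustrated with a toy simulation. The replicator-style update below is my own simplification, not Axelrod's setup: each generation, a strategy's share of the population grows in proportion to the payoff it earns in repeated encounters.

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
ROUNDS = 50  # length of each repeated encounter

def tit_for_tat(opp_history):
    return "C" if not opp_history else opp_history[-1]

def always_defect(opp_history):
    return "D"

def match_score(strat_a, strat_b):
    """Total payoffs to each side over one repeated encounter."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(ROUNDS):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def next_share(x):
    """One generation: Tit for Tat's share x grows with its relative fitness."""
    tt, _ = match_score(tit_for_tat, tit_for_tat)
    td, dt = match_score(tit_for_tat, always_defect)
    dd, _ = match_score(always_defect, always_defect)
    fit_tft = x * tt + (1 - x) * td   # expected score of a Tit for Tat player
    fit_def = x * dt + (1 - x) * dd   # expected score of a defector
    mean = x * fit_tft + (1 - x) * fit_def
    return x * fit_tft / mean

x = 0.5  # start with a half-kind population
for generation in range(40):
    x = next_share(x)
print(round(x, 3))  # approaches 1.0: the defectors die out
```

In this toy setup, even a small initial share of kind members is enough for them to take over the population.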

Let's take an instance of this dilemma appearing in nature. Seagulls need to groom themselves ceaselessly to avoid infestation by parasites (mostly ticks). They can access most parts of their body except the tops of their heads, so there they need to rely on the kindness and cooperative spirit of other members of the flock. But grooming another bird takes up precious energy – how do those birds behave? What strategy has proven the most effective and stable in aiding the survival of seagulls? A cooperative, reciprocal strategy, of course! Nature's payoff currency, however, is measured not in points but in offspring. This is why most species, including primates, display tendencies generally favoring cooperation.

But we know this already on an intuitive level, and we have been implementing those strategies all along – our conscience (another evolutionary mechanism?) reminds us when we stray from them. Mathematics, in this case, appears only to rigorously vindicate our intuitions, demonstrating the existence of some fundamental laws of moral conduct. I believe that when we say moral conduct, we unknowingly refer to behavioral tendencies that guarantee the long-term stability of populations.

I see the teachings of world religions as an analysis of human life and an attempt to help. They intend to promote unselfish behavior, love and forgiveness. When you look at mathematical models for the evolution of cooperation you also find that winning strategies must be generous, hopeful and forgiving. In a sense, the world’s religions hit on these ideas first, thousands of years ago.

Now for the first time, we can see these ideas in terms of mathematics. Who would have thought that you could prove mathematically that, in a world where everybody is out for himself, the winning strategy is to be forgiving, and that those who cannot forgive can never win?

(New Scientist 19 March 2011)

Martin Nowak (Professor of Mathematics and Biology at Harvard University)

