Introduction:
Prisoner's Dilemma
by Christoph Hauert, Version 1.1, September 2005.
The Prisoner's Dilemma, now well established as the predominant metaphor for studying cooperative interactions, made its first appearance in an experimental bargaining setup devised by Melvin Dresher and Merrill Flood, both working at the RAND Corporation at the time. Their aim was to illustrate that the Nash equilibrium is not necessarily the outcome realized in experimental and real-world settings. Only some time later did the game receive its name and its anecdotal story of two suspects from Albert Tucker.
What should the suspects do? According to rational reasoning, a suspect should confess and implicate the other, because by doing so he is better off no matter what the other suspect does: if the other does not confess, our suspect goes free instead of serving one year, and if the other does confess, our suspect goes to prison for 10 years as opposed to the 20 years he would face had he not implicated the other. Since the charges are symmetrical, the rational reasoning is identical for both suspects. Thus, both will confess and spend 10 years behind bars instead of just a single year had they both refused to give evidence - hence the dilemma. Fortunately for animal and human societies, this is not what happens in general. Cooperative behavior does have a chance, but whether it prevails depends delicately on the circumstances.
A subsequent milestone, and the foundation of the game's current fame, are Robert Axelrod's famous computer tournaments, in which submitted strategies competed in iterated Prisoner's Dilemma interactions. Axelrod rephrased the game in a more intuitive context to describe cooperative interactions, and his formulation has since become the standard. The outcomes of a game with two players, each having two behavioral options - to cooperate or to defect - are easily summarized in a 2×2 payoff matrix. Since the game is symmetrical, only the payoffs for the column player need to be shown.
A. For mutual cooperation (C) each player gets the reward R, and for mutual defection (D) each gets the punishment P. If one player cooperates and the other defects, the cooperator is left with the sucker's payoff S while the defector gets away with the temptation to defect T. The payoffs to the column player are:

| | C | D |
|---|---|---|
| C | R | T |
| D | S | P |

B. In order to qualify as a Prisoner's Dilemma, the payoff values must satisfy the ranking T > R > P > S. Sometimes, e.g. in repeated encounters, it is additionally required that 2R > T + S, which means that mutual cooperation returns the highest collective payoff. Both inequalities are satisfied by Axelrod's payoff values (T = 5, R = 3, P = 1, S = 0).

C. In a biological context it is useful to think in terms of the costs and benefits of behavioral patterns. The act of cooperation incurs a cost c to the donor and provides a benefit b to the recipient, with b > c > 0. This reduces the number of parameters to two (R = b − c, S = −c, T = b, P = 0), and the resulting payoffs again satisfy both inequalities.
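Both conditions are easy to check mechanically. The following sketch (in Python; the cost-benefit values b and c below are chosen purely for illustration) verifies the ranking T > R > P > S and, for repeated play, 2R > T + S:

```python
def is_prisoners_dilemma(T, R, P, S, repeated=False):
    """Check T > R > P > S; for repeated encounters also require 2R > T + S."""
    ranking = T > R > P > S
    if repeated:
        return ranking and 2 * R > T + S
    return ranking

# Axelrod's tournament values
print(is_prisoners_dilemma(T=5, R=3, P=1, S=0, repeated=True))   # True

# Cost-benefit parametrization: R = b - c, S = -c, T = b, P = 0 with b > c > 0
b, c = 3.0, 1.0   # illustrative values (assumption)
print(is_prisoners_dilemma(T=b, R=b - c, P=0.0, S=-c, repeated=True))  # True
```

Note that the cost-benefit form satisfies both inequalities for any b > c > 0, which is why it is a convenient two-parameter version of the game.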
Axelrod's tournaments were won by a particularly simple strategy submitted by Anatol Rapoport. The winner was called Tit-for-Tat: start by cooperating, and then do whatever the opponent did in the previous move. Tit-for-Tat is a cooperative strategy, but it also retaliates against defectors. The mechanism that promotes cooperation was nicely summarized by Rapoport.
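Tit-for-Tat's rule fits in a single line of code. The following sketch (helper names are hypothetical; payoffs T = 5, R = 3, P = 1, S = 0 assumed) pits it against unconditional defection and against itself over ten rounds:

```python
def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

# Payoff to the first player for each move pair (T=5, R=3, P=1, S=0 assumed)
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(strat_a, strat_b, rounds=10):
    """Play an iterated game and return the two total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (30, 30)
print(play(tit_for_tat, always_defect))   # (9, 14)
```

Note that Tit-for-Tat loses this individual match (9 vs 14) - it never scores more than its current opponent - yet it won Axelrod's tournaments on aggregate by eliciting mutual cooperation from cooperative strategies.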
Thus, repeated interactions are one possible and very important way to overcome the dilemma of the Prisoner's Dilemma. Since Axelrod's work, a huge scientific literature has grown up around strategic interactions in the iterated Prisoner's Dilemma. Here, and on all other pages, iterated games are not explored any further. Instead, other mechanisms capable of promoting and maintaining cooperation are discussed. These include voluntary participation, the effects of population structure, as well as reward and punishment.
Further information on the Prisoner's Dilemma is provided in separate interactive tutorials: either in the context of 2×2 games, or with respect to the effects of population structure in the Prisoner's Dilemma as compared to the Snowdrift game (Hauert, Ch. & Doebeli, M. Nature 2004; see separate section).
Examples
Example 1: The Prisoner's Dilemma in well-mixed populations.

[Image: launches interactive simulation]
Well-mixed populations

In well-mixed populations, individuals interact with randomly chosen partners in a single (one-shot) Prisoner's Dilemma interaction. An individual's fitness corresponds to the average payoff achieved over a certain number of interactions. Each individual reproduces at a rate proportional to its fitness and passes its strategy on to its offspring. In this setting cooperators are doomed, and the population invariably evolves to a homogeneous state of all defectors. Verify this and run your own simulation by clicking on the image.
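The doom of cooperators in well-mixed populations can be illustrated with the replicator dynamics, where the frequency x of cooperators grows or shrinks according to the difference between a cooperator's expected payoff and the population average. A minimal sketch (payoffs T = 5, R = 3, P = 1, S = 0 and simple Euler integration are assumptions for illustration):

```python
T, R, P, S = 5, 3, 1, 0   # payoff values (assumption)

def replicator_step(x, dt=0.01):
    """One Euler step of dx/dt = x (f_C - f_bar) for cooperator frequency x."""
    f_c = R * x + S * (1 - x)          # expected payoff of a cooperator
    f_d = T * x + P * (1 - x)          # expected payoff of a defector
    f_bar = x * f_c + (1 - x) * f_d    # population average payoff
    return x + dt * x * (f_c - f_bar)

x = 0.9   # start with 90% cooperators
for _ in range(5000):
    x = replicator_step(x)
print(f"{x:.6f}")   # 0.000000 - defectors take over
```

Since f_d − f_c = (T − R)x + (P − S)(1 − x) is positive for any x whenever T > R and P > S, defectors always outperform cooperators on average, and x decays to zero regardless of the initial frequency.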
Example 2: The Prisoner's Dilemma in lattice populations.

[Image: launches interactive simulation]
Lattice populations

Instead of randomly matching individuals, one can approximate spatial extension by placing every individual on the site of a lattice and confining interactions to the nearest neighbors. In contrast to well-mixed populations, cooperators may now survive through cluster formation and co-exist with defectors. Small clusters of cooperators move around in a sea of defectors, resembling a branching and annihilating random walk (which gives rise to interesting critical phase transitions).
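The mechanics of such a spatial game can be sketched with a minimal cellular-automaton model. All details below are assumptions for illustration: a small lattice with periodic boundaries, a von Neumann neighborhood, and a deterministic "imitate the most successful neighbor" update; whether cooperators actually persist depends sensitively on the payoff values and the update rule.

```python
import random

SIZE = 30                          # lattice side length (assumption)
T, R, P, S = 1.3, 1.0, 0.1, 0.0    # payoff values (assumption)

def neighbors(i, j):
    """Von Neumann neighborhood with periodic boundaries."""
    return [((i - 1) % SIZE, j), ((i + 1) % SIZE, j),
            (i, (j - 1) % SIZE), (i, (j + 1) % SIZE)]

def payoff(grid, i, j):
    """Accumulated payoff of site (i, j) against its four neighbors."""
    me, total = grid[i][j], 0.0
    for ni, nj in neighbors(i, j):
        other = grid[ni][nj]
        if me == 'C':
            total += R if other == 'C' else S
        else:
            total += T if other == 'C' else P
    return total

def step(grid):
    """Synchronous update: every site adopts the strategy of its most
    successful neighbor (keeping its own if none scores higher)."""
    scores = [[payoff(grid, i, j) for j in range(SIZE)] for i in range(SIZE)]
    new = [[None] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            best_i, best_j = i, j
            for ni, nj in neighbors(i, j):
                if scores[ni][nj] > scores[best_i][best_j]:
                    best_i, best_j = ni, nj
            new[i][j] = grid[best_i][best_j]
    return new

random.seed(1)
grid = [[random.choice('CD') for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(50):
    grid = step(grid)
frac_c = sum(row.count('C') for row in grid) / SIZE ** 2
print(f"cooperator fraction after 50 steps: {frac_c:.2f}")
```

Varying T, the lattice size, or the neighborhood changes whether cooperator clusters grow, shrink, or wander; the interactive simulation explores this parameter dependence systematically.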
[Image: launches interactive simulation]

Evolutionary kaleidoscopes

In lattice populations, fascinating evolutionary kaleidoscopes can be observed when starting from a symmetrical initial configuration and using deterministic update rules. Enjoy the mesmerizing spatio-temporal patterns by clicking on the image.
Selected publications on recent research results:
- Axelrod, R. & Hamilton, W. D. (1981) The Evolution of Cooperation, Science 211 1390-1396.
- Axelrod, R. (1984) The Evolution of Cooperation, New York: Basic Books.