The evolution of cooperation among unrelated individuals in human and animal societies remains a challenging issue across disciplines. In this context, two models have attracted the most attention: the prisoner's dilemma for pairwise interactions and the public goods game for group interactions. The two games are closely related, with the public goods game representing a natural extension of the pairwise prisoner's dilemma to interactions in groups. In well-mixed populations with random encounters between individuals, cooperators are doomed and vanish quickly. In spatially structured populations with limited local interactions, however, cooperators are able to survive and co-exist with defectors in a stable equilibrium. Spatial extension enables cooperators to form clusters and thereby reduces exploitation by defectors. The lattice geometry (square versus honeycomb), i.e. the connectivity, has pronounced and robust effects on the fate of cooperators. For example, in pairwise interactions cooperators thrive more easily on honeycomb lattices, whereas for group interactions that include all neighbors, promoting cooperation becomes increasingly difficult as group size grows.

This tutorial complements several scientific articles co-authored with György Szabó. It provides interactive Java applets to visualize and experiment with the system's dynamics for parameter settings of your choice.

Population structure


Well-mixed populations

In well-mixed populations the participants in the public goods game are randomly selected. Just as in the pairwise prisoner's dilemma, cooperators do not stand a chance against defectors and vanish quickly in the absence of supporting mechanisms. In this situation, a single public goods interaction in a group of size N corresponds to N-1 pairwise prisoner's dilemma interactions.
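To illustrate why cooperation cannot persist under random group formation, the following self-contained Java sketch (my own illustration, not the tutorial's applet code; the group size, the parameter values and the discrete replicator-style update with baseline fitness 1 + payoff are assumptions made purely for this example) iterates the expected payoffs of cooperators and defectors, based on the payoff formulas introduced further below. Since defectors always earn more, the cooperator fraction x shrinks towards zero.

public class WellMixedPGG {
    public static void main(String[] args) {
        int n = 5;                  // group size (illustrative choice)
        double r = 3.0, c = 1.0;    // multiplication factor and contribution cost
        double x = 0.9;             // initial fraction of cooperators

        for (int t = 0; t <= 100; t++) {
            // expected payoffs when the N-1 co-players are sampled at random
            double pd = r * c * x * (n - 1) / n;            // defector
            double pc = r * c * (x * (n - 1) + 1) / n - c;  // cooperator
            if (t % 20 == 0)
                System.out.printf("t=%3d  x=%.3f  P_C=%.3f  P_D=%.3f%n", t, x, pc, pd);
            // discrete replicator-style update: strategies spread in proportion
            // to their (positively shifted) payoffs
            double avg = x * pc + (1 - x) * pd;
            x = x * (1 + pc) / (1 + avg);
        }
    }
}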


Structured populations

In order to model spatially extended systems, we consider players arranged on regular lattices who interact only with their nearest neighbors. This enables cooperators to form clusters and thereby reduce exploitation by defectors. For sufficiently attractive public goods, this clustering advantage ensures persistent co-existence of cooperators and defectors, or even dominance of cooperation.
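To make the lattice setup concrete, here is a deliberately simplified, self-contained Java sketch (my own toy code, not the applets: the lattice size, parameter values, the restriction to a single group centred on each player and the synchronous "imitate a better random neighbour" rule are illustrative assumptions, so quantitative results differ from those discussed in the articles). It uses the payoff structure given below, with each player and its four nearest neighbors forming a group of five.

import java.util.Random;

public class SpatialPGG {
    static final int L = 50;            // lattice size (illustrative)
    static final double R = 4.0;        // multiplication factor of the public good
    static final double C = 1.0;        // cost of a cooperative contribution
    static final Random rng = new Random();

    static boolean[][] coop = new boolean[L][L];

    static int wrap(int i) { return (i + L) % L; }   // periodic boundaries

    // payoff of the player at (x, y) from the single group centred on (x, y)
    static double payoff(int x, int y) {
        int[][] group = {{0,0},{1,0},{-1,0},{0,1},{0,-1}};
        int nc = 0;
        for (int[] d : group)
            if (coop[wrap(x + d[0])][wrap(y + d[1])]) nc++;
        double share = R * nc * C / group.length;     // equal share of the pool
        return coop[x][y] ? share - C : share;        // cooperators pay the cost
    }

    public static void main(String[] args) {
        // random initial configuration with roughly 50% cooperators
        for (int x = 0; x < L; x++)
            for (int y = 0; y < L; y++)
                coop[x][y] = rng.nextBoolean();

        for (int step = 0; step < 200; step++) {
            boolean[][] next = new boolean[L][L];
            for (int x = 0; x < L; x++) {
                for (int y = 0; y < L; y++) {
                    // pick a random nearest neighbour and copy its strategy
                    // if it earned a higher payoff in this round
                    int[][] nn = {{1,0},{-1,0},{0,1},{0,-1}};
                    int[] d = nn[rng.nextInt(4)];
                    int nx = wrap(x + d[0]), ny = wrap(y + d[1]);
                    next[x][y] = payoff(nx, ny) > payoff(x, y) ? coop[nx][ny] : coop[x][y];
                }
            }
            coop = next;
            if (step % 50 == 0)
                System.out.printf("step %d: %.1f%% cooperators%n",
                        step, 100.0 * count() / (L * L));
        }
    }

    static int count() {
        int n = 0;
        for (boolean[] row : coop) for (boolean b : row) if (b) n++;
        return n;
    }
}

Tracking the fraction of cooperators over time shows how local interactions allow clusters to form; the applets visualize the same kind of dynamics directly on the lattice.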

In a typical setup in experimental economics, an experimenter endows, say, six players with $10 each. The players are then invited to invest their money into a common pool, knowing that the experimenter will triple the amount in the pool and distribute it equally among all participants, irrespective of their contributions. If all players cooperate and contribute their $10, they end up with $30 each. However, each player faces the temptation to defect and free-ride on the other players' contributions, since each invested dollar returns only 50 cents to the investor. Therefore the 'rational' and dominant solution is to defect and invest nothing. Consequently, groups of rational players forego the public good and are unable to increase their initial endowment. This deadlock of mutual defection amounts to an economic stalemate.
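As a quick sanity check of the numbers in this example, a few lines of Java (a toy calculation of my own, not part of the applets) reproduce the $30 payoff under full cooperation and the 50-cent return per invested dollar:

public class EndowmentExample {
    public static void main(String[] args) {
        int players = 6;
        double endowment = 10.0, factor = 3.0;

        // all six contribute their $10: the $60 pool is tripled and shared equally
        double pool = players * endowment;
        double shareEach = factor * pool / players;   // $30 each

        // return of a single invested dollar to the investor herself
        double returnPerDollar = factor / players;    // $0.50

        System.out.printf("Full cooperation yields $%.0f each; each invested dollar returns $%.2f to the investor.%n",
                shareEach, returnPerDollar);
    }
}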

In a mathematical formulation, the payoffs for cooperators P_C and defectors P_D in a group of N interacting individuals are then given by

P_D = r n_c c / N
P_C = P_D - c

where r denotes the multiplication factor of the public good, n_c the number of cooperators in the group and c the cost of the cooperative contribution, i.e. the investment in the public good. Thus, the total value of the public good is given by the number of cooperators n_c times their investment c, multiplied by r. From this total each player receives an equal share, but cooperators additionally bear the cost of their contribution.
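The following minimal Java sketch (illustrative only, not the applet code; the class, method names and parameter values are my own) simply tabulates P_D and P_C from these formulas for every possible group composition:

public class PublicGoodsPayoffs {

    // payoff of a defector in a group of size n containing nc cooperators
    static double defectorPayoff(int nc, int n, double r, double c) {
        return r * nc * c / n;   // equal share of the multiplied pool
    }

    // payoff of a cooperator: the same share minus the contributed cost c
    static double cooperatorPayoff(int nc, int n, double r, double c) {
        return defectorPayoff(nc, n, r, c) - c;
    }

    public static void main(String[] args) {
        int n = 5;                 // group size (illustrative choice)
        double r = 3.0, c = 1.0;   // multiplication factor and cost
        for (int nc = 0; nc <= n; nc++)
            System.out.printf("n_c=%d  P_D=%.2f  P_C=%.2f%n",
                    nc, defectorPayoff(nc, n, r, c), cooperatorPayoff(nc, n, r, c));
        // Switching from cooperation to defection changes one's own payoff from
        // P_C(n_c) to P_D(n_c - 1), a gain of c (1 - r/N), which is positive
        // whenever r < N -- hence defection dominates within a single group.
    }
}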

Such public goods interactions are abundant in human and animal societies. Consider, for example, predator inspection behavior, alarm calls and group defense, as well as health insurance, public transportation, the fight against crime or environmental protection, to name only a few. Fortunately, and undermining the basic rationality assumptions of economics, human subjects do not always follow this 'rational' reasoning and, of course, fare much better for it. From a theoretical viewpoint, the reasons for this outcome are not fully understood but likely involve mechanisms such as voluntary interactions, reward, punishment and reputation.

Acknowledgments

For the development of these pages, the help and advice of two people were of particular importance: my thanks go first to Karl Sigmund for helpful comments on the game theoretical parts, and second to Urs Bill for introducing me to the Java language and for his patience and competence in answering my many technical questions. Financial support from the Swiss National Science Foundation is gratefully acknowledged.