The public goods game is a generalization of the prisoner’s dilemma

Posted Sunday January 7 2024.

You may know the prisoner’s dilemma as a classic example in game theory. But for whatever reason, this example never resonated with me and I didn’t really get the point. Then, I was reading a book about evolutionary game theory (forgot which, oops) that described the public goods game and claimed it is a generalization of the prisoner’s dilemma. So, if we can understand one, we can understand the other!

The public goods game

In the public goods game, each player starts with some fixed allowance - for simplicity, let’s assume there are \(n\) players and they all get the same allowance of 100. Each player then has a choice: what proportion of their allowance to keep, and what proportion to put into a public pool. After all players have made their choice, each player’s score is the sum of 1) the part of their allowance they kept, and 2) the total amount in the public pool, multiplied by some factor \(R\) and divided evenly among the \(n\) players.
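As a quick sanity check, here is a minimal sketch of the scoring rule in Python (the function name and constants are mine, not from any particular source):

```python
R = 2           # multiplication factor applied to the public pool
ALLOWANCE = 100

def scores(contributions):
    """Each player keeps (allowance - contribution) and receives an
    equal share of the pool, multiplied by R."""
    n = len(contributions)
    pool_share = R * sum(contributions) / n
    return [ALLOWANCE - c + pool_share for c in contributions]

# Four players who all contribute their full allowance each score 200:
print(scores([100, 100, 100, 100]))  # [200.0, 200.0, 200.0, 200.0]
```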

What is the meaning of \(R\)? It sort of models the beneficial effect of cooperation. When \(R=1\), the pool creates no value: a contribution of \(c\) returns only \(c/n\) to its contributor, so there’s no reason for anyone to contribute. When \(R>1\) however, the situation becomes more interesting. For concreteness, let’s assume \(R=2\).

If everyone contributes their full allowance, the pool holds \(100n\), and each player scores \(2 \times 100n / n = 200\) - double their allowance. Everyone contributing their entire allowance is thus the state which maximizes everybody’s score.

Unfortunately, this situation is not a Nash equilibrium, and we can see why. Suppose all your peers contribute their full allowance, and so do you. Call the resulting score the cooperation score: \[ \mathsf{score_{cooperate}}= 200 \]

Alternatively, assuming your peers will contribute, you could choose to defect: keep all your allowance and contribute nothing to the pool. In that case your score is your allowance, plus your share of the pooled contributions of the remaining \(n-1\) players: \[ \mathsf{score_{defect}} = 100 + \frac{200 \times (n-1)}{n} \]

As you may have guessed by now, \[ \mathsf{score_{defect} > score_{cooperate}} \] whenever \(n > 2\) (more generally, defection pays whenever \(R < n\)). As a result, the “rational” behavior of each player is to keep all their allowance. And thus, the Nash equilibrium - the state that would presumably be reached by repeated competitive play - is where no player contributes anything, and everyone scores 100. How pitiful!
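We can check the inequality numerically. A small sketch (the helper name is mine), computing the defector’s score for a few values of \(n\):

```python
R = 2           # pool multiplication factor
ALLOWANCE = 100

def defect_score(n):
    """Keep the whole allowance; share the pool built by the other n-1
    full contributors."""
    return ALLOWANCE + R * ALLOWANCE * (n - 1) / n

for n in (3, 5, 10, 100):
    print(n, defect_score(n))
```

For every \(n > 2\) the result exceeds the cooperation score of 200, approaching 300 as \(n\) grows.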

How is it a generalization of the prisoner’s dilemma?

You recover the prisoner’s dilemma by taking the number of players to be 2 and requiring that, instead of any split, players put all or nothing into the pool. We also reduce \(R\) to 1.5, because defection only pays when \(R\) is less than the number of players.

In that case, there are only 3 distinct outcomes:

- Both players contribute: the pool holds 200, and each scores \(1.5 \times 200 / 2 = 150\).
- One contributes, one defects: the pool holds 100 and pays each player 75, so the defector scores \(100 + 75 = 175\) while the contributor scores just 75.
- Both defect: each keeps their allowance and scores 100.

Just as before, even though both players would benefit from cooperating, defecting scores higher no matter what the other player does, so the Nash equilibrium is to defect.
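To make the link concrete, here is a sketch (function and variable names are mine) that enumerates all four move combinations of the 2-player, all-or-nothing game with \(R = 1.5\):

```python
R = 1.5
ALLOWANCE = 100

def score(my_move, their_move):
    """A move is the amount contributed: ALLOWANCE (cooperate) or 0 (defect)."""
    pool_share = R * (my_move + their_move) / 2
    return ALLOWANCE - my_move + pool_share

for mine in (ALLOWANCE, 0):
    for theirs in (ALLOWANCE, 0):
        labels = ("cooperate" if mine else "defect",
                  "cooperate" if theirs else "defect")
        print(labels, score(mine, theirs))
```

Whatever the other player does, defecting scores higher (175 vs 150 against a cooperator, 100 vs 75 against a defector) - exactly the payoff structure of the prisoner’s dilemma.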