3. Two player zero sum games

In this Chapter we will restrict attention to a special class of two-player games where the theory simplifies a little.

Definition 3.1

A zero sum two player game is one where the two players’ payoffs always sum to zero.

In this case the game is determined by the payoff matrix \(A\) for player 1, as player 2 then has payoff matrix \(-A\).

When describing such games we will usually omit the matrix for player 2 from our tables.

Equivalently we can consider a zero sum game as one in which player 1 wants to maximise the payoff from matrix \(A\), while player 2 wants to minimise the payoff from the same matrix.

Example 3.2

Consider the game

\[\begin{array}{c|cccc}
 & \text{Strategy 1} & \text{Strategy 2} & \text{Strategy 3} & \text{Strategy 4}\\ \hline
\text{Strategy 1} & 4 & 3 & 2 & 5\\
\text{Strategy 2} & -10 & 2 & 0 & 6\\
\text{Strategy 3} & 7 & 5 & 2 & 3\\
\text{Strategy 4} & 0 & 8 & -4 & -5
\end{array}\]

where the rows are indexed by player 1's strategies and the columns by player 2's strategies.

What strategies should the two players choose?

First consider the game from player 1’s viewpoint. They should assume that player 2 will play as well as possible, so for each choice of a row \(i\), player 1 should expect player 2 to respond with a column \(j\) chosen to minimise \(a_{ij}\).

By choosing the row for which this worst case is best, player 1 can guarantee a payoff of at least \[v^-=\max_{i=1,\ldots, p}\left(\min_{j=1,\ldots, q}a_{ij}\right),\] and we call \(v^-\) the lower value of the game. We also refer to \(v^-\) as player 1’s gain floor.

Next consider the game from player 2’s viewpoint. They should assume that player 1 will play as well as possible, so for each choice of a column \(j\), player 2 should expect player 1 to respond with a row \(i\) chosen to maximise \(a_{ij}\).

By choosing the column for which this worst case is best, player 2 can guarantee losing no more than \[v^+=\min_{j=1,\ldots, q}\left(\max_{i=1,\ldots, p}a_{ij}\right),\] and we call \(v^+\) the upper value of the game. We also refer to \(v^+\) as player 2’s loss ceiling.

So \(v^-\) is the largest amount that player 1 can guarantee to receive, and \(v^+\) is the smallest amount to which player 2 can limit their losses.
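To make the definitions concrete, here is a minimal Python sketch computing both values directly from a payoff matrix; the function name pure_values is our own choice, not notation from the notes.

```python
import numpy as np

def pure_values(A):
    """Lower and upper values of the zero sum game with payoff matrix A.

    v_minus = max_i (min_j a_ij): player 1's gain floor.
    v_plus  = min_j (max_i a_ij): player 2's loss ceiling.
    """
    A = np.asarray(A, dtype=float)
    v_minus = A.min(axis=1).max()  # worst case of each row, then the best row
    v_plus = A.max(axis=0).min()   # worst case of each column, then the best column
    return v_minus, v_plus
```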

Example 3.2 (continued)

We return to the game

\[\begin{array}{c|cccc}
 & \text{Strategy 1} & \text{Strategy 2} & \text{Strategy 3} & \text{Strategy 4}\\ \hline
\text{Strategy 1} & 4 & 3 & 2 & 5\\
\text{Strategy 2} & -10 & 2 & 0 & 6\\
\text{Strategy 3} & 7 & 5 & 2 & 3\\
\text{Strategy 4} & 0 & 8 & -4 & -5
\end{array}\]

The row minima are \(2\), \(-10\), \(2\) and \(-5\), so \(v^-=\max(2,-10,2,-5)=2\), attained by player 1 playing Strategy 1 or Strategy 3. The column maxima are \(7\), \(8\), \(2\) and \(6\), so \(v^+=\min(7,8,2,6)=2\), attained by player 2 playing Strategy 3. Hence \(v^-=v^+=2\).
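The same answer comes out of the pure_values sketch above:

```python
A = [[4, 3, 2, 5],
     [-10, 2, 0, 6],
     [7, 5, 2, 3],
     [0, 8, -4, -5]]

print(pure_values(A))  # (2.0, 2.0), so v^- = v^+ = 2
```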

It is an easy exercise to show that \(v^-\leq v^+\). We begin by considering the case where we have equality.
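One way to see this: for any \(i\) and \(j\) we have \[\min_{j'=1,\ldots,q}a_{ij'}\leq a_{ij}\leq\max_{i'=1,\ldots,p}a_{i'j}.\] The left-hand side does not depend on \(j\) and the right-hand side does not depend on \(i\), so taking the maximum over \(i\) on the left and the minimum over \(j\) on the right preserves the inequality and gives \(v^-\leq v^+\).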

Definition 3.3

If \(v^-=v^+\) then we say that the given game has pure value \[v=v^-=v^+.\]

When a game has a pure value then life is (relatively) simple:

Proposition 3.4

A zero sum game has a pure Nash equilibrium if and only if the game has a pure value, \(v\) say, and any pure Nash equilibrium has payoff \(v\).

Proof: Suppose that \((i^*,j^*)\) is a pure Nash equilibrium. That is, \[a_{ij^*}\leq a_{i^*j^*}\leq a_{i^*j}\] for all \(1\leq i\leq p\) and \(1\leq j\leq q\). Then \[\begin{eqnarray*} v^+&=& \min_{j=1,\ldots,q}\left(\max_{i=1,\ldots,p}a_{ij}\right)\leq \max_{i=1,\ldots,p}a_{ij^*}\\ &\leq & a_{i^*j^*} \leq \min_{j=1,\ldots,q}a_{i^*j}\leq\max_{i=1,\ldots,p}\left( \min_{j=1,\ldots,q}a_{ij}\right)=v^-.\end{eqnarray*}\] But we know that \(v^-\leq v^+\) and so \(v=v^-=v^+=a_{i^*j^*}\).

On the other hand, if \(v^-=v^+\) then \[\max_{i=1,\ldots,p}\left(\min_{j=1,\ldots,q}a_{ij}\right)= \min_{j=1,\ldots,q}\left(\max_{i=1,\ldots,p}a_{ij}\right).\] Let \(j^*\) and \(i^*\) be such that \[v^+=\max_{i=1,\ldots,p}a_{ij^*}\quad\text{and}\quad v^-=\min_{j=1,\ldots,q}a_{i^*j}.\] Then \[a_{i^*j}\geq v^-=v^+\geq a_{ij^*}\] for all \(i\) and \(j\). In addition, taking \(j=j^*\) and \(i=i^*\) gives \(a_{i^*j^*}=v^+=v^-\). So \((i^*,j^*)\) satisfies the condition for a pure Nash equilibrium.
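Proposition 3.4 says that a pure Nash equilibrium is exactly an entry of \(A\) that is simultaneously the minimum of its row and the maximum of its column (often called a saddle point). Here is a minimal sketch finding all of them; the function name saddle_points is our own.

```python
import numpy as np

def saddle_points(A):
    """All pure Nash equilibria (i, j) of the zero sum game with matrix A:
    entries minimal in their row and maximal in their column."""
    A = np.asarray(A, dtype=float)
    return [(i, j)
            for i in range(A.shape[0])
            for j in range(A.shape[1])
            if A[i, j] == A[i, :].min() == A[:, j].max()]
```

For the matrix of Example 3.2 this returns \((0,2)\) and \((2,2)\) (indexing from 0), both with payoff \(2=v\), as the proposition predicts.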

So far we have dealt with zero sum games where \(v^-=v^+\). But in general we know that we may have \(v^-<v^+\), so what should we do in this case?

We have seen that a pure Nash equilibrium cannot exist in this case, but can we still determine a unique value for the game, or does it now depend on the choice of Nash equilibrium?

We will see that much of what we have done for pure strategies can be generalised.

We will generalise the definitions of \(v^-\) and \(v^+\) to more general max-min and min-max strategies, and use them to define the value of a zero sum game.

We will then see that every Nash equilibrium results in this value, and corresponds to a max-min/min-max pair.

We will begin with the definition of max-min and min-max strategies, which looks a little confusing, and then try to understand what this means in practice.

Definition 3.5

A max-min strategy \(\hat{\underline{x}}\) for player 1 in a bimatrix game is a strategy such that \[\min_{\underline{y}\in\Delta S_2}\mathbb E_1(\hat{\underline{x}},\underline{y}) = \max_{\underline{x}\in\Delta S_1}\min_{\underline{y}\in\Delta S_2} \mathbb E_1(\underline{x},\underline{y}).\]

A min-max strategy \(\hat{\underline{y}}\) for player 2 in a bimatrix game is a strategy such that \[\max_{\underline{x}\in\Delta S_1}\mathbb E_1(\underline{x},\hat{\underline{y}}) = \min_{\underline{y}\in\Delta S_2}\max_{\underline{x}\in\Delta S_1}\mathbb E_1(\underline{x},\underline{y}).\]

Notice that both definitions are stated in terms of the expected payoff to player 1. We call the value given by the first equation the max-min payoff for player 1, and the value given by the second equation the min-max payoff for player 1.

The idea behind this definition is that the max-min strategy is the strategy for player 1 which has the greatest guaranteed payoff regardless of what player 2 does in response.

Similarly, the min-max strategy for player 2 is the strategy which caps player 1’s payoff at the lowest possible level, regardless of what player 1 does in response.
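For a zero sum game a max-min strategy can be computed by linear programming: player 1 maximises a guaranteed payoff \(v\) subject to \(\underline{x}^{\mathsf T}A\) being at least \(v\) in every column. (Against a fixed \(\underline{x}\), player 2 always has a pure best reply, since \(\mathbb E_1\) is linear in \(\underline{y}\), so it suffices to check the columns of \(A\).) Here is a minimal sketch using scipy; the function name maxmin_strategy is our own.

```python
import numpy as np
from scipy.optimize import linprog

def maxmin_strategy(A):
    """Max-min (optimal) mixed strategy for player 1 in the zero sum game
    with payoff matrix A, together with the guaranteed payoff v.

    Variables: x_1, ..., x_p (the mixed strategy) and v.  We maximise v
    subject to (x^T A)_j >= v for every column j, x >= 0 and sum(x) = 1.
    """
    A = np.asarray(A, dtype=float)
    p, q = A.shape
    # linprog minimises, so use the objective -v.
    c = np.zeros(p + 1)
    c[-1] = -1.0
    # Constraint v - (x^T A)_j <= 0 for each pure strategy j of player 2.
    A_ub = np.hstack([-A.T, np.ones((q, 1))])
    b_ub = np.zeros(q)
    # The x_i must sum to 1.
    A_eq = np.zeros((1, p + 1))
    A_eq[0, :p] = 1.0
    b_eq = np.array([1.0])
    bounds = [(0, None)] * p + [(None, None)]  # x_i >= 0, v unrestricted
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:p], res.x[-1]
```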

There are a number of difficulties with these definitions (which we gave for a general two player game).

First, we are assuming that the various maxima and minima are actually attained. Second, we are assuming that there exist strategies \(\hat{\underline{x}}\) and \(\hat{\underline{y}}\) with these properties.

When we restrict to pure strategies, as at the start of the Chapter, these obviously exist, since we are dealing with a finite number of possibilities.

As you will have seen in Real Analysis, when we are dealing with infinite collections we do not always have a maximum or a minimum value.

Let us now consider what these definitions mean for our zero sum games.

Our first result relates Nash equilibria and max-min/min-max strategies.

Proposition 3.6

Suppose that \(A\) is the payoff matrix for a zero sum game, and that \(\underline{x}^*\) and \(\underline{y}^*\) are strategies for players 1 and 2 respectively.

Then the following are equivalent:

  • The pair \((\underline{x}^*,\underline{y}^*)\) is a Nash equilibrium.
  • \(\underline{x}^*\) is a max-min strategy and \(\underline{y}^*\) is a min-max strategy, and \[\min_{\underline{y}\in\Delta S_2}\max_{\underline{x}\in\Delta S_1}\mathbb E_1(\underline{x},\underline{y})= \max_{\underline{x}\in\Delta S_1}\min_{\underline{y}\in\Delta S_2}\mathbb E_1(\underline{x},\underline{y}).\]

Since the Nash existence theorem tells us that a Nash equilibrium always exists in our game, we deduce the famous minimax theorem, proved by von Neumann in 1928.

Theorem 3.7 (The Minimax Theorem)

Suppose that \(A\) is the payoff matrix for a zero sum game. Then \[\min_{\underline{y}\in\Delta S_2}\max_{\underline{x}\in\Delta S_1}\mathbb E_1(\underline{x}, \underline{y})= \max_{\underline{x}\in\Delta S_1}\min_{\underline{y}\in\Delta S_2}\mathbb E_1(\underline{x}, \underline{y})\] which is both the max-min and the min-max payoff for player 1. We call this common value the value \(v(A)\) of the game.
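As a numerical illustration, consider matching pennies, for which \(v^-=-1<v^+=1\), so there is no pure value. A min-max strategy for player 2 is a max-min strategy in the game with matrix \(-A^{\mathsf T}\), where the players' roles are swapped, so the maxmin_strategy sketch above handles both players:

```python
import numpy as np

A = np.array([[1, -1],
              [-1, 1]])        # matching pennies: v^- = -1 < v^+ = 1

x, v = maxmin_strategy(A)      # player 1's optimal strategy and gain floor
y, w = maxmin_strategy(-A.T)   # player 2's, via the role-swapped game
print(x, v)    # approximately [0.5 0.5] and 0.0
print(y, -w)   # approximately [0.5 0.5] and 0.0: the same value, as the theorem asserts
```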

Although the minimax theorem follows from the Nash existence theorem, it was proved over 20 years earlier, and has a much simpler proof (see Section 8.4 in von Stengel for details).

We have now defined the value for any zero sum game. The results above have the following important consequences.

  • The definition of a max-min (or min-max) strategy does not depend on the choice made by the other player. We call such strategies optimal.
  • To form a Nash equilibrium we just need to find an optimal strategy for each player. Thus if a player has multiple optimal strategies, then any one of them can be used in any Nash equilibrium.
  • All Nash equilibria have the same value, which equals the value of the game.

As each player can find their optimal strategies independently, and it does not matter which optimal strategy they choose, zero sum games are much easier to analyse than the general games that we started with.