We will now restrict our attention to \(2\times 2\) nonzero-sum games. So far we have only looked for Nash equilibria in pure strategies; we will now show how to find all Nash equilibria, both pure and mixed.

We will let \(\underline{x}=(x,1-x)\) and \(\underline{y}=(y,1-y)\) be the mixed strategies for the two players. We know that the expected payoffs are respectively \[\mathbb E_1(\underline{x},\underline{y})=\underline{x}A\underline{y}^T\quad \text{and}\quad \mathbb E_2(\underline{x},\underline{y})=\underline{x}B\underline{y}^T.\] Recall that each player aims to maximise their own payoff assuming that the other player is doing the same.
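As a quick numerical illustration (not part of the notes), the expected payoffs can be computed directly from these formulas. The matrices below happen to be those of Example 4.7; the strategy vectors are illustrative choices.

    # A minimal sketch of the payoff formulas E_1 = x A y^T and E_2 = x B y^T.
    # The matrices are those of Example 4.7 below; the strategies are illustrative.
    import numpy as np

    A = np.array([[2, -1], [-1, 1]])   # player 1's payoffs
    B = np.array([[1, -1], [-1, 2]])   # player 2's payoffs

    x = np.array([0.6, 0.4])           # a mixed strategy for player 1
    y = np.array([0.4, 0.6])           # a mixed strategy for player 2

    E1 = x @ A @ y                     # expected payoff to player 1
    E2 = x @ B @ y                     # expected payoff to player 2
    print(E1, E2)                      # both equal 0.2 for these particular choices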

Definition 4.6

Given \(0\leq z\leq 1\) we write \(\underline{z}=(z,1-z)\). The rational reaction set for player 1 is defined to be \[R_1=\{(x,y): 0\leq x,y\leq1\ \text{and}\ \max_{0\leq z\leq 1}\mathbb E_1(\underline{z},\underline{y})\leq \mathbb E_1(\underline{x},\underline{y})\}.\] Similarly the rational reaction set for player 2 is defined to be \[R_2=\{(x,y): 0\leq x,y\leq1\ \text{and}\ \max_{0\leq z\leq 1}\mathbb E_2(\underline{x},\underline{z})\leq \mathbb E_2(\underline{x},\underline{y})\}.\] If \((x,y)\in R_1\) then \(\underline{x}\) is a best response to \(\underline{y}\). Similarly if \((x,y)\in R_2\) then \(\underline{y}\) is a best response to \(\underline{x}\). Consequently, if \((x^*,y^*)\in R_1\cap R_2\) then \(\underline{x}^*\) and \(\underline{y}^*\) form a Nash equilibrium.
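The following observation is not stated explicitly above, but it is what makes \(R_1\) and \(R_2\) straightforward to describe for \(2\times 2\) games: \(\mathbb E_1(\underline{z},\underline{y})\) is affine in \(z\), since \[\mathbb E_1(\underline{z},\underline{y})=z\big(\mathbb E_1((1,0),\underline{y})-\mathbb E_1((0,1),\underline{y})\big)+\mathbb E_1((0,1),\underline{y}).\] Hence the maximum over \(0\leq z\leq 1\) is attained at \(z=1\) if the bracketed coefficient is positive, at \(z=0\) if it is negative, and at every \(z\in[0,1]\) if it is zero. The same applies to \(\mathbb E_2(\underline{x},\underline{z})\) as a function of \(z\).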

Example 4.7

Consider the game

              Strategy 1     Strategy 2
Strategy 1    \((2,1)\)      \((-1,-1)\)
Strategy 2    \((-1,-1)\)    \((1,2)\)

Example completed by hand in the lecture
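Since the working is not reproduced here, the following is only a sketch of the calculation and should be checked against the lecture. With \(\underline{x}=(x,1-x)\) and \(\underline{y}=(y,1-y)\), \[\mathbb E_1(\underline{x},\underline{y})=2xy-x(1-y)-(1-x)y+(1-x)(1-y)=x(5y-2)-2y+1,\] \[\mathbb E_2(\underline{x},\underline{y})=xy-x(1-y)-(1-x)y+2(1-x)(1-y)=y(5x-3)-3x+2.\] So player 1's best response is \(x=1\) if \(y>2/5\), \(x=0\) if \(y<2/5\), and any \(x\) if \(y=2/5\), while player 2's best response is \(y=1\) if \(x>3/5\), \(y=0\) if \(x<3/5\), and any \(y\) if \(x=3/5\). The sets \(R_1\) and \(R_2\) intersect at \((1,1)\), \((0,0)\) and \((3/5,2/5)\): the two pure Nash equilibria together with the mixed Nash equilibrium \(\underline{x}^*=(3/5,2/5)\), \(\underline{y}^*=(2/5,3/5)\).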

The procedure just developed for \(2\times 2\) games can be generalised to \(p\times q\) games in a (theoretically!) straightforward manner.

Definition 4.8

The rational reaction sets for each player are defined as follows: \[R_1=\{(\underline{x},\underline{y})\in\Delta S_p\times\Delta S_q: \mathbb E_1(\underline{x},\underline{y})=\max_{\underline{z}\in\Delta S_p}\mathbb E_1(\underline{z},\underline{y})\}\] \[R_2=\{(\underline{x},\underline{y})\in\Delta S_p\times\Delta S_q: \mathbb E_2(\underline{x},\underline{y})=\max_{\underline{z}\in\Delta S_q}\mathbb E_2(\underline{x},\underline{z})\}.\] The set of Nash equilibria is then \(R_1\cap R_2\).
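By linearity of each expected payoff in that player's own strategy, a mixed strategy is a best response precisely when no pure strategy does strictly better, so membership of \(R_1\cap R_2\) can at least be tested easily. A minimal sketch (the function below is illustrative and not part of the notes):

    # A sketch of testing whether (x, y) lies in both R1 and R2, i.e. is a Nash equilibrium.
    # It suffices to compare each player's payoff with their best pure-strategy deviation.
    import numpy as np

    def is_nash(A, B, x, y, tol=1e-9):
        E1 = x @ A @ y              # player 1's expected payoff
        E2 = x @ B @ y              # player 2's expected payoff
        best1 = np.max(A @ y)       # best pure-strategy payoff for player 1 against y
        best2 = np.max(x @ B)       # best pure-strategy payoff for player 2 against x
        return E1 >= best1 - tol and E2 >= best2 - tol

Applied to Example 4.7, for instance, this confirms that \(\underline{x}=(3/5,2/5)\) and \(\underline{y}=(2/5,3/5)\) form a Nash equilibrium, while \((1/2,1/2)\) against \((1/2,1/2)\) does not.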

However, finding all common points is already complicated in the \(2\times 2\) case, so we will consider an alternative approach.

Whenever we want to maximise or minimise a function, we know from calculus that we should find the critical points and determine their type. We will now try to use this approach to find Nash equilibria.

We know that when applying calculus we have to be careful when dealing with points on the boundary of our region. For this reason we make the following definition.

Definition 4.9

An interior (or completely mixed) Nash equilibrium is one of the form \((\underline{x}^*,\underline{y}^*)\) where all of the components \(x_i^*\) and \(y_j^*\) are non-zero.

We will now show how to use calculus to find all interior Nash equilibria.

We proceed in a series of steps.

  1. Let \(x_p=1-\sum_{i=1}^{p-1}x_i\) and \(y_q=1-\sum_{j=1}^{q-1}y_j\). After this substitution each payoff function depends only on \(x_1,\ldots, x_{p-1}\) and \(y_1,\ldots, y_{q-1}\), and there are no pre-existing relations between these variables.

  2. Take the partial derivatives and solve the system of equations \[\frac{\partial \mathbb E_1}{\partial x_i}=0\quad\text{and}\quad \frac{\partial \mathbb E_2}{\partial y_j}=0\] with \(1\leq i\leq p-1\) and \(1\leq j\leq q-1\).

  3. If there is a solution to this system satisfying \(x_i>0\) and \(y_j>0\) for \(1\leq i\leq p\) and \(1\leq j\leq q\) then this is an interior Nash equilibrium.

Notice that we do not maximise \(\mathbb E_1\) over all of the variables \(x\) and \(y\) but only over the \(x\) variables. Similarly we maximise \(\mathbb E_2\) only over the \(y\) variables.

At a Nash equilibrium, each player's payoff is maximised over the variables which that player controls, with the other player's variables held fixed.
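To make the three steps concrete, here is one way they might be carried out symbolically. This is only a sketch; the function name and interface are my own and not part of the notes, and it assumes the sympy library is available.

    # A sketch of the three-step procedure for interior Nash equilibria.
    # A and B are the p x q payoff matrices as nested lists; only fully
    # determined, strictly positive solutions are returned.
    import sympy as sp

    def interior_equilibria(A, B):
        p, q = len(A), len(A[0])
        # Step 1: keep x_1,...,x_{p-1} and y_1,...,y_{q-1} as free variables and
        # substitute x_p and y_q using the fact that the probabilities sum to 1.
        xs = sp.symbols(f'x1:{p}')
        ys = sp.symbols(f'y1:{q}')
        x = list(xs) + [1 - sum(xs)]
        y = list(ys) + [1 - sum(ys)]
        E1 = sum(A[i][j] * x[i] * y[j] for i in range(p) for j in range(q))
        E2 = sum(B[i][j] * x[i] * y[j] for i in range(p) for j in range(q))
        # Step 2: set the relevant partial derivatives to zero and solve.
        eqs = [sp.diff(E1, xi) for xi in xs] + [sp.diff(E2, yj) for yj in ys]
        solutions = sp.solve(eqs, list(xs) + list(ys), dict=True)
        # Step 3: keep only solutions with every component strictly positive.
        out = []
        for s in solutions:
            xv = [sp.simplify(c.subs(s)) for c in x]
            yv = [sp.simplify(c.subs(s)) for c in y]
            if all(v.is_number and v > 0 for v in xv + yv):
                out.append((xv, yv))
        return out

For the game of Example 4.7, interior_equilibria([[2, -1], [-1, 1]], [[1, -1], [-1, 2]]) should return the single interior equilibrium \(\underline{x}^*=(3/5,2/5)\), \(\underline{y}^*=(2/5,3/5)\).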

Example 4.7 (revisited)

Consider the game

              Strategy 1     Strategy 2
Strategy 1    \((2,1)\)      \((-1,-1)\)
Strategy 2    \((-1,-1)\)    \((1,2)\)

Example completed by hand in the lecture
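A sketch of how the method above applies here (again, to be checked against the lecture working): with \(x_2=1-x\) and \(y_2=1-y\) we have, as before, \[\mathbb E_1=5xy-2x-2y+1\qquad\text{and}\qquad \mathbb E_2=5xy-3x-3y+2,\] so \[\frac{\partial \mathbb E_1}{\partial x}=5y-2=0\implies y=\tfrac{2}{5},\qquad \frac{\partial \mathbb E_2}{\partial y}=5x-3=0\implies x=\tfrac{3}{5}.\] All components of \(\underline{x}^*=(3/5,2/5)\) and \(\underline{y}^*=(2/5,3/5)\) are positive, so this is an interior Nash equilibrium: it is exactly the mixed equilibrium found earlier, while the two pure equilibria are not detected by this method.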

Example 4.10

Consider the game

              Strategy 1     Strategy 2    Strategy 3
Strategy 1    \((-2,-4)\)    \((5,-2)\)    \((1,4)\)
Strategy 2    \((-3,-3)\)    \((2,1)\)     \((3,4)\)
Strategy 3    \((2,3)\)      \((1,1)\)     \((3,-1)\)

Example completed by hand in the lecture
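The working is not reproduced here, but the following sketch indicates how it goes and should be checked against the lecture. With \(x_3=1-x_1-x_2\) and \(y_3=1-y_1-y_2\), the equations of step 2 say that player 1 is indifferent between their three rows against \(\underline{y}\), and player 2 is indifferent between their three columns against \(\underline{x}\). Solving these linear systems gives \[\underline{x}^*=\left(\tfrac{1}{14},\tfrac{2}{7},\tfrac{9}{14}\right),\qquad \underline{y}^*=\left(\tfrac{1}{14},\tfrac{5}{14},\tfrac{4}{7}\right),\] with every component strictly positive, so this is an interior Nash equilibrium; the corresponding expected payoffs are \(31/14\) to player 1 and \(11/14\) to player 2.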

Warning: This method will not find pure Nash equilibria, or mixed Nash equilibria in which some of the components are zero.

We have seen various methods for finding Nash equilibria of a general bimatrix game.

Given any bimatrix game, the Lemke-Howson algorithm provides a method for finding one Nash equilibrium. It is too lengthy to consider here, but details can be found in von Stengel Chapter 9 (sections 9.3, 9.7 and 9.8). If you have encountered the simplex algorithm in optimisation then this will seem familiar.

Although it is a very powerful tool, this algorithm does not find all Nash equilibria in a general game.