Fictitious Play

In real life, players may be myopic and merely best respond to other players without calculating or playing a Nash equilibrium strategy. However, when playing a game repeatedly, they may adapt their strategies based on the history of other players' actions and eventually converge to a Nash equilibrium. This process is called learning in games: players adapt their beliefs about other players and act rationally with respect to those beliefs.

Fictitious play is an iterative method for computing a Nash equilibrium using the above idea, without actually playing the game repeatedly. Formally, at iteration $t$, each player $i$ maintains a belief about every other player $j$ given by the uniform mixture over $j$'s historical actions, i.e., the empirical distribution

$$\sigma_j^t(a_j) = \frac{1}{t} \sum_{\tau=0}^{t-1} \mathbb{1}\{a_j^\tau = a_j\},$$

and plays a best response $a_i^t \in \arg\max_{a_i} u_i(a_i, \sigma_{-i}^t)$. Note that in fictitious play, players have a common belief, as it is generated by the commonly observed history of actions.
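A minimal sketch of this procedure for a two-player matrix game (the uniform prior over actions, encoded as initial counts of one, and the tie-breaking rule are illustrative choices, not part of the definition):

```python
import numpy as np

def fictitious_play(A, B, T):
    """Two-player fictitious play for T rounds.

    A[i, j], B[i, j]: row / column player payoffs.
    Each round, both players best-respond to the empirical
    distribution of the opponent's past actions (ties broken
    toward the lowest action index).
    Returns the empirical (time-average) strategy of each player.
    """
    m, n = A.shape
    row_counts = np.ones(m)  # assumed uniform prior over actions
    col_counts = np.ones(n)
    for _ in range(T):
        belief_about_col = col_counts / col_counts.sum()
        belief_about_row = row_counts / row_counts.sum()
        i = int(np.argmax(A @ belief_about_col))   # row player's best response
        j = int(np.argmax(belief_about_row @ B))   # column player's best response
        row_counts[i] += 1
        col_counts[j] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Prisoner's dilemma: Defect (action 1) is strictly dominant, so play
# concentrates on the unique PSNE (Defect, Defect).
A = np.array([[-1.0, -3.0], [0.0, -2.0]])
x, y = fictitious_play(A, A.T, T=500)
```

In this run, both empirical strategies place almost all mass on Defect, consistent with convergence to the game's pure Nash equilibrium.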

Convergence of strategies

Let $\{a^t\}_{t \ge 0}$ be the pure strategy profile sequence generated by fictitious play. If there exists $T$ such that $a^t = a^*$ for all $t \ge T$, then $a^*$ is a PSNE. And if there exists $t$ such that $a^t = a^*$ with $a^*$ being a strict NE, then $a^{t'} = a^*$ for all $t' \ge t$.
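The second claim — a strict NE, once played, is absorbing — can be checked numerically. The coordination game below (with strict Nash equilibria at profiles $(0,0)$ and $(1,1)$) and the uniform initial prior are illustrative choices:

```python
import numpy as np

def fp_history(A, B, T):
    """Run fictitious play, recording the action profile played each round."""
    m, n = A.shape
    row_counts, col_counts = np.ones(m), np.ones(n)  # assumed uniform prior
    history = []
    for _ in range(T):
        i = int(np.argmax(A @ (col_counts / col_counts.sum())))
        j = int(np.argmax((row_counts / row_counts.sum()) @ B))
        history.append((i, j))
        row_counts[i] += 1
        col_counts[j] += 1
    return history

# Coordination game: both (0, 0) and (1, 1) are strict NEs.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
history = fp_history(A, A, T=200)

# Once the strict NE (0, 0) is played, it is played in every later round.
first = history.index((0, 0))
assert all(profile == (0, 0) for profile in history[first:])
```

Intuitively, playing a strict NE shifts each player's empirical belief further toward the equilibrium actions, which only reinforces the (unique) best response.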

Convergence of beliefs

We say $\{a^t\}_{t \ge 0}$ converges to a strategy profile $\sigma$ in the time-average sense if the empirical frequency of every action converges to its probability under $\sigma$ as $T \to \infty$, for all players $i$ and actions $a_i$, i.e.,

$$\lim_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{1}\{a_i^t = a_i\} = \sigma_i(a_i),$$

where $\sigma_i$ is the marginal probability of $\sigma$ on $a_i$.
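Matching pennies illustrates why this weaker notion matters: the pure action sequence cycles forever and never converges, yet the time averages converge to the mixed equilibrium $(1/2, 1/2)$. A sketch, again assuming a uniform initial prior:

```python
import numpy as np

def fp_time_average(A, B, T):
    """Fictitious play; return empirical action frequencies and the play history."""
    m, n = A.shape
    row_counts, col_counts = np.ones(m), np.ones(n)  # assumed uniform prior
    history = []
    for _ in range(T):
        i = int(np.argmax(A @ (col_counts / col_counts.sum())))
        j = int(np.argmax((row_counts / row_counts.sum()) @ B))
        history.append((i, j))
        row_counts[i] += 1
        col_counts[j] += 1
    # Subtract the prior counts so the averages reflect only the T observed plays.
    return (row_counts - 1) / T, (col_counts - 1) / T, history

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # matching pennies (zero-sum)
x, y, history = fp_time_average(A, -A, T=10000)
```

Here `x` and `y` end up close to `[0.5, 0.5]` even though `history` keeps alternating between profiles, so the pure strategies never settle while the beliefs (time averages) converge.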