
Intro to Game Theory

From Santa Fe Institute Events Wiki

Tutors: Will Braynen, Simon Angus

Content (provisional)

  1. Why Game theory? When Game theory?
  2. Simultaneous Games
    1. The Nash Equilibrium (NE)
    2. Some standard games (Prisoner's Dilemma, Stag Hunt)
  3. Sequential Games
    1. Sub-game perfect NE
  4. Repeated Games
  5. Computational Examples (NetLogo)
    1. Games and Interaction structures
  6. Applications and Links to other fields
    1. Biology
    2. Economics
    3. Philosophy
    4. Psychology

Further reading and additional concepts

  1. Nash Equilibrium (NE):
    1. Kreps, D.M. (1987). "Nash equilibrium" in J. Eatwell, M. Milgate, and P. Newman (Eds.), The New Palgrave, 167-177.
  2. Correlated Equilibrium (CE):
    1. Aumann's Correlated Equilibrium (CE) concept (1974), which in some games allows all players to get higher payoffs than any Nash Equilibrium (NE). A simple example of a CE is a traffic light. http://en.wikipedia.org/wiki/Correlated_equilibrium
    2. In moral philosophy, David Gauthier used CE as a formal example of when moral constraints can be Pareto optimal (i.e., to the benefit of everyone). Similarly, CE could be used as an example of when social cooperation through institutions might be Pareto optimal.
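The traffic-light example can be checked directly. Below is a minimal Python sketch of a hypothetical two-driver "crossing" game (the payoff numbers are illustrative, not from the tutorial): a fair coin flip decides which driver is told Go and which Stop, and the check confirms that obeying the signal is a best response for each driver whenever the other obeys - the defining condition of a correlated equilibrium.

```python
# Hypothetical "crossing" game: two drivers each choose Go (0) or Stop (1).
# Payoff numbers are illustrative. payoff[(a1, a2)] = (driver 1, driver 2).
payoff = {
    (0, 0): (-10, -10),  # both Go: crash
    (0, 1): (1, 0),      # driver 1 goes, driver 2 waits
    (1, 0): (0, 1),      # driver 1 waits, driver 2 goes
    (1, 1): (-1, -1),    # both Stop: deadlock
}

# The "traffic light" flips a fair coin between the signals (Go, Stop)
# and (Stop, Go). It is a correlated equilibrium iff obeying your own
# signal is a best response whenever the other driver obeys hers.
def light_is_correlated_equilibrium():
    for signal in [(0, 1), (1, 0)]:  # the two equally likely signals
        for player in (0, 1):
            told = signal[player]
            other = signal[1 - player]  # the other driver obeys
            def u(action):
                pair = (action, other) if player == 0 else (other, action)
                return payoff[pair][player]
            if u(told) < u(1 - told):  # would deviating pay strictly more?
                return False
    return True

print(light_is_correlated_equilibrium())  # True
```

Note that each driver gets an expected payoff of 0.5 under the light, better than the crash-prone outcomes they risk without coordination.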
  3. Zero-sum games:
    1. In a two-player zero-sum game, players have strictly opposing interests: if your payoff for a given outcome is k, then mine is -k.
    2. Zero-sum games and constant-sum games are equivalent, because one can be converted into the other through a linear transformation of the payoffs.
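The equivalence is easy to see concretely. A minimal sketch (payoff numbers are illustrative): when every outcome's payoffs sum to a constant c, subtracting c/2 from every payoff yields a zero-sum game, and the transformation leaves each player's preferences over outcomes unchanged.

```python
# A two-player constant-sum game: every cell's payoffs sum to c = 10.
# The numbers are illustrative.
c = 10
constant_sum = [[(7, 3), (2, 8)],
                [(6, 4), (9, 1)]]

# Subtracting c/2 from every payoff is a linear transformation that
# preserves each player's preferences but makes the game zero-sum.
zero_sum = [[(a - c / 2, b - c / 2) for (a, b) in row] for row in constant_sum]

assert all(a + b == 0 for row in zero_sum for (a, b) in row)
print(zero_sum[0][0])  # (2.0, -2.0)
```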
  4. Maximin:
    1. Von Neumann's maximin decision rule (1928), which yields an NE in two-player zero-sum (and hence constant-sum) games. Maximin tells you to choose the action that maximizes your minimum (worst-case) payoff; equivalently, you can think of it as minimizing your maximum possible loss. Maximin is thus a very risk-averse rule and will most likely not result in an equilibrium when followed by all players outside (two-player) zero-sum games. In political philosophy, John Rawls's difference principle is derived from maximin.
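The maximin rule is easy to state in code. A minimal sketch (the payoff matrix is illustrative and chosen to have a saddle point, so maximin play by both players is also a pure-strategy NE; in general von Neumann's theorem guarantees this only in mixed strategies):

```python
# Two-player zero-sum game: A[i][j] is the row player's payoff,
# and the column player receives -A[i][j]. The matrix is illustrative
# and chosen to have a saddle point.
A = [[3, 1, 4],
     [2, 2, 3],
     [0, 1, 1]]

n_rows, n_cols = len(A), len(A[0])

# Row player's maximin: maximize the worst-case (row-minimum) payoff.
row_worst = [min(A[i]) for i in range(n_rows)]
i_star = max(range(n_rows), key=lambda i: row_worst[i])

# Column player minimizes her worst case, i.e. the column maximum of A.
col_worst = [max(A[i][j] for i in range(n_rows)) for j in range(n_cols)]
j_star = min(range(n_cols), key=lambda j: col_worst[j])

# Saddle point: maximin value == minimax value, so neither player can
# gain by deviating -- (i_star, j_star) is a pure-strategy NE.
assert row_worst[i_star] == col_worst[j_star] == A[i_star][j_star]
print(i_star, j_star, A[i_star][j_star])  # 1 1 2
```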
  5. Social Dilemmas (a.k.a. Tragedy of the Commons) and Public Goods Problems:
    1. Social Dilemmas are defined by the ordering of the payoffs (so the notion can be formalized) and are characterized by having a dominant strategy that leads to a suboptimal outcome for all players. One type of social dilemma is the Prisoner's Dilemma (PD).
    2. Robyn Dawes, "Social Dilemmas"
    3. G. Hardin, "Mutual Coercion Mutually Agreed upon by the Majority of the People Affected." (This paper implicitly assumes certainty about the size of the public good.)
    4. Amnon Rapoport (not Anatol Rapoport) has done excellent work in behavioral game theory where there is uncertainty about the size of the public good, which is a more realistic model of real-world problems (e.g. overfishing).
    5. The Prisoner's Dilemma (PD) is one kind of social dilemma and can be generalized to an n-player game where n > 2:
      1. One-shot PD (i.e. play only once) - this is the only one we talked about.
      2. PD iterated a finite number of times - using backwards induction, you can show that the optimal strategy is to always defect. (NB: an optimal strategy does not imply an optimal or Pareto-efficient outcome; it means a best response to the other players.)
      3. PD iterated an infinite number of times - backwards induction cannot be applied here, so cooperation can be rational.
      4. Kreps et al.'s analytic result: if you relax the rationality assumptions about other players, cooperation can become rational in a finitely iterated PD. Reference: Kreps, Milgrom, Roberts, and Wilson, Journal of Economic Theory 27, 245-252 (1982).
      5. Tit-for-tat: in his book The Evolution of Cooperation, Axelrod argues that although a repeated PD is a good model for many social interactions, cooperation can still evolve if players are not fully rational (or not "hyper-rational," as critics have dubbed the full-rationality assumption). Tit-for-tat is the strategy that Axelrod argues often evolves both in the real world and in simulations (his famous "Axelrod Tournament").
      6. In political philosophy, Hobbes (in his book Leviathan) is traditionally interpreted to argue that life without government (what political philosophers call a "state of nature") would have the structure of a PD, in which life would be "solitary, poor, nasty, brutish, and short."
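Both the one-shot dominance argument and tit-for-tat can be illustrated with a short simulation. A minimal Python sketch (the payoff values T=5 > R=3 > P=1 > S=0 are the standard Axelrod-tournament numbers; the function names are ours):

```python
# Iterated Prisoner's Dilemma sketch with the standard Axelrod payoffs.
# 'C' = cooperate, 'D' = defect; PAYOFF[(a1, a2)] = (player 1, player 2).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

# One-shot dominance: defecting beats cooperating against either
# opponent move, even though (C, C) beats (D, D) for both players.
for theirs in ('C', 'D'):
    assert PAYOFF[('D', theirs)][0] > PAYOFF[('C', theirs)][0]

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(s1, s2, rounds=10):
    h1, h2 = [], []  # each player's record of the *opponent's* moves
    score1 = score2 = 0
    for _ in range(rounds):
        a1, a2 = s1(h1), s2(h2)
        p1, p2 = PAYOFF[(a1, a2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(a2)
        h2.append(a1)
    return score1, score2

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then mutual defection
```

Against itself, tit-for-tat sustains cooperation every round; against always-defect it is exploited only in the first round and then defects along, which is why it did so well in Axelrod's tournaments.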
  6. More on behavioral game theory: Camerer, Loewenstein, and others.
  7. Cooperative game theory, which includes bargaining theory. (We only talked about non-cooperative game theory.)
  8. Prospect Theory:
    1. We know most people have decreasing marginal utility (and so a concave utility function), which means they are risk-averse. Prospect Theory adds that this holds only when people perceive themselves to be dealing with gains - when they are not "in the red." When people are in the red (dealing with losses), their utility function is actually convex (not concave!), which means they will be risk-seeking, trying to get back to their perceived break-even point. http://en.wikipedia.org/wiki/Prospect_theory
    2. Also note that decreasing marginal utility has not only been confirmed in the lab for gains (as a psychological-empirical fact about people's behavior); it is also needed from a theoretical point of view to resolve the St. Petersburg Paradox (which is of course an idealization, as it assumes infinite resources on the part of the casino).
  9. Evolutionary game theory (tutorial on Wednesday)
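The Prospect Theory value function above can be sketched numerically. The parameter values below are the commonly cited Kahneman-Tversky estimates, used here purely for illustration:

```python
# Prospect Theory value function: concave over gains (risk-averse),
# convex over losses (risk-seeking), and steeper for losses than for
# gains (loss aversion). Parameter values (alpha, beta, lam) are the
# commonly cited Kahneman-Tversky estimates, used only for illustration.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha             # gains: concave
    return -lam * ((-x) ** beta)      # losses: convex and amplified by lam

# Loss aversion: a loss looms larger than an equal-sized gain.
print(value(100), value(-100))
```

Checking the shape: value(50) + value(150) < 2 * value(100) (concave over gains), while the mirror-image inequality holds over losses.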

Interested? Please put below ...

Paul H.
Kristen
Saleha
Frederic
Alex Healing
Aaron
nathan menke
Brian Lawler
Mollie Poynton Heather
rafal
Monika
Olaf Bochmann