Intro to Game Theory

From Santa Fe Institute Events Wiki

= Further reading and additional concepts =
# Nash Equilibrium (NE):
##Kreps, D.M. (1987). "Nash equilibrium" in J. Eatwell, M. Milgate, and P. Newman (Eds.), ''The New Palgrave'', 167-177.
# Correlated Equilibrium (CE):
##Aumann's Correlated Equilibrium (CE) concept (1974), which in some games allows all players to get higher payoffs than with Nash Equilibria (NE). A simple example of a CE is a traffic light. http://en.wikipedia.org/wiki/Correlated_equilibrium
##In moral philosophy, David Gauthier used CE as a formal example of when ''moral constraints'' can be ''Pareto optimal'' (i.e. to the benefit of everyone). Similarly, CE could be used as an example of when social cooperation through institutions might be Pareto optimal.
# Zero-sum games:
## In a two-player zero-sum game, players have strictly opposing interests.  In other words, if your payoff for a given outcome is k, then mine is -k.
## Zero-sum games and constant-sum games are strategically equivalent, because one can be converted into the other through a linear transformation of the payoffs (e.g. subtracting half the constant from each player's payoffs).
# Maximin:
##Von Neumann's ''maximin'' decision rule (1928), which results in an NE in two-player zero-sum games (and hence in two-player constant-sum games).  ''Maximin'' is a decision rule which tells you to choose the action that maximizes your minimum (worst-case) payoff.  Equivalently, you can think of this as minimizing your maximum possible loss.  Hence, ''maximin'' is a very risk-averse rule and will most likely not result in an equilibrium when followed by all players outside (two-player) zero-sum games.
##In political philosophy, John Rawls's ''difference principle'' is derived from ''maximin''.
# Social Dilemmas (a.k.a. Tragedy of the Commons) and Public Goods Problems:
## Social dilemmas are defined by the ordering of the payoffs (so you could formalize this notion) and are characterized by having a dominant strategy that leads to a suboptimal outcome for all players.  One type of social dilemma is the Prisoner's Dilemma (PD).
## Robyn Dawes, "Social Dilemmas"
## G. Hardin, "Mutual Coercion Mutually Agreed upon by the Majority of the People Affected." (This paper implicitly assumes certainty about the size of the public good.)
## Amnon Rapoport (not Anatol Rapoport) has done excellent work in behavioral game theory where there is uncertainty about the size of the public good, which is a more realistic model of real-world problems (e.g. overfishing).
## The Prisoner's Dilemma (PD) is one kind of social dilemma and can be generalized to an n-player game where n > 2:
### One-shot PD (i.e. played only once) - this is the only one we talked about.
### PD iterated a finite, commonly known number of times - using ''backwards induction'', you can show that the optimal strategy is to defect in every round. (NB: ''optimal strategy'' does not imply an optimal or Pareto-efficient outcome; instead it means a ''best response'' to the other players.)
### PD iterated an infinite (or indefinite) number of times - backwards induction has no last round to start from here, so sustained cooperation can be rational.
### Kreps et al.'s analytic result: if you relax rationality assumptions about the other player, cooperation can become rational in a finitely iterated PD.  Namely, the paper analyzes the scenario in which you are fully rational but uncertain about whether the other player is fully rational or a tit-for-tat player.  Reference: Kreps, Milgrom, Roberts, and Wilson, ''Journal of Economic Theory'' 27, 245-252 (1982).
### Tit-for-tat: in his book ''The Evolution of Cooperation'', Axelrod argues that although a repeated PD is a good model for many social interactions, cooperation can still evolve even if players are not fully rational (or not "hyper-rational," as some critics have dubbed the full-rationality assumption).  Tit-for-tat is the strategy that Axelrod argues often evolves both in the real world and in simulations (his famous "Axelrod Tournament").
### In political philosophy, Hobbes (in his book ''Leviathan'') is traditionally interpreted to argue that life without government (what political philosophers call a "state of nature") would have the structure of a PD, in which life would be "solitary, poor, nasty, brutish, and short."
# More on behavioral game theory: Camerer, Loewenstein, and others.
# Cooperative game theory, which includes bargaining theory.  (We only talked about non-cooperative game theory.)
## See Thomas Schelling's ''The Strategy of Conflict''
# Prospect Theory:
## This is about the psychology of decision-making, not really game theory (although it is relevant to behavioral game theory and behavioral economics).
## Kahneman won the Nobel Prize in Economics for his joint work on this with Tversky.
## Here's the gist.  Most people's utility functions exhibit decreasing marginal utility.  In plain English, this simply means that each additional unit of something is worth less to us.  If you have no money, for example, you might be willing to work for five dollars an hour, but if you are a millionaire, that pay wouldn't be worth it to you. (This would mean that your utility function is concave in shape, which would in turn mean that you are risk-averse.)  Prospect Theory adds that this only holds when people perceive themselves to be dealing with gains, not losses - when they are not "in the red."  When people are in the red (dealing with losses), their utility function is actually convex (not concave!), which means they'll be risk-seeking, trying to get back to their perceived break-even point. [[Image:Prospect-theory.jpg]] http://en.wikipedia.org/wiki/Prospect_theory
## Also, note that decreasing marginal utility has not only been confirmed in the lab for gains (as a psychological-empirical fact about people's behavior), but it is also needed from a theoretical point of view to resolve the St. Petersburg Paradox (which of course is an idealization, as it assumes infinite resources on the part of the casino).
# Evolutionary game theory (tutorial on Wednesday)
## Brian Skyrms' ''The Stag Hunt and Social Structure''
## H. Peyton Young's ''Individual Strategy and Social Structure''
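The traffic-light example in the Correlated Equilibrium entry above can be made concrete. Below is a minimal sketch: two drivers at a crossing (payoff numbers are made up for illustration), and a correlating device that recommends (Go, Stop) or (Stop, Go) with equal probability. The check verifies Aumann's condition: conditional on its recommendation, no player gains by deviating.

```python
# Correlated equilibrium sketch: a traffic light for a two-driver
# crossing game. Payoff values are hypothetical, chosen so that a
# joint Go is a crash and the driver who goes alone gains a little.
GO, STOP = 0, 1
payoff = {
    1: {(GO, GO): -10, (GO, STOP): 1, (STOP, GO): 0, (STOP, STOP): -1},
    2: {(GO, GO): -10, (GO, STOP): 0, (STOP, GO): 1, (STOP, STOP): -1},
}

# The light recommends (Go, Stop) or (Stop, Go), each half the time.
device = {(GO, STOP): 0.5, (STOP, GO): 0.5}

def is_correlated_equilibrium(device, payoff):
    """Check that no player gains by deviating from a recommendation."""
    for player in (1, 2):
        for rec in (GO, STOP):
            # Joint outcomes in which this player is told `rec`.
            cond = {a: p for a, p in device.items() if a[player - 1] == rec}
            total = sum(cond.values())
            if total == 0:
                continue  # this recommendation is never given
            for dev in (GO, STOP):  # candidate deviation
                obey = deviate = 0.0
                for joint, p in cond.items():
                    other = joint[2 - player]  # the other driver's action
                    mine = (rec, other) if player == 1 else (other, rec)
                    swap = (dev, other) if player == 1 else (other, dev)
                    obey += p / total * payoff[player][mine]
                    deviate += p / total * payoff[player][swap]
                if deviate > obey + 1e-12:
                    return False
    return True

print(is_correlated_equilibrium(device, payoff))  # -> True
```

Obeying the light is a best response: a driver told Go knows the other was told Stop, and vice versa. The device fairly averages the two one-sided pure Nash equilibria without any crashes.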
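The maximin rule described above is easy to compute for a matrix game. This sketch (with a made-up payoff matrix for the row player of a zero-sum game) finds the row player's maximin and the column player's minimax; because they coincide here, the game has a pure-strategy saddle point, which is an NE of the zero-sum game. In general, von Neumann's theorem guarantees the two values meet only once mixed strategies are allowed.

```python
# Maximin sketch for a two-player zero-sum game given by the row
# player's payoff matrix (the column player receives the negation).
# The matrix values are hypothetical.

def maximin(matrix):
    """Row action maximizing the worst-case (minimum) payoff."""
    worst = [min(row) for row in matrix]
    best = max(range(len(matrix)), key=lambda i: worst[i])
    return best, worst[best]

def minimax(matrix):
    """Column action minimizing the row player's best-case payoff."""
    cols = list(zip(*matrix))
    best_case = [max(col) for col in cols]
    best = min(range(len(cols)), key=lambda j: best_case[j])
    return best, best_case[best]

A = [[3, 1],
     [4, 2]]

row, v_low = maximin(A)   # row 1 guarantees at least 2
col, v_high = minimax(A)  # column 1 concedes at most 2
# v_low == v_high, so (row 1, column 1) is a saddle point.
print(row, col, v_low, v_high)  # -> 1 1 2 2
```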
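The contrast drawn above between the one-shot PD and tit-for-tat in the repeated game can be simulated directly. This sketch uses Axelrod's usual payoff numbers (T=5, R=3, P=1, S=0): defection is dominant in a single round, yet mutual tit-for-tat earns far more over many rounds.

```python
# Iterated Prisoner's Dilemma sketch with the payoffs commonly used
# in Axelrod's tournament: T=5, R=3, P=1, S=0.
C, D = "C", "D"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(opp_history):
    # Cooperate first, then copy the opponent's previous move.
    return C if not opp_history else opp_history[-1]

def always_defect(opp_history):
    return D

def play(s1, s2, rounds):
    """Run the repeated game and return both players' total scores."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)  # each strategy sees the other's history
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

# One shot: mutual defection, the dominant-strategy outcome...
print(play(always_defect, always_defect, 1))  # -> (1, 1)
# ...but over many rounds mutual tit-for-tat does far better,
print(play(tit_for_tat, tit_for_tat, 100))    # -> (300, 300)
# and tit-for-tat loses only slightly to a pure defector.
print(play(tit_for_tat, always_defect, 100))  # -> (99, 104)
```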
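The Prospect Theory gist above (concave over gains, convex and steeper over losses) can be illustrated numerically. This sketch uses the value-function form and parameter estimates from Tversky and Kahneman's 1992 paper (alpha = 0.88, lambda = 2.25); the particular gambles are made up for illustration.

```python
# Prospect-theory value function sketch (Tversky-Kahneman 1992
# estimates: curvature 0.88, loss aversion 2.25).

def value(x, alpha=0.88, lam=2.25):
    """Concave over gains, convex and steeper over losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Gains: a sure 50 beats a 50/50 chance of 100 (risk aversion).
sure_gain = value(50)
gamble_gain = 0.5 * value(100) + 0.5 * value(0)
print(sure_gain > gamble_gain)  # -> True

# Losses: a 50/50 chance of -100 beats a sure -50 (risk seeking),
# exactly the "trying to get back to break-even" pattern above.
sure_loss = value(-50)
gamble_loss = 0.5 * value(-100) + 0.5 * value(0)
print(gamble_loss > sure_loss)  # -> True
```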
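The St. Petersburg remark above can also be checked numerically: the game pays 2^k if the first heads appears on flip k, so its expected payout diverges, while expected log utility (Bernoulli's decreasing-marginal-utility resolution) converges. A small sketch:

```python
# St. Petersburg sketch: truncated expectations of the coin-flip game.
import math

def truncated_expected_value(n_rounds):
    # First heads on flip k has probability 2**-k and pays 2**k,
    # so every round contributes exactly 1 to the expectation.
    return sum((2 ** -k) * (2 ** k) for k in range(1, n_rounds + 1))

def truncated_expected_log_utility(n_rounds):
    # With log utility the series sum k * ln(2) / 2**k converges.
    return sum((2 ** -k) * math.log(2 ** k) for k in range(1, n_rounds + 1))

print(truncated_expected_value(50))        # grows without bound: 50.0 here
print(truncated_expected_log_utility(50))  # converges to 2*ln(2) ~ 1.386
```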


= Interested? Please put below ... =


Paul H. <br>
Kristen <br>
Saleha <br>
Frederic <br>
[[Alex Healing]] <br>
Aaron <br>
nathan menke <br>
[[Brian Lawler]] <br>
[[Mollie Poynton]] <br>
Heather<br>
rafal<br>
Monika <br>
[[Olaf Bochmann]]

Latest revision as of 05:44, 20 June 2007

Tutors: Will Braynen, Simon Angus

= Content (provisional) =
# Why Game theory? When Game theory?
# Simultaneous Games
## The Nash Equilibrium (NE)
## Some standard games (Prisoner's Dilemma, Stag Hunt)
# Sequential Games
## Sub-game perfect NE
# Repeated Games
# Computational Examples (NetLogo)
## Games and Interaction structures
# Applications and Links to other fields
## Biology
## Economics
## Philosophy
## Psychology

