Learning in Network Contexts: Experimental Results from Simulations

Amy Greenwald, Eric Friedman, and Scott Shenker

Abstract

This paper describes the results of simulation experiments performed on a suite of learning algorithms. We focus on games in {\em network contexts}: settings in which (1) agents have very limited information about the game, in that they do not know their own (or any other agent's) payoff function and merely observe the outcomes of their own play; and (2) play can be extremely asynchronous, with players updating their strategies at very different rates. Many learning algorithms have been proposed in the literature. We select a small sampling of such algorithms and use numerical simulation to explore the nature of asymptotic play. In particular, we examine the extent to which asymptotic play depends on three factors: limited information, asynchronous play, and the degree of responsiveness of the learning algorithm.
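To make the setting concrete, the following is a minimal sketch (not taken from the paper) of the kind of learning loop the abstract describes: each player observes only its own realized payoff for the action it played (limited information), and players update their strategies with different probabilities per round (asynchronous play). The two-action game, the payoff matrix, the epsilon-greedy rule, and all parameter values here are illustrative assumptions, not the algorithms or games studied in the paper.

```python
import random

# Hypothetical 2-player, 2-action game used only for illustration;
# these payoffs are NOT from the paper.
PAYOFFS = {  # (action0, action1) -> (payoff0, payoff1)
    (0, 0): (1.0, 1.0),
    (0, 1): (0.0, 2.0),
    (1, 0): (2.0, 0.0),
    (1, 1): (0.5, 0.5),
}

def play(rounds=5000, update_rates=(1.0, 0.2), epsilon=0.1, step=0.1, seed=0):
    """Epsilon-greedy learners that see only their own realized payoff
    (limited information) and revise their action with player-specific
    probabilities each round (asynchronous play)."""
    rng = random.Random(seed)
    values = [[0.0, 0.0], [0.0, 0.0]]  # per-player action-value estimates
    actions = [0, 0]
    for _ in range(rounds):
        for i in (0, 1):
            if rng.random() < update_rates[i]:  # asynchronous updates
                if rng.random() < epsilon:
                    actions[i] = rng.randrange(2)  # explore
                else:
                    actions[i] = max((0, 1), key=lambda a: values[i][a])
        payoffs = PAYOFFS[tuple(actions)]
        for i in (0, 1):
            a = actions[i]
            # each learner observes only its own payoff for the action played
            values[i][a] += step * (payoffs[i] - values[i][a])
    return values
```

The `update_rates` parameter captures the asynchrony the abstract refers to: player 0 reconsiders its action every round, while player 1 does so only 20% of the time, so the two players adapt at very different rates.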