Regret-minimizing algorithms are an important class of machine learning algorithms that are closely related to game-theoretic equilibria (e.g., Nash equilibria, correlated equilibria). In this work, we present a general framework from which we derive a class of ``no-regret'' learning algorithms, each of which converges to a corresponding set of equilibria. We show how to extend these algorithms to the more difficult naive setting. We also extend our framework to the case of convex games, providing the basis for a class of algorithms that can be exponentially faster than previously known algorithms for convex games. Finally, we apply our framework to the class of extensive-form games to derive the first class of algorithms that converge to the set of extensive-form correlated equilibria.
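
To make the connection between no-regret learning and equilibria concrete, the following is a minimal illustrative sketch, not the framework developed in this work: two players running regret matching (in the style of Hart and Mas-Colell) in a small matrix game. When every player's average external regret vanishes, the empirical distribution of joint play approaches the set of coarse correlated equilibria; stronger notions of regret strengthen the guarantee to correlated equilibria. The payoff matrix (matching pennies), horizon, and seed below are arbitrary choices made only for illustration.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Row player's payoff matrix for matching pennies; the column player's
# payoff is its negation (zero-sum). Chosen only because the equilibrium
# is easy to eyeball: both players mix uniformly.
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])
n = A.shape[0]
T = 100_000

cum_regret = [np.zeros(n), np.zeros(n)]  # cumulative external regret, per player
joint = np.zeros((n, n))                 # counts of realized joint actions

def strategy(r):
    """Regret matching: play in proportion to positive cumulative regret."""
    pos = np.maximum(r, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.full(n, 1.0 / n)

for _ in range(T):
    a0 = rng.choice(n, p=strategy(cum_regret[0]))
    a1 = rng.choice(n, p=strategy(cum_regret[1]))
    joint[a0, a1] += 1

    # External-regret update: payoff of each fixed action against the
    # opponent's realized action, minus the payoff actually received.
    u0 = A[:, a1]        # row player's payoffs for each of her actions
    u1 = -A[a0, :]       # column player's payoffs (zero-sum)
    cum_regret[0] += u0 - u0[a0]
    cum_regret[1] += u1 - u1[a1]

emp = joint / T
print("empirical joint distribution of play:\n", emp)
print("row-player marginal   :", emp.sum(axis=1))  # approaches (0.5, 0.5)
print("column-player marginal:", emp.sum(axis=0))  # approaches (0.5, 0.5)
\end{verbatim}

Because matching pennies is zero-sum, the marginals of the empirical play approach the players' (uniform) minimax strategies, which makes the convergence easy to check numerically.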