Section: New Results

Multi-agent learning

In contrast to [9], [3], [24], the works above focus squarely on multi-agent interactions that occur in discrete time (as is typically the case in practical applications). For games with finite action spaces, we showed in [16] that no-regret learning based on "follow the regularized leader" (FTRL) converges to Nash equilibrium in potential games, complementing the analysis of [15], which showed that this family of learning methods eliminates dominated strategies and converges locally to strict Nash equilibria. The convergence result of [16] was extended to mixed-strategy learning in games with continuous action spaces in [11], while [42], [28] established the convergence of no-regret regularized learning to variationally stable equilibria in continuous games, even under imperfect, delayed, or asynchronous feedback.
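To make the finite-game setting concrete, the following is a minimal sketch of FTRL with an entropic regularizer, under which the method reduces to the exponential-weights algorithm. The 2x2 identical-interest game (an exact potential game), the learning rate eta, and the step count are illustrative assumptions, not taken from [16] or [15].

```python
import numpy as np

# Assumed 2x2 identical-interest game: both players prefer to coordinate,
# so the game is an exact potential game (potential = common payoff).
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])  # player 1's payoffs; player 2's are A.T = A

def logit(y, eta):
    """Choice map induced by the entropic regularizer (softmax)."""
    z = eta * y - (eta * y).max()  # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

def ftrl(A, steps=5000, eta=0.1):
    """Follow the regularized leader with entropic regularization,
    i.e., the exponential-weights algorithm."""
    y1, y2 = np.zeros(2), np.zeros(2)  # cumulative payoff scores
    for _ in range(steps):
        x1, x2 = logit(y1, eta), logit(y2, eta)
        y1 += A @ x2    # payoff of each pure strategy of player 1
        y2 += A.T @ x1  # likewise for player 2
    return x1, x2

x1, x2 = ftrl(A)
print(np.round(x1, 3), np.round(x2, 3))  # both -> [1, 0], a strict Nash equilibrium
```

Here both players' mixed strategies concentrate on the potential-maximizing profile, which is also a strict Nash equilibrium; swapping in a different regularizer changes the choice map but preserves the no-regret property.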
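For the continuous-action setting, a minimal sketch of no-regret regularized learning is projected stochastic gradient ascent (regularized learning with a Euclidean regularizer) run on noisy first-order feedback. The quadratic two-player game below is assumed purely for illustration: for |c| < 2 its pseudo-gradient is strongly monotone, so its unique Nash equilibrium is variationally stable; the coefficients b and c, the noise level, and the 1/t step sizes are illustrative choices rather than those of [42], [28].

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed two-player quadratic game on [0,1]^2 with payoffs
#   u_i(x) = b_i*x_i - x_i**2 - c*x_i*x_{-i}
b, c = np.array([1.0, 0.8]), 0.5

def v(x):
    """Individual payoff gradients v_i(x) = du_i/dx_i."""
    return b - 2.0 * x - c * x[::-1]  # x[::-1] is the opponent's action

x = np.array([0.9, 0.1])  # arbitrary initialization
for t in range(1, 20001):
    g = v(x) + rng.normal(scale=0.5, size=2)  # imperfect (noisy) feedback
    x = np.clip(x + g / t, 0.0, 1.0)          # Euclidean projection step

# Closed-form Nash equilibrium of the quadratic game, for comparison:
x_star = np.linalg.solve(np.array([[2.0, c], [c, 2.0]]), b)
print(np.round(x, 3), "vs", np.round(x_star, 3))  # iterate approaches x_star
```

Despite the persistent gradient noise, the vanishing step sizes let the iterate track the variationally stable equilibrium, which is the qualitative behavior established in the cited works.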