There are different approaches to modelling a large collection of interacting particles or agents. One approach is based on a microscopic viewpoint, where one wants to determine the attributes (such as position, velocity, etc.) of each individual particle at any given time. Mathematically, such an approach can be studied via a system of coupled ordinary or stochastic differential equations having these attributes as unknowns. However, numerical solvers based on this approach are in general quite costly, and often even infeasible, given the huge number (potentially billions) of unknowns. A macroscopic approach, which arose mainly in the framework of statistical physics, adopts a so-called 'mean field perspective'. This aims to describe the behaviour of the particles via the time evolution of their density (i.e. as a 'cloud'). Such models typically lead to partial differential equations having these macroscopic quantities as unknowns.
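As a rough illustration of the two viewpoints (the drift b, noise intensity σ, and the particular Fokker–Planck form below are placeholder choices for the sketch, not taken from this proposal), a microscopic model of N interacting particles and its macroscopic mean-field counterpart might be written as:

```latex
% Microscopic description: N coupled SDEs, one per particle,
% interacting through the empirical measure \mu^N_t.
\mathrm{d}X^i_t = b\bigl(X^i_t, \mu^N_t\bigr)\,\mathrm{d}t + \sigma\,\mathrm{d}W^i_t,
\qquad
\mu^N_t := \frac{1}{N}\sum_{j=1}^N \delta_{X^j_t},
\qquad i = 1,\dots,N.

% Macroscopic description: as N \to \infty, the density \rho(t,x) of the
% particle 'cloud' evolves according to a PDE, here a Fokker--Planck
% (forward Kolmogorov) equation:
\partial_t \rho \;=\; \frac{\sigma^2}{2}\,\Delta \rho \;-\; \operatorname{div}\bigl(b(x,\rho)\,\rho\bigr).
```

The point of the sketch is the change of unknowns: N vector-valued trajectories on the microscopic side versus a single density on the macroscopic side.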
Since its introduction (initiated around 2006, independently by Huang, Malhamé and Caines, and by Lasry and Lions), the theory of mean field games has found multiple applications in both pure and applied mathematics. It studies strategic decision making in large populations where the individual agents interact via certain mean-field quantities (through the density, velocities, controls, etc. of the other agents). It provides powerful tools for applications ranging from quantum mechanics to biodiversity and ecology, and it has already had a significant impact on models in the social sciences, macroeconomics, stock markets, risk management and wealth distribution, and on biological systems. The theory has its roots in mean field theory from statistical physics; however, in mean field games the agents seek optimal strategies. This is in contrast with physical models, where particles are typically governed by the laws of nature. So, in mean field games, the general goal is to find and characterise Nash-type equilibrium configurations.
The master equation in mean field games was introduced by P.-L. Lions, and it represents the heart of the theory. It is an infinite-dimensional, nonlocal Hamilton–Jacobi–Bellman equation posed on the space of probability measures, and it encodes the Nash equilibria of mean field games. Among other things, it serves as a powerful tool to prove and quantify the mean field limit and the propagation of chaos for stochastic games as the number of agents tends to infinity, which is important in applications as well as of great theoretical interest. The question of the solvability of the master equation initiated an important programme and outstanding open problems in the field. By the nature of this equation, it is expected that in general classical solutions will break down in finite time. Thus, its solvability has been established either for a short time horizon or under special structural conditions on the data, such as the so-called Lasry–Lions monotonicity condition. Moreover, most results in the literature rely on the regularisation effect of a nondegenerate idiosyncratic noise.
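For orientation only, one frequently quoted form of the master equation (here with nondegenerate idiosyncratic noise of intensity β > 0 and no common noise; conventions on signs and notation vary across the literature) reads as follows, where U(t,x,m) is the value function, H the Hamiltonian, F a coupling function, and δU/δm the flat derivative with respect to the measure:

```latex
-\partial_t U(t,x,m)
\;-\; \beta\,\Delta_x U
\;+\; H\bigl(x, D_x U\bigr)
\;-\; \beta \int_{\mathbb{R}^d} \Delta_y \frac{\delta U}{\delta m}(t,x,m,y)\,\mathrm{d}m(y)
\\
\;+\; \int_{\mathbb{R}^d} D_y \frac{\delta U}{\delta m}(t,x,m,y)\cdot
      D_p H\bigl(y, D_x U(t,y,m)\bigr)\,\mathrm{d}m(y)
\;=\; F(x,m),
```

posed on [0,T] × ℝ^d × P(ℝ^d). The infinite dimensionality and nonlocality referred to above come precisely from the measure variable m and the integral terms involving δU/δm.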
In this proposal, we will study degenerate master equations in the absence of such a regularisation effect or of the structural assumptions imposed by the Lasry–Lions monotonicity condition. Instead, we will rely on the so-called displacement monotonicity condition, which stems from the notion of displacement convexity arising in the theory of optimal transport. This condition allows us to investigate a general class of degenerate models (which are sometimes closer to real-life applications) in a unified way. Roughly speaking, displacement monotonicity also helps restore important regularity properties that are lost in the absence of nondegenerate noise. This proposal includes both purely deterministic models and models subject to common noise.
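As a hedged sketch (exact regularity assumptions and formulations vary in the literature), the two monotonicity conditions mentioned above can be contrasted for a coupling function F : ℝ^d × P₂(ℝ^d) → ℝ as follows:

```latex
% Lasry--Lions monotonicity: for all \mu, \nu \in \mathcal{P}_2(\mathbb{R}^d),
\int_{\mathbb{R}^d} \bigl(F(x,\mu) - F(x,\nu)\bigr)\,\mathrm{d}(\mu-\nu)(x) \;\ge\; 0.

% Displacement monotonicity: for all \mu, \nu \in \mathcal{P}_2(\mathbb{R}^d)
% and all couplings \pi \in \Pi(\mu,\nu),
\int_{\mathbb{R}^{2d}} \bigl\langle D_x F(x,\mu) - D_x F(y,\nu),\; x - y \bigr\rangle\,\mathrm{d}\pi(x,y) \;\ge\; 0.
```

The first condition compares the values of F against the difference of measures, while the second compares gradients along transport plans, which is what connects it to displacement convexity in optimal transport.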
