# DeepMind Uses Nash Equilibrium To Solve ML Problems

The most common method of teaching AI systems to perform tasks is training on examples. The process continues until the system is properly trained and errors are minimized. However, learning this way is a solitary endeavor.

Humans learn from interactions, and scientists have found that the same applies to machines. AI research lab DeepMind has already trained AI agents to play Capture the Flag and to reach Grandmaster level at StarCraft II. Drawing on these experiences, DeepMind introduced a game-theoretic approach to solving fundamental machine learning problems.

Principal Component Analysis (PCA) is a dimensionality reduction technique that shrinks large data sets while retaining most of the original information. For this research, the DeepMind team reformulated PCA as a competitive multi-agent game called EigenGame.

**Principal Component Analysis**

PCA burst onto the scene in the early 1900s and is a long-standing technique for processing high-dimensional data. Over the years, it has become a standard first step in the data processing pipeline, used to aggregate and visualize data, and its low-dimensional representations are useful for regression and classification tasks.
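For reference, classical single-machine PCA can be sketched in a few lines of NumPy via the singular value decomposition (an illustrative example, not DeepMind's code; the data set here is synthetic):

```python
import numpy as np

# Illustrative sketch of classical PCA via the SVD.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))          # 500 samples, 10 features
X = X - X.mean(axis=0)                  # center the data

# Top-k principal components come from the right singular vectors.
k = 2
U, S, Vt = np.linalg.svd(X, full_matrices=False)
components = Vt[:k]                     # (k, 10) principal directions
reduced = X @ components.T              # (500, 2) low-dimensional representation
```

This direct approach is exactly what becomes a bottleneck at scale, motivating the randomized and game-theoretic alternatives discussed below.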

Even a century later, the technique remains relevant and an active area of research, for two main reasons:

- As the amount of available data has grown, PCA itself has become a computational bottleneck. To improve its scaling, researchers have turned to randomized algorithms that exploit advances in deep-learning-focused computing hardware. However, better optimization methods remain an open research question.
- Since PCA shares common solutions with several machine learning and engineering problems, it is a promising area for developing insights and algorithms that apply widely across branches of the ML tree.

**EigenGame**

DeepMind recently introduced a multi-agent perspective on PCA (traditionally a single-agent problem) that provides a way to scale to massive datasets that were previously computationally prohibitive. Presented at ICLR 2021, the approach is described in the paper “EigenGame: PCA as a Nash Equilibrium”.

In this approach, the DeepMind team designed the game around eigenvectors. Eigenvectors capture the principal directions of variance in the data and are orthogonal to each other. In EigenGame, each player controls one eigenvector. Players earn reward by explaining variance in the data, but are penalized for aligning too closely with the other players. So while Player 1 maximizes the variance it explains, the players below it in the hierarchy must also minimize their alignment with the players above them. The combination of rewards and penalties defines each player’s utility.
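One plausible form of this utility, following the description above, rewards player `i` for the variance its vector explains and penalizes its (variance-weighted) alignment with the players above it in the hierarchy. This is a hypothetical sketch; the function and variable names are assumptions, not DeepMind's code:

```python
import numpy as np

def eigengame_utility(V, M, i):
    """Sketch of an EigenGame-style utility for player i.

    V -- (k, d) array, one unit-norm vector per player
    M -- (d, d) data covariance-like matrix (e.g. X.T @ X)
    """
    vi = V[i]
    variance = vi @ M @ vi                       # reward: explained variance
    penalty = sum((vi @ M @ V[j]) ** 2 / (V[j] @ M @ V[j])
                  for j in range(i))             # penalty: alignment with parents
    return variance - penalty
```

With a diagonal covariance and players sitting exactly on the eigenvectors, each player's utility is simply its eigenvalue; a player that duplicates a parent's vector has its reward cancelled by the alignment penalty.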

With properly designed variance and alignment terms determined in EigenGame, the researchers were able to show:

- If all the players play optimally, together they reach the game’s Nash equilibrium, which is exactly the PCA solution. A Nash equilibrium, named after mathematician John Forbes Nash Jr., is a solution concept in game theory: each player is assumed to know the equilibrium strategies of the other players, and no player gains anything by changing only their own strategy.
- The PCA solution can be found if each player uses gradient ascent to maximize their utility independently and simultaneously.
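Putting the two results together, a toy version of this simultaneous gradient ascent might look like the following sketch. It is an illustration of the idea only, not DeepMind's distributed implementation; the update direction is proportional to the gradient of the variance-minus-alignment utility described above, projected back onto the unit sphere:

```python
import numpy as np

def eigengame_ascent(X, k, steps=1000, lr=0.05, seed=0):
    """Toy simultaneous gradient ascent on EigenGame-style utilities.

    X -- (n, d) centered data matrix; returns (k, d) unit vectors that
    approximate the top-k principal directions.
    """
    M = X.T @ X / X.shape[0]                     # sample covariance
    rng = np.random.default_rng(seed)
    V = rng.normal(size=(k, X.shape[1]))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(steps):
        for i in range(k):                       # each player updates independently
            vi = V[i]
            grad = M @ vi                        # variance (reward) term
            for j in range(i):                   # alignment penalties from parents
                vj = V[j]
                grad -= (vi @ M @ vj) / (vj @ M @ vj) * (M @ vj)
            grad -= (grad @ vi) * vi             # project onto the sphere's tangent
            vi = vi + lr * grad
            V[i] = vi / np.linalg.norm(vi)       # players live on the unit sphere
    return V
```

Because each player's update depends only on the current vectors, not on any shared optimizer state, the inner loop can run in parallel across devices, which is what makes the distributed TPU implementation described below possible.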

The independence property of the gradient-ascent updates is important because it allows the computation to be distributed across several Google Cloud TPUs. This enables both data and model parallelism, which lets the algorithm scale to very large data. With EigenGame, the researchers were able to find the principal components of hundred-terabyte datasets containing “millions of entities or billions of rows” in just a few hours.
