Shallow neural networks as Wasserstein gradient flows

Artificial neural networks (ANNs) consist of layers of artificial "neurons" which take in information from the previous layer and output information to neurons in the next layer. Gradient descent is a common method for updating the weights of each neuron based on training data. While in practice every layer of a neural network has only finitely many neurons, it is beneficial to consider a neural network layer with infinitely many neurons, for the sake of developing a theory that explains how ANNs work. In particular, from this viewpoint the process of updating the neuron weights for a shallow neural network can be described by a Wasserstein gradient flow.

Motivation

Shallow Neural Networks

Let us introduce the mathematical framework and notation for a neural network with a single hidden layer. Let <math> D \subseteq \mathbb{R}^d </math> be open. The set <math> D </math> represents the space of inputs into the network. There is some unknown function <math> f : D \to \mathbb{R} </math> which we would like to approximate. Let <math> N </math> be the number of neurons in the hidden layer. Let

: <math> F_N : D \times \Omega^N \to \mathbb{R} </math>

be given by

: <math> F_N(x, \omega_1, \dots, \omega_N,\theta_1, \dots, \theta_N) = \frac{1}{N} \sum_{i=1}^N \omega_i h(\theta_i,x), </math>

where <math> h </math> is a fixed [https://en.wikipedia.org/wiki/Activation_function activation function] and <math> \Omega </math> is a space of possible single-neuron parameters <math> (\omega_i, \theta_i) </math>; we write <math> (\omega, \theta) = (\omega_1, \dots, \omega_N,\theta_1, \dots, \theta_N) </math> for the full list of parameters. The goal is to use training data to repeatedly update the weights <math> \omega_i </math> and <math> \theta_i </math> based on how close <math> f_{N, \omega, \theta} := F_N( \cdot,  \omega_1, \dots, \omega_N,\theta_1, \dots, \theta_N) </math> is to the function <math> f </math>. More concretely, we want to find <math> \omega, \theta </math> that minimize the loss function:

: <math> l(f,f_{N, \omega, \theta}) := \frac{1}{2} \int_{D} |f(x)-f_{N,\omega,\theta}(x)|^2 \, dx. </math>
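
To make the finite-neuron objective concrete, here is a minimal numerical sketch. The choice of activation (a ReLU applied to <math> \theta \cdot x </math>), the target function <math> f </math>, the domain <math> D = (0,1)^2 </math>, and the Monte Carlo approximation of the integral are illustrative assumptions, not part of the framework above.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def h(theta, x):
    # Illustrative activation: ReLU applied to the inner product theta . x
    return np.maximum(theta @ x, 0.0)

def f_N(x, omega, theta):
    # f_{N, omega, theta}(x) = (1/N) * sum_i omega_i * h(theta_i, x)
    N = len(omega)
    return sum(omega[i] * h(theta[i], x) for i in range(N)) / N

def loss(f, omega, theta, samples):
    # Monte Carlo estimate of l(f, f_N) = (1/2) * int_D |f(x) - f_N(x)|^2 dx
    # (exact in expectation up to the volume factor |D|, which is 1 for D = (0,1)^2)
    errors = [(f(x) - f_N(x, omega, theta)) ** 2 for x in samples]
    return 0.5 * np.mean(errors)

# Hypothetical example: d = 2 inputs, N = 50 neurons, random initial weights
d, N = 2, 50
f = lambda x: np.sin(np.pi * x[0]) * x[1]   # "unknown" target, chosen here for illustration
omega = rng.normal(size=N)
theta = rng.normal(size=(N, d))
samples = rng.uniform(size=(200, d))        # uniform samples from D = (0,1)^2
print(loss(f, omega, theta, samples))
</syntaxhighlight>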

A standard way to choose and update the weights is to start with a random choice of weights and perform gradient descent on these parameters (the explicit update is written out below). Unfortunately, this minimization problem is in general non-convex, so gradient descent may fail to reach the minimizer. To avoid this issue, it is useful to instead study a neural network model with infinitely many neurons.
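
For reference, assuming <math> h </math> is differentiable in <math> \theta </math>, one step of gradient descent with step size <math> \tau > 0 </math> updates the weights via

: <math> \omega_i \leftarrow \omega_i - \tau \frac{\partial l}{\partial \omega_i}, \qquad \theta_i \leftarrow \theta_i - \tau \frac{\partial l}{\partial \theta_i}, </math>

where differentiating the loss above gives

: <math> \frac{\partial l}{\partial \omega_i} = \frac{1}{N} \int_D \left( f_{N,\omega,\theta}(x) - f(x) \right) h(\theta_i, x) \, dx, \qquad \frac{\partial l}{\partial \theta_i} = \frac{1}{N} \int_D \left( f_{N,\omega,\theta}(x) - f(x) \right) \omega_i \, \nabla_\theta h(\theta_i, x) \, dx. </math>

In practice these integrals are estimated from the training data.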


Continuous Formulation

For the continuous formulation (i.e. when <math> N \to \infty </math>), we rephrase the above mathematical framework. In this case, it no longer makes sense to look for finitely many weights that minimize the loss function. We instead look for a probability measure <math> \mu </math> on the parameter space <math> \Omega </math> such that the function

: <math> f_\mu(x) := \int_\Omega \omega \, h(\theta, x) \, d\mu(\omega, \theta) </math>

minimizes the loss function:

: <math> l(f, f_\mu) := \frac{1}{2} \int_D |f(x) - f_\mu(x)|^2 \, dx. </math>
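
In particular, the finite-neuron model is recovered by taking <math> \mu </math> to be the empirical measure of the parameters: if

: <math> \mu_N = \frac{1}{N} \sum_{i=1}^N \delta_{(\omega_i, \theta_i)}, </math>

then <math> f_{\mu_N} = f_{N, \omega, \theta} </math>, so the finite-dimensional minimization problem is a special case of the continuous one.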

Minimization Problem

Wasserstein Gradient Flow

Main Results

Consistency Between Infinite and Finite Cases

References