Shallow neural networks as Wasserstein gradient flows

==Motivation==

Artificial neural networks (ANNs) consist of layers of artificial "neurons" which take in information from the previous layer and output information to neurons in the next layer. Gradient descent is a common method for updating the weights of each neuron based on training data. While in practice every layer of a neural network has only finitely many neurons, it is beneficial to consider a neural network layer with infinitely many neurons, for the sake of developing a theory that explains how ANNs work. In particular, from this viewpoint the process of updating the neuron weights for a shallow neural network can be described by a Wasserstein gradient flow.

==Single Layer Neural Networks==


{See also [https://en.wikipedia.org/wiki/Mathematics_of_artificial_neural_networks Mathematics of Artificial Neural Networks]}




===Discrete Formulation===

Let us introduce the mathematical framework and notation for a neural network with a single hidden layer.[1] Let <math> D \subseteq \mathbb{R}^d </math> be open. The set <math> D </math> represents the space of inputs into the network. There is some unknown function <math> f : D \rightarrow \mathbb{R} </math> which we would like to approximate. Let <math> N </math> be the number of neurons in the hidden layer. Define <math> f_N : D \rightarrow \mathbb{R} </math> by

:<math> f_N(x) := \frac{1}{N} \sum_{i=1}^{N} \Phi(\xi_i, x) </math>

where <math> \Phi </math> is a fixed activation function, <math> \xi_i = (\omega_i, \theta_i) \in \Omega </math>, and <math> \Omega </math> is a space of possible parameters. The goal is to use training data to repeatedly update the weights <math> \omega_i </math> and <math> \theta_i </math> based on how close <math> f_N </math> is to the function <math> f </math>. More concretely, we want to find <math> \xi_1, \dots, \xi_N </math> that minimize the loss function:

:<math> \frac{1}{2} \int_{D} (f - f_N)^2 dx. </math>

A standard way to choose and update the weights is to start with a random choice of weights and perform gradient descent on these parameters; a minimal sketch of this procedure is given below. Unfortunately, this problem is non-convex, so gradient descent is not guaranteed to reach a global minimizer. In practice, however, neural networks are surprisingly good at finding the minimizer. A nicer minimization problem that may provide insight into how neural networks work is a neural network model with infinitely many neurons.
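
The following is a minimal runnable sketch of this gradient descent. The specific choices here are illustrative assumptions rather than part of the formulation above: a ReLU-type activation <math> \Phi(\xi, x) = \omega \max(\theta \cdot x, 0) </math> with <math> \xi = (\omega, \theta) </math>, a toy target <math> f </math>, and a Monte Carlo approximation of the integral over <math> D = (0,1)^2 </math>.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (our choices, not prescribed by the article):
# D = (0,1)^2, toy target f, and Phi(xi, x) = omega * max(theta . x, 0).
d, N, M = 2, 50, 1000                      # input dim, neurons, sample points
f = lambda x: np.sin(np.pi * x[:, 0]) * x[:, 1]
x = rng.random((M, d))                     # Monte Carlo samples from D
y = f(x)

omega = rng.standard_normal(N)             # outer weights omega_i
theta = rng.standard_normal((N, d))        # inner weights theta_i

lr = 1.0
for step in range(5000):
    pre = x @ theta.T                      # (M, N): theta_i . x_m
    act = np.maximum(pre, 0.0)             # ReLU part of Phi
    resid = act @ omega / N - y            # f_N(x_m) - f(x_m)
    # Gradients of the Monte Carlo loss 0.5 * mean(resid**2)
    g_omega = (resid[:, None] * act).mean(axis=0) / N
    g_theta = (resid[:, None] * omega * (pre > 0)).T @ x / (M * N)
    omega -= lr * g_omega                  # gradient descent step
    theta -= lr * g_theta

print("final loss:", 0.5 * np.mean(resid ** 2))
</syntaxhighlight>
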
===Continuous Formulation===

For the continuous formulation (i.e. when <math> N \rightarrow \infty </math>), we rephrase the above mathematical framework. In this case, it no longer makes sense to look for finitely many weights that minimize the loss function. We instead look for a probability measure <math> \mu \in \mathcal{P}(\Omega) </math> such that

:<math> f_\mu(x) := \int_{\Omega} \Phi(\xi, x) d \mu (\xi) </math>

minimizes the loss function:

:<math> F(\mu) := \frac{1}{2} \int_{D} (f - f_\mu)^2 dx </math>.


Here <math>\Phi(\xi, x)</math> is an activation function with parameter <math> \xi = (\omega, \theta) \in \Omega </math>. We will in fact restrict choices of <math> \mu </math> to probability measures with finite second moment, denoted <math> \mathcal{P}_2(\Omega) </math>. This is a small technicality to ensure that the Wasserstein metric is indeed a metric.


Note that by restricting choices of <math> \mu </math> to probability measures of the form <math> \mu_N = \frac{1}{N} \sum_{i=1}^{N} \delta_{\xi_i} </math>, the above minimization problem generalizes to the case with finitely many neurons as well.
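
Indeed, for such a <math> \mu_N </math>,
:<math> f_{\mu_N}(x) = \int_{\Omega} \Phi(\xi, x) d\mu_N(\xi) = \frac{1}{N} \sum_{i=1}^{N} \Phi(\xi_i, x), </math>
which is exactly the finite-neuron function <math> f_N </math> from the discrete formulation.
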
To avoid overfitting the network to the training data, a potential term is added to the loss function. For the remainder of this article, we define the loss function to be:
:<math> F(\mu) := \frac{1}{2} \int_{D} (f - (\int_{\Omega} \Phi(\xi, x) d \mu (\xi)))^2 dx  +  \int_{\Omega} V(\xi)d\mu(\xi)</math>


for a convex potential function <math> V: \Omega \rightarrow \mathbb{R} </math>. Often we choose <math> V(\xi) = \frac{\lambda}{2} |\xi|^2 </math>. In fact, <math> F </math> is convex (along linear interpolations), in contrast to the loss function in the finite neuron case.
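
To see this, note that <math> \mu \mapsto f_\mu </math> is linear in <math> \mu </math>, so along the linear interpolation <math> \mu_t = (1-t)\mu_0 + t\mu_1 </math> we have
:<math> F(\mu_t) = \frac{1}{2} \int_{D} \big( f - (1-t) f_{\mu_0} - t f_{\mu_1} \big)^2 dx + (1-t)\int_{\Omega} V d\mu_0 + t \int_{\Omega} V d\mu_1, </math>
which is a convex quadratic function of <math> t </math>. By contrast, the finite-neuron loss is generally non-convex in the parameters <math> \xi_i </math>, since <math> \Phi </math> is non-linear in <math> \xi </math>.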


==Gradient Flow==


<ref name="Schiebinger"/>  
When <math> X \subseteq \mathbb{R}^n </math>, the gradient flow of a differentiable function <math> g: X \rightarrow \mathbb{R} </math> starting at a point <math> x_0 </math> is a curve <math>x(t): [0, T) \rightarrow X </math> satisfying the differential equation


:<math> \frac{d}{dt}x(t) = - \nabla g (x(t)), \quad x(0)=x_0 </math>.


where <math> \nabla g</math> is the gradient of <math>g</math>.<ref name="Schiebinger"/>
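
For example, if <math> g(x) = \frac{1}{2}|x|^2 </math>, the gradient flow equation reads <math> \frac{d}{dt}x(t) = -x(t) </math>, with solution
:<math> x(t) = x_0 e^{-t}, </math>
which flows to the unique minimizer <math> x = 0 </math> as <math> t \rightarrow \infty </math>.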


Crucially, the gradient flow heads in the direction that decreases the value of <math> g </math> the fastest. We would like to use this nice property of gradient flow in our setting with the functional <math> F </math>. However, it is not immediately straightforward how to do this, since <math> F </math> is defined on the space of probability measures, rather than on <math> \mathbb{R}^n </math>, so the usual gradient is not defined. Before we generalize the notion of gradient flow, recall that <math> \nabla g </math> is the unique function from <math> \mathbb{R}^n </math> to <math> \mathbb{R}^n </math> such that


:<math>\langle\nabla g , v \rangle = D_v g </math>,


where <math> D_v g </math> is the directional derivative of <math> g </math> in the direction <math> v </math>. Motivated by this and using the [http://34.106.105.83/wiki/Formal_Riemannian_Structure_of_the_Wasserstein_metric Riemannian structure of the space of probability measures], we can define a notion of gradient for our loss functional <math> F </math>.


Note that one can define [http://34.106.105.83/wiki/Gradient_flows_in_Hilbert_spaces gradient flows in a general Hilbert space].


===Wasserstein Gradient / Subdifferential===


We are looking for an element <math> \nabla_{W_2} F (\mu_*) </math> in the tangent space of <math>\mathcal{P}_2(\Omega) </math> at <math> \mu_* </math> such that


:<math> \langle  \nabla_{W_2} F (\mu_*), \frac{d}{dt}\mu(t)|_{t=0} \rangle_{W_2} = \lim_{h \rightarrow 0} \frac{F(\mu(h)) - F(\mu_*)}{h}</math>


for any absolutely continuous curve <math> \mu(t) </math> in <math>\mathcal{P}_2(\Omega) </math> with <math> \mu(0)=\mu_* </math>.
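
Here <math> \langle \cdot, \cdot \rangle_{W_2} </math> denotes the formal Riemannian metric on <math> \mathcal{P}_2(\Omega) </math>: a tangent vector at <math> \mu </math> can be written as <math> s = - \nabla \cdot (v \mu) </math> for a velocity field <math> v </math>, and for <math> s_i = - \nabla \cdot (v_i \mu) </math> one sets
:<math> \langle s_1, s_2 \rangle_{W_2} := \int_{\Omega} v_1 \cdot v_2 \, d\mu. </math>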


We claim that in fact


:<math> \nabla_{W_2} F (\mu_*) = - \nabla \cdot (\mu_* \nabla \frac{\delta F}{\delta \mu_*}) </math>


where <math> \frac{\delta F}{\delta \mu_*} </math> is the first variation of <math> F </math> at <math> \mu_* </math> <ref name="Ambrosio" /> (also called the [https://en.wikipedia.org/wiki/Functional_derivative#cite_note-ParrYangP246A.2-3 functional derivative]).
We provide a formal argument for this equality; it is most transparent when <math> \mu </math> is absolutely continuous with respect to the Lebesgue measure. In this case, we can think of <math> \mu </math> as a density function with <math> d \mu(x) = \mu(x) dx </math>. We can further assume that <math> \mu </math> is in <math> L^2(\Omega) </math>, so that <math> F </math> also has an <math> L^2 </math> gradient, and in fact
:<math> \nabla_{L^2}F(\mu) = \frac{\delta F}{\delta \mu} </math>
Thus, by the definition of the <math> L^2 </math> gradient and the <math> L^2 </math> inner product, we should have
:<math>\lim_{h \rightarrow 0} \frac{F(\mu(h)) - F(\mu_*)}{h}= \langle \frac{\delta F}{\delta \mu_*} , \frac{d}{dt}\mu(t)|_{t=0} \rangle_{L^2}</math>
:<math> = \int \frac{\delta F}{\delta \mu_*} \frac{d}{dt}\mu(t)|_{t=0} dx </math>
by the continuity equation for <math>\mu(t) </math> and the divergence theorem,
:<math> \int \frac{\delta F}{\delta \mu_*} \frac{d}{dt}\mu(t)|_{t=0} dx = - \int \frac{\delta F}{\delta \mu_*} (\nabla \cdot (v_*\mu_*)) dx= \int \nabla \frac{\delta F}{\delta \mu_*} (v_*\mu_*)dx = \int \nabla \frac{\delta F}{\delta \mu_*} v_* d\mu_*</math>
where <math> v_* </math> is the unique velocity vector corresponding to <math> \mu_* </math> by the equivalence of absolutely continuous curves and solutions of [http://34.106.105.83/wiki/The_continuity_equation_and_Benamour_Brenier_formula the continuity equation]. Now by the definition of the inner product structure of <math> (\mathcal{P}_2(\mathbb{R}^d), W_2)</math>, we have
:<math>\int \nabla \frac{\delta F}{\delta \mu_*} v_* d\mu_* = \langle - \nabla \cdot (\mu_* \nabla \frac{\delta F}{\delta \mu_*}),  \frac{d}{dt}\mu(t)|_{t=0} \rangle_{W_2}.</math>
Therefore we have
:<math>\nabla_{W_2}F(\mu_*)= - \nabla \cdot (\mu_* \nabla \frac{\delta F}{\delta \mu_*}),</math>
so the gradient flow for <math> F </math> is an absolutely continuous curve of probability measures <math> \mu_t </math> such that
:<math> \partial_t \mu_t  = \nabla \cdot (\mu_t \nabla \frac{\delta F}{\delta \mu_t}) </math>.
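
As a sanity check against a classical example, for the entropy functional <math> F(\mu) = \int \mu \log \mu \, dx </math> the first variation is <math> \frac{\delta F}{\delta \mu} = \log \mu + 1 </math>, and the gradient flow equation becomes the heat equation:
:<math> \partial_t \mu_t = \nabla \cdot ( \mu_t \nabla (\log \mu_t + 1)) = \nabla \cdot ( \nabla \mu_t) = \Delta \mu_t. </math>
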
Now we actually compute this gradient flow for our loss functional <math> F</math>.
===Computing the Wasserstein Gradient Flow for F===
First, we compute the first variation (or [https://en.wikipedia.org/wiki/Functional_derivative functional derivative]) of <math> F </math>, viewed as a functional from <math>\mathcal{P}(\Omega) </math> to <math> \mathbb{R} </math>:
:<math> \frac{\delta F}{\delta \mu}(\mu_*) = \int_D \Phi( \cdot, x) \left[ \int_\Omega \Phi(\bar{\xi},x)d \mu_*(\bar{\xi}) - f(x) \right] dx + V </math>
Therefore we have
: <math> \nabla \frac{\delta F}{\delta \mu}(\mu_*) = \int_D \nabla_{\xi} \Phi( \cdot, x) \left[ \int_\Omega \Phi(\bar{\xi},x)d \mu_*(\bar{\xi}) - f(x) \right] dx + \nabla V .</math>
Hence the Wasserstein gradient flow is
:<math> \partial_t \mu_t = \nabla \cdot \left( \mu_t \nabla \frac{\delta F}{\delta \mu} (\mu_t) \right) </math>.
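
Restricting this flow to empirical measures <math> \mu_N = \frac{1}{N} \sum_{i=1}^{N} \delta_{\xi_i(t)} </math> formally yields the coupled particle ODEs <math> \frac{d}{dt} \xi_i = - \nabla \frac{\delta F}{\delta \mu}(\mu_N)(\xi_i) </math>, which coincide (up to a factor of <math> N </math> in the time scale) with the Euclidean gradient flow of the regularized finite-neuron loss. The following is a minimal sketch of an explicit Euler discretization of these particle dynamics, under the same illustrative assumptions as the earlier sketch (ReLU-type activation, toy target, Monte Carlo quadrature over <math> D </math>) together with <math> V(\xi) = \frac{\lambda}{2} |\xi|^2 </math>.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Same illustrative choices as the earlier sketch, plus V(xi) = (lam/2)|xi|^2.
d, N, M, lam, dt = 2, 50, 1000, 1e-3, 0.1
f = lambda x: np.sin(np.pi * x[:, 0]) * x[:, 1]
x = rng.random((M, d))
y = f(x)
omega = rng.standard_normal(N)
theta = rng.standard_normal((N, d))

for step in range(5000):
    pre = x @ theta.T
    act = np.maximum(pre, 0.0)
    resid = act @ omega / N - y            # f_mu(x_m) - f(x_m) for mu = mu_N
    # Particle velocity -grad_xi (dF/dmu)(mu_N)(xi_i), where
    # (dF/dmu)(xi) = int_D Phi(xi, x) * resid(x) dx + V(xi).
    v_omega = -((resid[:, None] * act).mean(axis=0) + lam * omega)
    v_theta = -((resid[:, None] * omega * (pre > 0)).T @ x / M + lam * theta)
    omega += dt * v_omega                  # explicit Euler step of
    theta += dt * v_theta                  # d(xi_i)/dt = v(xi_i)
</syntaxhighlight>
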
==Open Problems==
===Generalization===
[https://en.wikipedia.org/wiki/Generalization_error Generalization error] measures how well a model performs on new data outside of the training set. In practice, gradient descent generalizes remarkably well, but it is an open problem to provide a theoretical framework that guarantees generalization. Wang, Meng, Chen and Liu (2021) made progress by studying the implicit regularization of the algorithm and providing a framework for convergence.<ref name="Wang" />


==References==
<references>


<ref name="Schiebinger"> [https://personal.math.ubc.ca/~geoff/courses/W2019T1/Lecture16.pdf G. Schiebinger, ''Gradient Flow in Wasserstein Space''] </ref>
<ref name="Schiebinger"> [https://personal.math.ubc.ca/~geoff/courses/W2019T1/Lecture16.pdf G. Schiebinger, ''Gradient Flow in Wasserstein Space''] </ref>
<ref name="Wang"> [https://arxiv.org/pdf/2012.06244.pdf B. Wang, Q. Meng, W. Chen, T. Liu,  ''The Implicit Regularization for Adaptive Optimization Algorithms on Homogeneous Neural Networks''] </ref>


</references>
