2 layer neural networks as Wasserstein gradient flows





This article outlines the continuous formulation of shallow (two-layer) neural networks as Wasserstein-type gradient flows, following the lecture notes of Fernandez-Real and Figalli.<ref name="Figalli" />
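As a rough orientation for the sections below, the setting can be sketched as follows; the notation here is illustrative and is not taken verbatim from the referenced notes. A shallow network with many neurons is described by a probability measure <math>\mu</math> over single-neuron parameters <math>\theta</math>,

<math display="block">f_\mu(x) = \int \Phi(\theta, x)\, d\mu(\theta),</math>

and training by continuous-time gradient descent on a risk functional <math>F(\mu)</math> corresponds, in the many-neuron limit, to a Wasserstein-type gradient flow of <math>F</math>, written as the continuity equation

<math display="block">\partial_t \mu_t = \operatorname{div}\!\left( \mu_t\, \nabla_\theta \frac{\delta F}{\delta \mu}[\mu_t] \right).</math>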

==Motivation==

==Shallow Neural Networks==

==Continuous Formulation==

==Minimization Problem==

==Wasserstein Gradient Flow==

==Main Results==

==References==

1. [https://people.math.ethz.ch/~afigalli/lecture-notes-pdf/The-continuous-formulation-of-shallow-neural-networks-as-wasserstein-type-gradient-flows.pdf Xavier Fernandez-Real and Alessio Figalli, ''The continuous formulation of shallow neural networks as Wasserstein-type gradient flows'']