Optimal Transport: Machine Learning
Introduction
Optimal transport (OT) concepts applied to machine learning are often referred to collectively as computational optimal transport. At its core, machine learning involves comparing complex objects, and measuring these similarities requires a metric, that is, a distance function.
Optimal transport provides a framework for comparing probability distributions while respecting the underlying structure and geometry of the problem. OT methods have received attention from researchers in fields as varied as economics, statistics, and quantum mechanics. They can be broadly divided into four categories: learning, domain adaptation, Bayesian inference, and hypothesis testing.
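For a concrete sense of what comparing distributions with OT looks like, here is a minimal sketch using the POT (Python Optimal Transport) library; the point clouds and uniform weights are made-up stand-ins, not data from any particular application.

import numpy as np
import ot

# Two empirical distributions: 50 source points and 60 shifted target points.
x = np.random.randn(50, 2)
y = np.random.randn(60, 2) + 1.0
a = np.full(50, 1 / 50)   # uniform weights on the source points
b = np.full(60, 1 / 60)   # uniform weights on the target points

M = ot.dist(x, y)         # pairwise squared-Euclidean cost matrix
cost = ot.emd2(a, b, M)   # exact optimal transport cost between the two
print(f"OT cost between the point clouds: {cost:.3f}")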
Learning Methods
Transport-based distances have been used in the following research contexts:
Graph-based semi-supervised learning: An effective approach to classification across a wide variety of domains, including image and text classification. Graph-based algorithms are particularly useful when much of the data is unlabeled.
Generative Adversarial Networks (GANs): Machine learning frameworks in which two neural networks compete in a game-theoretic sense. These techniques have also been used in semi-supervised learning.
Restricted Boltzmann Machines (RBMs): Probabilistic graphical models that can obtain hierarchical features at multiple levels. An RBM can learn a probability distribution over a given set of inputs; they were originally created under the name Harmonium by Paul Smolensky in 1986.
Entropy-regularized Wasserstein loss: This has been used for multi-label classification. It is characterized by a relaxation of the transport problem that handles unnormalized measures by replacing the equality constraints with soft penalties based on the KL divergence (see the sketch below).
Sliced-Wasserstein metric: A computationally cheaper surrogate for the Wasserstein distance, obtained by averaging one-dimensional Wasserstein distances over random projections of the data (also illustrated below).
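A sketch of both ideas using the POT library; the regularization strengths and data are illustrative only.

import numpy as np
import ot

x = np.random.randn(40, 2)
y = np.random.randn(40, 2) + 0.5
M = ot.dist(x, y)

# Unbalanced OT: the marginal equality constraints are relaxed into
# soft KL penalties, so the inputs need not carry equal total mass.
a = np.full(40, 1.0 / 40)
b = np.full(40, 1.5 / 40)   # deliberately unnormalized: total mass 1.5
plan = ot.unbalanced.sinkhorn_unbalanced(a, b, M, reg=0.1, reg_m=1.0)

# Sliced-Wasserstein: average 1-D Wasserstein distances over random
# projections instead of solving the full multi-dimensional problem.
sw = ot.sliced_wasserstein_distance(x, y, n_projections=100)
print(plan.shape, f"sliced-Wasserstein: {sw:.3f}")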
Wasserstein GAN (WGAN): Minimizes the Wasserstein distance between the distribution of the training data and the distribution of the generated samples. In certain cases this produces a more stable training process.
WGAN Pseudocode:
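The following is a minimal PyTorch sketch of the WGAN training loop (Algorithm 1 in Arjovsky et al., 2017). The tiny fully connected networks, the dimensions, and the stand-in data_loader are placeholder assumptions; n_critic, the clipping value, and the learning rate follow the paper's defaults.

import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784            # assumed sizes (e.g. flattened images)
n_critic, clip_value, lr = 5, 0.01, 5e-5  # defaults from the WGAN paper

# Placeholder architectures; real models would be task-specific.
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, data_dim))
critic = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(),
                       nn.Linear(256, 1))  # no sigmoid: the critic outputs a raw score

opt_g = torch.optim.RMSprop(generator.parameters(), lr=lr)
opt_c = torch.optim.RMSprop(critic.parameters(), lr=lr)

# Stand-in for a real dataset of training batches.
data_loader = [torch.randn(32, data_dim) for _ in range(10)]

for real in data_loader:
    # Critic step: maximize E[f(real)] - E[f(fake)] (minimize the negative);
    # reusing the same real batch across critic steps is a simplification.
    for _ in range(n_critic):
        z = torch.randn(real.size(0), latent_dim)
        loss_c = critic(generator(z).detach()).mean() - critic(real).mean()
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
        for p in critic.parameters():               # weight clipping crudely
            p.data.clamp_(-clip_value, clip_value)  # enforces the Lipschitz constraint
    # Generator step: minimize -E[f(fake)].
    z = torch.randn(real.size(0), latent_dim)
    loss_g = -critic(generator(z)).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()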
Domain adaptation: Here the goal is to learn about or extrapolate from one domain to another, often by finding domain-invariant representations (Courty et al., https://arxiv.org/pdf/1507.00504.pdf). The technique is often used to transfer information from labeled data to unlabeled data.
By finding the optimal transport plan connecting the probability distributions of the source and target domains, the labeled source samples can be mapped into the target domain, as sketched below. The transformation is non-linear and invertible, which allows a variety of machine learning methods to be applied to the transformed dataset. Regularized, unsupervised models have been used, as well as Joint Class Proportion and Optimal Transport (JCPOT) to address multi-source domain adaptation (Algorithm 1 below).
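A sketch of the single-source case, assuming the POT library's ot.da interface; the synthetic data and the regularization value are illustrative.

import numpy as np
import ot

rng = np.random.RandomState(0)
Xs = rng.randn(40, 2)                # labeled source samples
ys = rng.randint(0, 2, 40)
Xt = rng.randn(50, 2) + 1.0          # unlabeled, shifted target samples

# Entropy-regularized transport between source and target distributions.
mapper = ot.da.SinkhornTransport(reg_e=0.1)
mapper.fit(Xs=Xs, ys=ys, Xt=Xt)

# Barycentric mapping of the source samples into the target domain;
# a classifier trained on (Xs_mapped, ys) can then be applied to Xt.
Xs_mapped = mapper.transform(Xs=Xs)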
Algorithm 1: Joint Class Proportion and Optimal Transport (JCPOT)
Parameters: maxIter and a convergence threshold; the update steps repeat while the iteration count is below maxIter and the error remains above the threshold.
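A sketch of the multi-source case, assuming POT's ot.da.JCPOTTransport implementation of the algorithm above; the two synthetic source domains and the hyperparameter values are illustrative.

import numpy as np
import ot

rng = np.random.RandomState(0)
# Two labeled source domains with different class proportions.
Xs1, ys1 = rng.randn(30, 2), rng.randint(0, 2, 30)
Xs2, ys2 = rng.randn(40, 2) + 0.5, rng.randint(0, 2, 40)
Xt = rng.randn(50, 2) + 1.0          # unlabeled target domain

jcpot = ot.da.JCPOTTransport(reg_e=0.1, max_iter=1000, tol=1e-9)
jcpot.fit(Xs=[Xs1, Xs2], ys=[ys1, ys2], Xt=Xt)

print(jcpot.proportions_)                   # estimated target class proportions
Xs_mapped = jcpot.transform(Xs=[Xs1, Xs2])  # source samples mapped to the target domain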