Sinkhorn's Algorithm
Revision as of 10:43, 31 May 2020
Sinkhorn's Algorithm is an iterative numerical method used to obtain an optimal transport plan for the Kantorovich problem with entropic regularization in the case of finitely supported positive measures.
Continuous Problem Formulation
Entropic regularization modifies the Kantorovich problem by adding a Kullback-Leibler divergence term to the optimization goal. Specifically, the general form of the problem is now to determine
:<math> L^\epsilon_c(\alpha,\beta) = \inf_{\pi\in\Gamma(\alpha,\beta)} \int c(x,y) \, d\pi(x,y) + \epsilon \operatorname{KL}(\pi \mid \alpha\otimes\beta) </math>
where <math> \alpha\otimes\beta </math> is the product measure of <math> \alpha </math> and <math> \beta </math>, and where
:<math> \operatorname{KL}(\pi \mid \xi) = \int \log\left(\frac{d\pi}{d\xi}\right) d\pi + \int d\xi - \int d\pi </math>
whenever the Radon-Nikodym derivative <math> \frac{d\pi}{d\xi} </math> exists (i.e. when <math> \pi </math> is absolutely continuous w.r.t. <math> \xi </math>) and <math> +\infty </math> otherwise. This form of the KL divergence is applicable even when <math> \pi </math> and <math> \xi </math> differ in total mass, and it reduces to the standard definition whenever <math> \pi </math> and <math> \xi </math> have equal total mass. From this definition it immediately follows that an optimal coupling <math> \pi </math> must be absolutely continuous w.r.t. <math> \alpha\otimes\beta </math>. As a result, the optimal plan is in some sense less singular and hence "smoothed out."
Discrete Problem Formulation
To apply Sinkhorn's algorithm to approximate <math> L^\epsilon_c(\alpha,\beta)</math>, it is necessary to assume finite support, so let <math> \alpha = \textstyle\sum_{i=1}^n a_i \delta_{x_i} </math> and <math> \beta = \textstyle\sum_{j=1}^m b_j \delta_{y_j} </math>, and denote the corresponding vectors of weights by <math> \mathbf{a}\in\mathbb R_+^n </math> and <math> \mathbf{b}\in\mathbb R_+^m </math>. Additionally, let <math>C_{ij} = c(x_i, y_j) </math> and denote the discrete version of <math> \Gamma(\alpha,\beta) </math> by <math> U(\mathbf{a},\mathbf{b})=\{ P\in\mathbb R_+^{n\times m} \mid \textstyle\sum_j P_{ij}=a_i, \textstyle\sum_i P_{ij}=b_j \} </math>. This lets us write the entropic Kantorovich problem as
:<math> L^\epsilon_c(\mathbf{a},\mathbf{b}) = \inf_{P\in U(\mathbf{a},\mathbf{b})} \sum_{i,j} C_{ij} P_{ij} + \epsilon \operatorname{KL}(P\mid \mathbf{a}\mathbf{b}^T) </math>
where
:<math> \operatorname{KL}(P\mid \mathbf{a}\mathbf{b}^T) = \sum_{i,j} P_{ij} \log\left(\frac{P_{ij}}{a_i b_j}\right) + a_i b_j - P_{ij} </math>
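As a quick numerical illustration, the discrete objective above can be evaluated directly for any candidate plan. The following sketch (the function name `entropic_objective` is our own, not from the article) computes the cost term plus the generalized KL penalty:

```python
import numpy as np

def entropic_objective(P, C, a, b, eps):
    """Evaluate sum_ij C_ij P_ij + eps * KL(P | a b^T) for a candidate plan P."""
    ab = np.outer(a, b)  # the independent coupling a b^T
    # Generalized KL divergence; 0 * log(0) is treated as 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        logterm = np.where(P > 0, P * np.log(P / ab), 0.0)
    kl = np.sum(logterm + ab - P)
    return np.sum(C * P) + eps * kl
```

Note that the independent coupling <math> P = \mathbf{a}\mathbf{b}^T </math> is always feasible when the total masses agree, and its KL penalty is zero, which gives a convenient sanity check.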
Characterizing the Solution
The solution to the discrete problem formulation is unique and has a special form.
- Theorem
- The solution to the discrete regularized Kantorovich problem is unique and has the form <math> P_{ij} = u_i K_{ij} v_j </math> for some <math> u \in \mathbb{R}_+^n, v \in \mathbb{R}_+^m </math>, where <math> K_{ij} = e^{-C_{ij}/\epsilon} </math>. Moreover, <math> u </math> and <math> v </math> are unique up to multiplication and division by some scaling factor.
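The non-uniqueness statement can be sanity-checked numerically: rescaling u by a constant and v by its reciprocal leaves the plan diag(u) K diag(v) unchanged. A minimal sketch with assumed example values:

```python
import numpy as np

# Small assumed example: the scaling form P = diag(u) K diag(v) is invariant
# under u -> t*u, v -> v/t, which is exactly the non-uniqueness in the theorem.
C = np.array([[0.0, 1.0], [1.0, 0.0]])
eps = 0.5
K = np.exp(-C / eps)                  # Gibbs kernel K_ij = exp(-C_ij / eps)
u = np.array([0.3, 0.7])
v = np.array([1.2, 0.4])
P1 = np.diag(u) @ K @ np.diag(v)
t = 5.0
P2 = np.diag(t * u) @ K @ np.diag(v / t)
print(np.allclose(P1, P2))            # the plan itself is unchanged
```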
Sinkhorn's Algorithm
Sinkhorn's algorithm takes advantage of the aforementioned characterization result to iteratively approximate the scaling factors <math> u </math> and <math> v </math>. The procedure is simple and only involves matrix-vector multiplication and entrywise division as follows:
:<math> u^{(k+1)} = \frac{\mathbf{a}}{K v^{(k)}}, \qquad v^{(k+1)} = \frac{\mathbf{b}}{K^T u^{(k+1)}} </math>
where the divisions are entrywise and <math> v^{(0)} </math> may be initialized to any positive vector, e.g. the all-ones vector.
Once a sufficient number of iterations has been taken, we let <math> P_{ij} = u_i K_{ij} v_j </math>, i.e. <math> P = \operatorname{diag}(u) K \operatorname{diag}(v) </math>, be our approximation of the optimal plan.
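The iteration above can be sketched in a few lines of NumPy. This is a minimal illustration, not the article's own code; the iteration count is an arbitrary choice, and in practice one would stop once the marginal constraints are met to some tolerance:

```python
import numpy as np

def sinkhorn(a, b, C, eps, n_iters=1000):
    """Approximate the entropic optimal transport plan between weights a and b.

    a: (n,) and b: (m,) positive weight vectors with equal total mass,
    C: (n, m) cost matrix, eps: regularization strength.
    """
    K = np.exp(-C / eps)       # Gibbs kernel K_ij = exp(-C_ij / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)        # enforce row marginals:    sum_j P_ij = a_i
        v = b / (K.T @ u)      # enforce column marginals: sum_i P_ij = b_j
    return u[:, None] * K * v[None, :]   # P = diag(u) K diag(v)
```

Each iteration alternately matches one set of marginals exactly, so after the final update of <math> v </math> the column sums of the returned plan equal <math> \mathbf{b} </math>, while the row sums converge to <math> \mathbf{a} </math>.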