* Several backends for easy use with Pytorch, Jax, Tensorflow, Numpy and Cupy arrays.
### Implemented Features
POT provides the following generic OT solvers (links to examples):
* [OT Network Simplex solver](https://pythonot.github.io/auto_examples/plot_OT_1D.html) for the linear program / Earth Mover's Distance [1].
* [Conditional gradient](https://pythonot.github.io/auto_examples/plot_optim_OTreg.html) [6] and [Generalized conditional gradient](https://pythonot.github.io/auto_examples/plot_optim_OTreg.html) for regularized OT [7].
* Entropic regularization OT solver with Sinkhorn Knopp Algorithm [2].
* [Smooth optimal transport solvers](https://pythonot.github.io/auto_examples/plot_OT_1D_smooth.html) (dual and semi-dual) for KL and squared L2 regularizations [17].
* Weak OT solver between empirical distributions [39]
* Non regularized [Wasserstein barycenters [16]](https://pythonot.github.io/auto_examples/barycenters/plot_barycenter_lp_vs_entropic.html) with LP solver (only small scale).
* [Gromov-Wasserstein distances](https://pythonot.github.io/auto_examples/gromov/plot_gromov.html) and [GW barycenters](https://pythonot.github.io/auto_examples/gromov/plot_gromov_barycenter.html) (exact [13] and regularized [12, 51]), differentiable using gradients from Graph Dictionary Learning [38].
* [One dimensional Unbalanced OT](https://pythonot.github.io/auto_examples/unbalanced-partial/plot_UOT_1D.html) with KL relaxation and [barycenter](https://pythonot.github.io/auto_examples/unbalanced-partial/plot_UOT_barycenter_1D.html) [10, 25]. Also [exact unbalanced OT](https://pythonot.github.io/auto_examples/unbalanced-partial/plot_unbalanced_ot.html) with KL and quadratic regularization and the [regularization path of UOT](https://pythonot.github.io/auto_examples/unbalanced-partial/plot_regpath.html) [41].
* [Partial Wasserstein and Gromov-Wasserstein](https://pythonot.github.io/auto_examples/unbalanced-partial/plot_partial_wass_and_gromov.html) and [Partial Fused Gromov-Wasserstein](https://pythonot.github.io/auto_examples/gromov/plot_partial_fgw.html) (exact [29] and entropic [3] formulations).
* [Sliced Wasserstein](https://pythonot.github.io/auto_examples/sliced-wasserstein/plot_variance.html) [31, 32] and Max-sliced Wasserstein [35] that can be used for gradient flows [36].
* [Wasserstein distance on the circle](https://pythonot.github.io/auto_examples/plot_compute_wasserstein_circle.html) [44, 45].
* [Efficient Discrete Multi Marginal Optimal Transport Regularization](https://pythonot.github.io/auto_examples/others/plot_demd_gradient_minimize.html) [50].
* [Several backends](https://pythonot.github.io/quickstart.html#solving-ot-with-multiple-backends) for easy use of POT with [Pytorch](https://pytorch.org/)/[jax](https://github.com/google/jax)/[Numpy](https://numpy.org/)/[Cupy](https://cupy.dev/)/[Tensorflow](https://www.tensorflow.org/) arrays.
* [Smooth Strongly Convex Nearest Brenier Potentials](https://pythonot.github.io/auto_examples/others/plot_SSNB.html#sphx-glr-auto-examples-others-plot-ssnb-py) [58], with an extension to bounding potentials using [59].
* [Gaussian Mixture Model OT](https://pythonot.github.io/auto_examples/gaussian_gmm/plot_GMMOT_plan.html#sphx-glr-auto-examples-others-plot-gmmot-plan-py) [69].
* [Co-Optimal Transport](https://pythonot.github.io/auto_examples/others/plot_COOT.html) [49].
* [Optimal Transport Barycenters for Generic Costs](https://pythonot.github.io/auto_examples/barycenters/plot_free_support_barycenter_generic_cost.html) [77].
* [Barycenters between Gaussian Mixture Models](https://pythonot.github.io/auto_examples/barycenters/plot_gmm_barycenter.html) [69, 77].
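Several of the solvers above rely on entropic regularization [2]. As a rough, self-contained illustration of the underlying Sinkhorn-Knopp iteration (a toy NumPy sketch with made-up histograms, not POT's optimized implementation):

```python
import numpy as np

# Toy histograms (positive, summing to 1) and a toy ground cost matrix.
a = np.array([0.5, 0.5])
b = np.array([0.25, 0.75])
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])
reg = 0.1  # entropic regularization strength

# Sinkhorn-Knopp: alternately rescale the Gibbs kernel K = exp(-M/reg)
# so that the plan's marginals match a and b.
K = np.exp(-M / reg)
u = np.ones_like(a)
for _ in range(1000):
    v = b / (K.T @ u)
    u = a / (K @ v)

T = u[:, None] * K * v[None, :]  # entropic OT plan
# T.sum(axis=1) ≈ a and T.sum(axis=0) ≈ b after convergence
```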
POT provides the following Machine Learning related solvers:

* Compute Wasserstein distances

```python
# a,b are 1D histograms (sum to 1 and positive)
# M is the ground cost matrix
# With the unified API:
Wd = ot.solve(M, a, b).value # exact linear program
Wd_reg = ot.solve(M, a, b, reg=reg).value # entropic regularized OT
# With the old API:
Wd = ot.emd2(a, b, M) # exact linear program
Wd_reg = ot.sinkhorn2(a, b, M, reg) # entropic regularized OT
# if b is a matrix compute all distances to a and return a vector
```

* Compute OT matrix

```python
# a,b are 1D histograms (sum to 1 and positive)
# M is the ground cost matrix
# With the unified API:
T = ot.solve(M, a, b).plan # exact linear program
T_reg = ot.solve(M, a, b, reg=reg).plan # entropic regularized OT
# With the old API:
T = ot.emd(a, b, M) # exact linear program
T_reg = ot.sinkhorn(a, b, M, reg) # entropic regularized OT
```
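The exact solver above computes a linear program over coupling matrices. To make that connection concrete, here is a self-contained sketch using SciPy's generic LP interface with invented toy data (illustration only; it is far slower than POT's dedicated network simplex solver):

```python
import numpy as np
from scipy.optimize import linprog

a = np.array([0.5, 0.5])            # source histogram
b = np.array([0.25, 0.25, 0.5])     # target histogram
# Toy ground cost M[i, j] = |i - j| between bin locations.
M = np.abs(np.arange(2)[:, None] - np.arange(3)[None, :]).astype(float)

n, m = M.shape
# Equality constraints: row sums of the plan T equal a, column sums equal b.
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0   # row-sum constraint for source bin i
for j in range(m):
    A_eq[n + j, j::m] = 1.0            # column-sum constraint for target bin j

res = linprog(M.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
              bounds=(0, None))
T = res.x.reshape(n, m)   # optimal transport plan
Wd = (T * M).sum()        # exact Wasserstein cost
```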
* Compute OT on empirical distributions
```python
# X and Y are two 2D arrays of shape (n_samples, n_features)
# with squared euclidean metric
T = ot.solve_sample(X, Y).plan # exact linear program
T_reg = ot.solve_sample(X, Y, reg=reg).plan # entropic regularized OT
```

POT has benefited from the financing or manpower from the following partners:
[50] Liu, T., Puigcerver, J., & Blondel, M. (2023). [Sparsity-constrained optimal transport](https://openreview.net/forum?id=yHY9NbQJ5BP). Proceedings of the Eleventh International Conference on Learning Representations (ICLR).
[51] Xu, H., Luo, D., Zha, H., & Carin, L. (2019). [Gromov-wasserstein learning for graph matching and node embedding](http://proceedings.mlr.press/v97/xu19b.html). In International Conference on Machine Learning (ICML), 2019.
[52] Collas, A., Vayer, T., Flamary, R., & Breloy, A. (2023). [Entropic Wasserstein Component Analysis](https://arxiv.org/abs/2303.05119). ArXiv.
[74] Chewi, S., Maunu, T., Rigollet, P., & Stromme, A. J. (2020). [Gradient descent algorithms for Bures-Wasserstein barycenters](https://proceedings.mlr.press/v125/chewi20a.html). In Conference on Learning Theory (pp. 1276-1304). PMLR.
[75] Altschuler, J., Chewi, S., Gerber, P. R., & Stromme, A. (2021). [Averaging on the Bures-Wasserstein manifold: dimension-free convergence of gradient descent](https://papers.neurips.cc/paper_files/paper/2021/hash/b9acb4ae6121c941324b2b1d3fac5c30-Abstract.html). Advances in Neural Information Processing Systems, 34, 22132-22145.
[76] Chapel, L., Tavenard, R. (2025). [One for all and all for one: Efficient computation of partial Wasserstein distances on the line](https://iclr.cc/virtual/2025/poster/28547). In International Conference on Learning Representations.
[77] Tanguy, E., Delon, J., & Gozlan, N. (2024). [Computing Barycentres of Measures for Generic Transport Costs](https://arxiv.org/abs/2501.04016). arXiv preprint arXiv:2501.04016.
[78] Martin, R. D., Medri, I., Bai, Y., Liu, X., Yan, K., Rohde, G. K., & Kolouri, S. (2024). [LCOT: Linear Circular Optimal Transport](https://openreview.net/forum?id=49z97Y9lMq). International Conference on Learning Representations.
[79] Liu, X., Bai, Y., Martín, R. D., Shi, K., Shahbazi, A., Landman, B. A., Chang, C., & Kolouri, S. (2025). [Linear Spherical Sliced Optimal Transport: A Fast Metric for Comparing Spherical Data](https://openreview.net/forum?id=fgUFZAxywx). International Conference on Learning Representations.
[80] Altschuler, J., Bach, F., Rudi, A., Niles-Weed, J., [Massively scalable Sinkhorn distances via the Nyström method](https://proceedings.neurips.cc/paper_files/paper/2019/file/f55cadb97eaff2ba1980e001b0bd9842-Paper.pdf), Advances in Neural Information Processing Systems, 2019.
[81] Xu, H., Luo, D., & Carin, L. (2019). [Scalable Gromov-Wasserstein learning for graph partitioning and matching](https://proceedings.neurips.cc/paper/2019/hash/6e62a992c676f611616097dbea8ea030-Abstract.html). Neural Information Processing Systems (NeurIPS).