Overall Objectives
Research Program
Application Domains
New Software and Platforms
New Results
- Fast Gradient Methods for Symmetric Nonnegative Matrix Factorization
- Naive Feature Selection: Sparsity in Naive Bayes
- Regularity as Regularization: Smooth and Strongly Convex Brenier Potentials in Optimal Transport
- Ranking and synchronization from pairwise measurements via SVD
- Polyak Steps for Adaptive Fast Gradient Methods
- An Accelerated Decentralized Stochastic Proximal Algorithm for Finite Sums
- On Lazy Training in Differentiable Programming
- Implicit Regularization of Discrete Gradient Dynamics in Linear Neural Networks
- Efficient Primal-Dual Algorithms for Large-Scale Multiclass Classification
- Fast and Faster Convergence of SGD for Over-Parameterized Models (and an Accelerated Perceptron)
- Globally Convergent Newton Methods for Ill-conditioned Generalized Self-concordant Losses
- Efficient online learning with kernels for adversarial large scale problems
- Affine Invariant Covariance Estimation for Heavy-Tailed Distributions
- Beyond Least-Squares: Fast Rates for Regularized Empirical Risk Minimization through Self-Concordance
- Statistical Estimation of the Poincaré constant and Application to Sampling Multimodal Distributions
- Stochastic first-order methods: non-asymptotic and computer-aided analyses via potential functions
- Optimal Complexity and Certification of Bregman First-Order Methods
- Efficient First-order Methods for Convex Minimization: a Constructive Approach
Bilateral Contracts and Grants with Industry
Partnerships and Cooperations
Bibliography