Overall Objectives
Research Program
Application Domains
New Software and Platforms
New Results
- On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport
- Sharp Analysis of Learning with Discrete Losses
- Gossip of Statistical Observations using Orthogonal Polynomials
- Marginal Weighted Maximum Log-likelihood for Efficient Learning of Perturb-and-Map models
- Slice inverse regression with score functions
- Constant Step Size Stochastic Gradient Descent for Probabilistic Modeling
- Nonlinear Acceleration of Momentum and Primal-Dual Algorithms
- Nonlinear Acceleration of Deep Neural Networks
- Nonlinear Acceleration of CNNs
- Robust Seriation and Applications To Cancer Genomics
- Reconstructing Latent Orderings by Spectral Clustering
- Lyapunov Functions for First-Order Methods: Tight Automated Convergence Guarantees
- Efficient First-order Methods for Convex Minimization: a Constructive Approach
- Operator Splitting Performance Estimation: Tight contraction factors and optimal parameter selection
- Finite-sample Analysis of M-estimators using Self-concordance
- Uniform regret bounds over ℝ^d for the sequential linear regression problem with the square loss
- Efficient online algorithms for fast-rate regret bounds under sparsity
- Exponential convergence of testing error for stochastic gradient methods
- Statistical Optimality of Stochastic Gradient Descent on Hard Learning Problems through Multiple Passes
- Central Limit Theorem for stationary Fleming–Viot particle systems in finite spaces
- SeaRNN: Improved RNN training through Global-Local Losses
- Improved asynchronous parallel optimization analysis for stochastic incremental methods
- Asynchronous optimisation for Machine Learning
- -Regularized Dictionary Learning
- Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
- Relating Leverage Scores and Density using Regularized Christoffel Functions
- Averaging Stochastic Gradient Descent on Riemannian Manifolds
- Localized Structured Prediction
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Differential Properties of Sinkhorn Approximation for Learning with Wasserstein Distance
- Learning with SGD and Random Features
- Manifold Structured Prediction
- On Fast Leverage Score Sampling and Optimal Learning
- Accelerated Decentralized Optimization with Local Updates for Smooth and Strongly Convex Objectives
Bilateral Contracts and Grants with Industry
Partnerships and Cooperations
Bibliography