Numerical Methods and Optimization

1. Dual Methods

  1. Dual Averaging Methods for Regularized Stochastic Learning and Online Optimization
  2. Primal-dual Subgradient Methods for Convex Problems
  3. Stochastic Optimization and Sparse Statistical Recovery: Optimal Algorithms for High Dimensions
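
A common thread in these papers is the dual averaging update, which aggregates all past (sub)gradients instead of only the most recent one. Below is a minimal numpy sketch of the l1-regularized variant (RDA) from the first paper above, using the closed-form soft-threshold solution of its per-round minimization; the name `l1_rda`, the oracle `grad_fn`, and the constants are illustrative, not from any reference implementation.

```python
import numpy as np

def l1_rda(grad_fn, dim, T, lam=0.1, gamma=1.0):
    """Sketch of l1-regularized dual averaging (RDA).

    grad_fn(x, t) returns a stochastic subgradient at x on round t.
    Each round solves
        x_{t+1} = argmin_x <g_bar, x> + lam*||x||_1 + (gamma/sqrt(t)) * ||x||^2 / 2,
    where g_bar is the running average of all past subgradients; the
    minimizer is an entrywise soft threshold.
    """
    x = np.zeros(dim)
    g_bar = np.zeros(dim)
    for t in range(1, T + 1):
        g = grad_fn(x, t)
        g_bar += (g - g_bar) / t                       # running average of subgradients
        shrunk = np.maximum(np.abs(g_bar) - lam, 0.0)  # soft threshold zeroes small coordinates
        x = -(np.sqrt(t) / gamma) * np.sign(g_bar) * shrunk
    return x
```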

2. Generalization

  1. Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability

3. Matrix Completion

  1. Inference and Uncertainty Quantification for Noisy Matrix Completion
  2. Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent
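
Both papers estimate a low-rank matrix from a few observed entries. A minimal sketch of the nonconvex route taken by the online-SGD paper above: factor M as U V^T and run SGD on the squared error of each observed entry. The function `mc_sgd` and its hyperparameters are illustrative and untuned.

```python
import numpy as np

def mc_sgd(entries, n_rows, n_cols, rank=5, eta=0.05, epochs=20, seed=0):
    """Sketch of nonconvex SGD for matrix completion.

    entries: list of (i, j, value) observations of an unknown matrix M.
    Fits M ~ U @ V.T by stepping on the squared error of each entry.
    """
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_rows, rank))
    V = 0.1 * rng.standard_normal((n_cols, rank))
    for _ in range(epochs):
        for i, j, v in entries:
            err = U[i] @ V[j] - v   # residual on this observed entry
            gU = err * V[j]         # gradient w.r.t. row U[i]
            gV = err * U[i]         # gradient w.r.t. row V[j]
            U[i] -= eta * gU
            V[j] -= eta * gV
    return U, V
```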

4. Numerical Computation

  1. The Evaluation of Integrals of the Form ∫ f(t) exp(−t²) dt: Application to Logistic-Normal Models
  2. Faster Eigenvector Computation via Shift-and-Invert Preconditioning
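
The Crouch-Spiegelman integral ∫ f(t) exp(−t²) dt arises when a logistic link is averaged over a normal random effect. The paper develops its own approximation; as a baseline for comparison, the same integral can be attacked with Gauss-Hermite quadrature, whose weight function is exactly exp(−t²). A sketch (this is the standard quadrature, not the paper's method):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def integrate_exp_sq(f, deg=50):
    """Approximate the integral of f(t)*exp(-t**2) over the real line
    with Gauss-Hermite quadrature, whose weight is exactly exp(-t**2)."""
    t, w = hermgauss(deg)
    return np.sum(w * f(t))

# Logistic-normal example: E[sigmoid(Z)] for Z ~ N(mu, sigma^2).
# The substitution z = mu + sqrt(2)*sigma*t turns the Gaussian density
# into exp(-t^2)/sqrt(pi), matching the quadrature weight.
mu, sigma = 0.5, 1.2
val = integrate_exp_sq(
    lambda t: 1.0 / (1.0 + np.exp(-(mu + np.sqrt(2.0) * sigma * t)))
) / np.sqrt(np.pi)
```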

5. Optimization

  1. A Differential Equation for Modeling Nesterov’s Accelerated Gradient Method: Theory and Insights
  2. SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient
  3. Accelerating Stochastic Gradient Descent using Predictive Variance Reduction
  4. Semi-Stochastic Gradient Descent Methods
  5. Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets
  6. SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path-Integrated Differential Estimator
  7. How to Escape Saddle Points Efficiently
  8. Adaptive Sampling Strategies for Stochastic Optimization
  9. Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion, and Blind Deconvolution
  10. Convergence of a Stochastic Gradient Method with Momentum for Non-Smooth Non-Convex Optimization
  11. Stochastic Subgradient Method Converges at the Rate O(k^{−1/4}) on Weakly Convex Functions
  12. Online Non-Convex Learning: Following the Perturbed Leader is Optimal
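
Several of the papers above (SVRG, SARAH, S2GD, SPIDER) are built on variance-reduced stochastic gradients. Below is a minimal sketch of SVRG, the method of "Accelerating Stochastic Gradient Descent using Predictive Variance Reduction"; the wrapper `svrg`, the oracle `grad_i`, and the constants are illustrative.

```python
import numpy as np

def svrg(grad_i, x0, n, eta=0.1, epochs=10, m=None, seed=0):
    """Sketch of SVRG for min_x (1/n) * sum_i f_i(x).

    grad_i(x, i) returns the gradient of component f_i at x.
    Each outer epoch recomputes the full gradient at a snapshot
    x_tilde, then takes m corrected stochastic steps.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    m = 2 * n if m is None else m
    for _ in range(epochs):
        x_tilde = x.copy()
        full_grad = np.mean([grad_i(x_tilde, i) for i in range(n)], axis=0)
        for _ in range(m):
            i = rng.integers(n)
            # control-variate gradient: unbiased, low variance near x_tilde
            g = grad_i(x, i) - grad_i(x_tilde, i) + full_grad
            x -= eta * g
        # the paper's Option I takes the last iterate as the next snapshot
    return x
```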

6. Phase Retrieval/Single Index Model

  1. Online Stochastic Gradient Descent with Arbitrary Initialization Solves Non-smooth, Non-convex Phase Retrieval
  2. Phase Retrieval via Randomized Kaczmarz: Theoretical Guarantees
  3. Phase Retrieval via Wirtinger Flow: Theory and Algorithms
  4. Efficient Learning of Generalized Linear and Single Index Models with Isotonic Regression
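
The first three papers recover x from magnitude measurements b_i = |<a_i, x>|. The randomized Kaczmarz iteration is especially compact: guess the missing sign from the current iterate, then make the usual Kaczmarz projection onto the implied hyperplane. A minimal real-valued sketch, with illustrative names and sampling scheme:

```python
import numpy as np

def kaczmarz_phase_retrieval(A, b, x0, iters=5000, seed=0):
    """Sketch of randomized Kaczmarz for real phase retrieval.

    A: (n, d) measurement matrix; b: magnitudes |A @ x_star|.
    Each step samples a row, borrows the sign of <a_i, x> from the
    current iterate, and projects onto the resulting hyperplane.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    row_norms = np.sum(A * A, axis=1)
    probs = row_norms / row_norms.sum()    # rows sampled prop. to ||a_i||^2
    for _ in range(iters):
        i = rng.choice(len(b), p=probs)
        a = A[i]
        r = np.sign(a @ x) * b[i] - a @ x  # residual with the guessed sign
        x += (r / row_norms[i]) * a
    return x
```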

7. Principal Component Analysis

  1. Fast and Simple PCA via Convex Optimization
  2. Gradient Descent Meets Shift-and-Invert Preconditioning for Eigenvector Computation
  3. Faster Eigenvector Computation via Shift-and-Invert Preconditioning
  4. LazySVD: Even Faster SVD Decomposition Yet Without Agonizing Pain
  5. Fast Stochastic Algorithms for SVD and PCA: Convergence Properties and Convexity
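
The shift-and-invert papers in this list share one idea: power iteration on B = (σI − A)^{−1} converges far faster than on A when the shift σ sits just above the top eigenvalue, because inversion inflates the relative eigengap. A dense-solver sketch is below; the cited papers earn their speedups by replacing the exact solve with fast (stochastic) linear-system solvers.

```python
import numpy as np

def shift_invert_top_eigvec(A, sigma, iters=50, seed=0):
    """Sketch of shift-and-invert power iteration for the top eigenvector.

    Requires sigma > lambda_1(A) so that sigma*I - A is positive definite.
    Each step applies (sigma*I - A)^{-1} once and renormalizes.
    """
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    M = sigma * np.eye(d) - A
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = np.linalg.solve(M, v)   # one power step on (sigma*I - A)^{-1}
        v /= np.linalg.norm(v)
    return v
```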

8. Second-Order Methods

  1. First-Order Methods for Nonconvex Quadratic Minimization
  2. Newton-type methods for non-convex optimization under inexact Hessian information
  3. Second-Order Stochastic Optimization for Machine Learning in Linear Time
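
These methods exploit curvature while touching the Hessian only through Hessian-vector products, each costing roughly one gradient evaluation. A minimal inexact-Newton sketch, solving H p = −g by conjugate gradient in the positive-definite case (the nonconvex papers above add damping or trust regions); `newton_cg_step` and its tolerances are illustrative.

```python
import numpy as np

def newton_cg_step(grad, hvp, x, cg_iters=20, tol=1e-8):
    """Sketch of one inexact Newton step via conjugate gradient.

    grad(x) returns the gradient g; hvp(x, v) returns H(x) @ v.
    Solves H p = -g approximately, assuming H is positive definite.
    """
    g = grad(x)
    p = np.zeros_like(g)
    r = -g                       # residual of H p = -g at p = 0
    d = r.copy()
    rs = r @ r
    for _ in range(cg_iters):
        Hd = hvp(x, d)
        alpha = rs / (d @ Hd)
        p += alpha * d
        r -= alpha * Hd
        rs_new = r @ r
        if rs_new < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return x + p                 # full step; in practice add a line search
```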

9. SGD Analysis

  1. Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning
  2. Optimal Rates for Zero-order Convex Optimization: the Power of Two Function Evaluations
  3. Train Faster, Generalize Better: Stability of Stochastic Gradient Descent
  4. Stability Analysis of SGD Through the Normalized Loss Function
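
The two-function-evaluation estimator from "Optimal Rates for Zero-order Convex Optimization" is concrete enough to state in a few lines: probe f along a random direction and difference the two values. A minimal sketch, with illustrative names and smoothing parameter:

```python
import numpy as np

def two_point_grad(f, x, delta=1e-4, rng=None):
    """Sketch of a two-evaluation zeroth-order gradient estimate.

    Returns d * (f(x + delta*u) - f(x - delta*u)) / (2*delta) * u for a
    uniform direction u; since E[u u^T] = I/d on the unit sphere, this
    approximates grad f(x) in expectation as delta -> 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)   # uniform direction on the unit sphere
    return d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
```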

10. Stochastic Gradient Langevin Dynamics

  1. Non-convex Learning via Stochastic Gradient Langevin Dynamics: A Nonasymptotic Analysis
  2. Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability
  3. A Hitting Time Analysis of Stochastic Gradient Langevin Dynamics
  4. On Stationary-Point Hitting Time and Ergodicity of Stochastic Gradient Langevin Dynamics
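
All four papers analyze the same iteration: a stochastic gradient step plus injected Gaussian noise, which tracks a Langevin diffusion whose stationary density is proportional to exp(−βf). A minimal sketch, with illustrative names and constants:

```python
import numpy as np

def sgld(grad_est, x0, eta=1e-3, beta=1.0, iters=10000, seed=0):
    """Sketch of stochastic gradient Langevin dynamics.

    grad_est(x) is a stochastic gradient of the objective f; beta is the
    inverse temperature. The noise scale sqrt(2*eta/beta) matches the
    Euler discretization of the Langevin diffusion.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        noise = rng.standard_normal(x.shape)
        x = x - eta * grad_est(x) + np.sqrt(2.0 * eta / beta) * noise
    return x
```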

11. Other SGD Variants

  1. SignSGD: Compressed Optimisation for Non-Convex Problems
  2. SignSGD via Zeroth-Order Oracle
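
signSGD transmits only the elementwise sign of the stochastic gradient, one bit per coordinate, which is what makes it attractive in communication-limited and zeroth-order settings. A minimal single-node sketch (the first paper's distributed variant aggregates signs by majority vote):

```python
import numpy as np

def signsgd(grad_est, x0, eta=1e-3, iters=1000):
    """Sketch of signSGD: step along the elementwise sign of a
    stochastic gradient estimate grad_est(x)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x -= eta * np.sign(grad_est(x))
    return x
```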
