Papers:

Here are some papers you can choose from. You are welcome to pick any other recent paper on the theory of optimization, but please check with me first to make sure it is suitable for this class. If you were enrolled in CSE291 in Fall 2015 and presented a paper on non-convex optimization, I expect you to choose a different paper this time. You can also present more than one paper, so long as they are on the same topic and your presentation forms a coherent story.

Alternating Minimization

  • Praneeth Netrapalli, Prateek Jain, Sujay Sanghavi. Low-rank Matrix Completion using Alternating Minimization. STOC 2013.

  • Praneeth Netrapalli, U. N. Niranjan, Sujay Sanghavi, Animashree Anandkumar, Prateek Jain. Non-convex Robust PCA. NIPS 2014.

Tensors

  • Yining Wang and Animashree Anandkumar. Online and Differentially Private Tensor Decomposition. NIPS 2016.

  • Kejun Huang, Nicholas Sidiropoulos, Athanasios Liavas. A Flexible and Efficient Algorithmic Framework for Constrained Matrix and Tensor Factorization. arXiv:1506.04209.

  • Mikhail Belkin, Luis Rademacher and James Voss. Basis Learning as an Algorithmic Primitive. COLT 2016.

  • Boaz Barak and Ankur Moitra. Noisy Tensor Completion via the Sum-of-Squares Hierarchy. COLT 2016.

Stochastic Methods

  • Prateek Jain, Chi Jin, Sham Kakade, Praneeth Netrapalli and Aaron Sidford. Matching Matrix Bernstein with Little Memory: Near-Optimal Finite Sample Guarantees for Oja's Algorithm. COLT 2016.

  • Maria-Florina Balcan, Simon Du, Yining Wang and Adams Wei Yu. An Improved Gap-Dependency Analysis of the Noisy Power Method. COLT 2016.

  • Sashank Reddi, Suvrit Sra, Barnabas Poczos, Alex Smola. Fast Stochastic Methods for Nonsmooth Nonconvex Optimization. NIPS 2016.

  • Sashank Reddi, Suvrit Sra, Barnabas Poczos, Alex Smola. Stochastic Frank-Wolfe Methods for Nonconvex Optimization. Allerton 2016.

  • Sashank Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, Alexander J. Smola. Stochastic Variance Reduction for Nonconvex Optimization. ICML 2016.

  • Zeyuan Allen-Zhu. Katyusha: The First Direct Acceleration of Stochastic Gradient Methods. arXiv preprint.

Miscellaneous Non-Convex Optimization

  • Jason Lee, Max Simchowitz, Benjamin Recht and Michael Jordan. Gradient Descent Only Converges to Minimizers. COLT 2016.

  • Animashree Anandkumar and Rong Ge. Efficient Approaches for Escaping Higher Order Saddle Points in Non-Convex Optimization. COLT 2016.

  • Francis Bach and Vianney Perchet. Highly-Smooth Zero-th Order Online Optimization. COLT 2016.

  • Hongyi Zhang and Suvrit Sra. First-order Methods for Geodesically Convex Optimization. COLT 2016.

  • Zeyuan Allen-Zhu, Elad Hazan. Optimal Black-Box Reductions Between Optimization Objectives. NIPS 2016.

  • Blake Woodworth and Nati Srebro. Tight Complexity Bounds for Optimizing Composite Objectives. arXiv preprint.

  • Elad Hazan, Kfir Levy and Shai Shalev-Shwartz. On Graduated Optimization for Stochastic Non-Convex Problems. ICML 2016.