mtelgars at cs dot ucsd dot edu
2014 - (?). Postdoc in EECS, University of Michigan.
Host: Jake Abernethy.
2013 - 2014. Consultant at MSR NYC. Host: John Langford.
2013 - 2014. Postdoc in Statistics, Rutgers University.
Host: Tong Zhang.
2007 - 2013. PhD in Computer Science, UCSD. Advisor: Sanjoy Dasgupta.
2004 - 2007. BS in Computer Science & Discrete Math, CMU.
2001 - 2003. Diploma in Violin Performance, Juilliard.
Scalable Nonlinear Learning with Adaptive Polynomial Expansions.
(With Alekh Agarwal, Alina Beygelzimer, Daniel Hsu, and John Langford.)
- NIPS 2014.
- SGD variant which greedily grows monomial features of increasing degree; a rough sketch of the idea follows.
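- Purely illustrative, and not the paper's algorithm or code: a toy version of the idea,
with SGD on squared loss and a crude expansion heuristic (multiply the largest-weight
monomials by each base feature):

    import numpy as np

    def monomial_value(x, idx):
        # A monomial is a tuple of base-feature indices; its value is the product.
        return np.prod(x[list(idx)])

    def adaptive_poly_sgd(X, y, rounds=5, epochs=1, lr=0.1, grow=2):
        # Start from the degree-1 features; after each SGD pass, multiply the
        # highest-weight monomials by every base feature to create new features.
        d = X.shape[1]
        monomials = [(j,) for j in range(d)]
        w = np.zeros(len(monomials))
        for _ in range(rounds):
            for _ in range(epochs):
                for x, target in zip(X, y):
                    phi = np.array([monomial_value(x, m) for m in monomials])
                    w -= lr * (phi @ w - target) * phi
            top = np.argsort(-np.abs(w))[:grow]
            new = {tuple(sorted(monomials[i] + (j,))) for i in top for j in range(d)}
            new = [m for m in new if m not in set(monomials)]
            monomials += new
            w = np.concatenate([w, np.zeros(len(new))])
        return monomials, w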
Tensor decompositions for learning latent variable models.
(With Anima Anandkumar, Rong Ge, Daniel Hsu, and Sham M. Kakade.)
- JMLR 15:2773-2832, 2014.
- Analysis of tensor power iteration for tensors which have an orthogonal
decomposition (plus some slight noise), and examples of latent variable
models which can be written in this way (e.g., LDA).
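- A minimal sketch (mine, not the paper's) of the tensor power iteration being analyzed,
for a symmetric third-order tensor stored as a NumPy array; random restarts and the
noise analysis are omitted:

    import numpy as np

    def tensor_apply(T, v):
        # Contract a symmetric third-order tensor against v twice: T(I, v, v).
        return np.einsum('ijk,j,k->i', T, v, v)

    def tensor_power_iteration(T, n_iter=100, seed=0):
        # One run of the power iteration v <- T(I, v, v) / ||T(I, v, v)||,
        # returning an approximate eigenvalue/eigenvector pair of T.
        rng = np.random.default_rng(seed)
        v = rng.normal(size=T.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(n_iter):
            w = tensor_apply(T, v)
            v = w / np.linalg.norm(w)
        return tensor_apply(T, v) @ v, v  # the eigenvalue is T(v, v, v)

    def deflate(T, lam, v):
        # Subtract the recovered rank-one term lam * (v outer v outer v),
        # then rerun the iteration to recover further components.
        return T - lam * np.einsum('i,j,k->ijk', v, v, v)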
Moment-based Uniform Deviation Bounds for \(k\)-means and Friends.
(With Sanjoy Dasgupta.)
- NIPS 2013.
- Generalization bounds for the \(k\)-means cost and the
Gaussian mixture log-likelihood over unbounded parameter sets,
for distributions with a few bounded moments
(but no further boundedness assumptions).
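- For reference (notation mine, not necessarily the paper's): the \(k\)-means cost of
centers \(P = \{p_1, \ldots, p_k\}\) under a distribution \(\nu\) is
\[ \phi_\nu(P) = \int \min_{p \in P} \|x - p\|^2 \, d\nu(x), \]
and the bounds control the deviation between the empirical and population versions
of this cost, uniformly over \(P\).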
Boosting with the Logistic Loss is Consistent.
- COLT 2013.
- Optimization, generalization, and consistency guarantees for AdaBoost with
logistic and similar losses.
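- For reference, the logistic loss of an unnormalized margin \(y f(x)\) is
\(\ell(y f(x)) = \ln(1 + e^{-y f(x)})\).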
Margins, Shrinkage, and Boosting.
- ICML 2013.
- AdaBoost, with a variety of losses, attains optimal margins
merely by multiplying the step size by a small constant.
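- Concretely (standard notation, not copied from the paper), shrinkage replaces the
update \(f_t = f_{t-1} + \alpha_t h_t\) with
\[ f_t = f_{t-1} + \nu \alpha_t h_t, \qquad \nu \in (0, 1], \]
where \(h_t\) is the chosen weak hypothesis and \(\alpha_t\) the usual step.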
Agglomerative Bregman Clustering. (With Sanjoy Dasgupta.)
- ICML 2012.
- Provides the natural agglomerative algorithm,
with attention to: handling degenerate clusters via smoothing,
Bregman divergences for nondifferentiable convex functions,
and exponential families without minimality assumptions.
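- For reference (standard notation), the Bregman divergence generated by a convex \(f\) is
\[ D_f(x, y) = f(x) - f(y) - \langle \nabla f(y), x - y \rangle, \]
with \(\nabla f(y)\) replaced by a choice of subgradient in the nondifferentiable case.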
A Primal-Dual Convergence Analysis of Boosting.
- JMLR 13:561-606, 2012.
- This is the extended version of the NIPS paper "The Fast Convergence of Boosting".
Steepest Descent Analysis for Unregularized Linear
Prediction with Strictly Convex Penalties.
- NIPS Optimization Workshop 2011.
- Adaptation of some of the boosting techniques to other optimization problems,
for instance gradient descent on positive semi-definite quadratics.
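- A small illustration (mine, not from the note): gradient descent on
\(f(x) = \tfrac12 x^\top A x - b^\top x\) with \(A\) positive semi-definite:

    import numpy as np

    def gd_psd_quadratic(A, b, steps=200, lr=None):
        # Gradient descent on f(x) = 0.5 x^T A x - b^T x with A positive
        # semi-definite; step size 1 / lambda_max(A) guarantees monotone decrease.
        if lr is None:
            lr = 1.0 / np.linalg.eigvalsh(A).max()
        x = np.zeros_like(b)
        for _ in range(steps):
            x = x - lr * (A @ x - b)
        return x

    # Example: a rank-deficient (hence merely semi-definite) A, where f still
    # has minimizers but no unique one.
    A = np.array([[2.0, 0.0], [0.0, 0.0]])
    b = np.array([1.0, 0.0])
    print(gd_psd_quadratic(A, b))  # converges toward [0.5, 0.0]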
The Fast Convergence of Boosting.
- NIPS 2011.
- AdaBoost, with a variety of losses, minimizes its empirical risk
at rate \(\mathcal O(\ln(1/\epsilon))\) when the weak learnability assumption holds or
the risk has a minimizer, and at rate \(\mathcal O(1/\epsilon)\) in general.
Hartigan's Method: \(k\)-means without Voronoi. (With Andrea Vattani.)
- AISTATS 2010.
- Hartigan's method minimizes \(k\)-means cost point by point; it terminates when
points lie within regions defined by intersections of spheres (rather than just
Voronoi cells). A rough sketch of the method appears below.
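- Purely illustrative (not the paper's code), assuming every cluster starts nonempty:

    import numpy as np

    def hartigan_kmeans(X, k, seed=0, max_sweeps=100):
        # Sweep over points, moving a point to whichever cluster most decreases
        # the exact k-means cost (means updated immediately); stop when no
        # single move helps.  Means are recomputed naively for clarity.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        assign = rng.permutation(np.arange(n) % k)  # every cluster nonempty
        for _ in range(max_sweeps):
            changed = False
            for i in range(n):
                a = assign[i]
                sizes = np.bincount(assign, minlength=k)
                if sizes[a] == 1:
                    continue  # skip moves that would empty a cluster
                means = np.array([X[assign == c].mean(axis=0) for c in range(k)])
                # Exact cost decrease from removing X[i] from its cluster a ...
                removal = sizes[a] / (sizes[a] - 1) * np.sum((X[i] - means[a]) ** 2)
                # ... and cost increase from inserting it into each cluster b.
                insertion = np.array([
                    sizes[b] / (sizes[b] + 1) * np.sum((X[i] - means[b]) ** 2)
                    for b in range(k)])
                insertion[a] = removal  # staying put changes nothing
                b = int(np.argmin(insertion))
                if insertion[b] < removal:
                    assign[i] = b
                    changed = True
            if not changed:
                break
        return assign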
Signal decomposition using multiscale admixture models. (With John Lafferty.)
Dirichlet draws are sparse with high probability.
Blackwell Approachability and Minimax Theory.
Central Binomial Tail Bounds. (2009.)
Duality and Data Dependence in Boosting.
- Committee: Sanjoy Dasgupta (chair/advisor),
- Contains work from NIPS 2011, JMLR 2012, and COLT 2013 as above.
- The results of chapter 5 (whose proofs have some minor errors)
appeared later in a vastly expanded form within
"Convex risk minimization and conditional probability estimation" above.