mtelgars at cs dot ucsd dot edu
2007 - [?]. PhD in Computer Science, UCSD. Advisor: Sanjoy Dasgupta.
2004 - 2007. BS in Computer Science & Discrete Math, CMU.
2001 - 2003. Diploma in Violin Performance, Juilliard.
Boosting with the Logistic Loss is Consistent.
- COLT 2013.
- Optimization, generalization, and consistency guarantees for AdaBoost with
logistic and similar losses (sketched below).
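- A minimal illustrative sketch of the setting, as coordinate descent on the
logistic loss; the weak-hypothesis pool H, the fixed step size, and the toy
stopping rule are assumptions of this example, not the paper's algorithm verbatim:

    import numpy as np

    def boost_logistic(X, y, H, rounds=200, eta=0.5):
        """y in {-1,+1}^n; H is a list of weak hypotheses h: X -> {-1,+1}^n."""
        margins = np.zeros(len(y))                 # m_i = y_i * f(x_i)
        ensemble = []
        for _ in range(rounds):
            w = 1.0 / (1.0 + np.exp(margins))      # -loss'(margins): per-example weights
            scores = [np.dot(w * y, h(X)) for h in H]
            j = int(np.argmax(np.abs(scores)))     # most correlated weak hypothesis
            step = eta * np.sign(scores[j])        # crude fixed step; the paper
            margins += step * y * H[j](X)          # analyzes exact line search
            ensemble.append((step, H[j]))
        return ensemble, np.log1p(np.exp(-margins)).mean()   # empirical logistic risk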
Margins, Shrinkage, and Boosting.
- ICML 2013.
- AdaBoost, with a variety of losses, attains optimal margins
merely by multiplying the step size by a small constant (sketched below).
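- A hedged sketch of the modification: AdaBoost's usual line-search step for a
weak hypothesis with weighted edge gamma is scaled down by a small constant nu;
the value of nu here is an illustrative assumption:

    import numpy as np

    def shrunk_step(gamma, nu=0.1):
        alpha = 0.5 * np.log((1 + gamma) / (1 - gamma))  # AdaBoost's usual step
        return nu * alpha                                # shrinkage: scale it down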
Agglomerative Bregman Clustering. (With Sanjoy Dasgupta.)
- ICML 2012.
- Provides the natural agglomerative algorithm (sketched below),
with attention to: handling degenerate clusters via smoothing,
Bregman divergences for nondifferentiable convex functions,
and exponential families without minimality assumptions.
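- A minimal sketch of the agglomerative scheme: repeatedly merge the pair of
clusters whose merge increases the total cost least, where merging clusters
(n_a, mu_a) and (n_b, mu_b) under a Bregman divergence D costs
n_a D(mu_a, mu) + n_b D(mu_b, mu) with mu the pooled mean.  Squared Euclidean
distance below (recovering Ward's method) is an illustrative stand-in for the
paper's general divergences:

    import numpy as np

    def D(p, q):                                   # assumed: squared Euclidean
        return np.sum((p - q) ** 2)

    def agglomerate(X, k):
        clusters = [(1, x.astype(float)) for x in X]   # (size, mean) pairs
        while len(clusters) > k:
            best, pair = np.inf, None
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    (na, ma), (nb, mb) = clusters[i], clusters[j]
                    mu = (na * ma + nb * mb) / (na + nb)
                    cost = na * D(ma, mu) + nb * D(mb, mu)   # exact merge cost
                    if cost < best:
                        best, pair = cost, (i, j)
            i, j = pair
            (na, ma), (nb, mb) = clusters[i], clusters[j]
            merged = (na + nb, (na * ma + nb * mb) / (na + nb))
            clusters = [c for t, c in enumerate(clusters) if t not in (i, j)]
            clusters.append(merged)
        return clusters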
A Primal-Dual Convergence Analysis of Boosting.
- JMLR 13:561-606, 2012.
- This is the extended version of the NIPS paper "The Fast Convergence of Boosting".
Steepest Descent Analysis for Unregularized Linear
Prediction with Strictly Convex Penalties.
- NIPS Optimization Workshop 2011.
- Adapts some of the boosting techniques to other optimization problems,
for instance gradient descent on positive semidefinite quadratics (example below).
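- An illustrative instance: gradient descent on a positive semidefinite
quadratic \(f(x) = x^\top A x / 2\); the matrix, the \(1/L\) step size, and
the iteration count are assumptions of the example:

    import numpy as np

    A = np.array([[2.0, 0.0], [0.0, 0.5]])     # PSD quadratic
    x = np.array([1.0, 1.0])
    eta = 1.0 / np.linalg.eigvalsh(A).max()    # standard 1/L step size
    for _ in range(50):
        x = x - eta * (A @ x)                  # gradient of x^T A x / 2 is A x
    print(x, 0.5 * x @ A @ x)                  # both approach zero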
The Fast Convergence of Boosting.
- NIPS 2011.
- AdaBoost, with a variety of losses, minimizes its empirical risk
at rate \(\mathcal O(\ln(1/\epsilon))\) when the data is weakly learnable or the
risk attains its minimum, and at rate \(\mathcal O(1/\epsilon)\) in general
(restated schematically below).
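- Schematically, with constants suppressed and \(\hat{\mathcal R}\) the empirical
risk of the ensemble \(f_t\) after \(t\) rounds:
\[
  \hat{\mathcal R}(f_t) - \inf_f \hat{\mathcal R}(f) \le \epsilon
  \quad\text{once}\quad
  t = \begin{cases}
    \mathcal O(\ln(1/\epsilon)) & \text{(weak learnability, or attained minimizer)},\\
    \mathcal O(1/\epsilon) & \text{(in general)}.
  \end{cases}
\]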
Hartigan's Method: \(k\)-means without Voronoi. (With Andrea Vattani.)
[old javascript demo]
- AISTATS 2010.
- Hartigan's method minimizes the \(k\)-means cost point by point; it terminates when
points lie within regions defined by intersections of spheres (rather than just
the halfspaces of a Voronoi partition, as with Lloyd's method); a sketch follows.
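- A minimal sketch of Hartigan's method: move a single point to another cluster
whenever doing so strictly lowers the \(k\)-means cost, using the exact cost
change, which accounts for both means shifting.  The initialization and the
sweep order are illustrative assumptions:

    import numpy as np

    def hartigan(X, labels, k):
        labels = labels.copy()
        counts = np.bincount(labels, minlength=k).astype(float)
        means = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        improved = True
        while improved:
            improved = False
            for i, x in enumerate(X):
                a = labels[i]
                if counts[a] <= 1:
                    continue
                # exact decrease from removing x from its cluster a ...
                gain = counts[a] / (counts[a] - 1) * np.sum((x - means[a]) ** 2)
                # ... versus the exact increase from adding it elsewhere
                costs = [counts[b] / (counts[b] + 1) * np.sum((x - means[b]) ** 2)
                         if b != a else np.inf for b in range(k)]
                b = int(np.argmin(costs))
                if costs[b] < gain:                # strict improvement: move x
                    means[a] = (counts[a] * means[a] - x) / (counts[a] - 1)
                    means[b] = (counts[b] * means[b] + x) / (counts[b] + 1)
                    counts[a] -= 1
                    counts[b] += 1
                    labels[i] = b
                    improved = True
        return labels, means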
Signal decomposition using multiscale admixture models. (With John Lafferty.)
Dirichlet draws are sparse with high probability.
Blackwell Approachability and Minimax Theory.
Central Binomial Tail Bounds. (2009.)