Hello, Friend.

Matus Telgarsky

mtelgars at cs dot ucsd dot edu

2014 - (?). Postdoc in EECS, University of Michigan. Host: Jake Abernethy.

2013 - 2014. Consultant at MSR NYC. Host: John Langford.

2013 - 2014. Postdoc in Statistics, Rutgers University. Host: Tong Zhang.

2007 - 2013. PhD in Computer Science, UCSD. Advisor: Sanjoy Dasgupta.

2004 - 2007. BS in Computer Science & Discrete Math, CMU.

2001 - 2003. Diploma in Violin Performance, Juilliard.

Convex Risk Minimization and Conditional Probability Estimation. (With Miroslav Dudík and Robert Schapire.) [arXiv] [short video]

Scalable Nonlinear Learning with Adaptive Polynomial Expansions. (With Alekh Agarwal, Alina Beygelzimer, Daniel Hsu, and John Langford.) [arXiv]

Tensor decompositions for learning latent variable models. (With Anima Anandkumar, Rong Ge, Daniel Hsu, and Sham M. Kakade.) [arXiv] [jmlr]

Moment-based Uniform Deviation Bounds for \(k\)-means and Friends. (With Sanjoy Dasgupta.) [pdf] [arXiv] [poster]

Boosting with the Logistic Loss is Consistent. [arXiv] [short video]

Margins, Shrinkage, and Boosting. [arXiv] [video]

Agglomerative Bregman Clustering. (With Sanjoy Dasgupta.) [pdf] [short video]

A Primal-Dual Convergence Analysis of Boosting. [arXiv] [jmlr]

Steepest Descent Analysis for Unregularized Linear Prediction with Strictly Convex Penalties. [pdf] [video]

The Fast Convergence of Boosting. [pdf]

Hartigan's Method: \(k\)-means without Voronoi. (With Andrea Vattani.) [pdf] [old javascript demo]

Signal decomposition using multiscale admixture models. (With John Lafferty.)

Representation Benefits of Deep Feedforward Networks. [arXiv]

Dirichlet draws are sparse with high probability. [arXiv]

Blackwell Approachability and Minimax Theory. (2011.) [arXiv]

Central Binomial Tail Bounds. (2009.) [arXiv]

Ph.D. Thesis
Duality and Data Dependence in Boosting. (2013.) [pdf]