X. Wang and S. Dasgupta. An algorithm for L1 nearest neighbor search via monotonic embedding.
*Neural Information Processing Systems (NIPS)*, 2016.

S. Vikram and S. Dasgupta. Interactive Bayesian hierarchical clustering.
*Thirty-Third International Conference on Machine Learning (ICML)*, 2016.

S. Dasgupta. A cost function for similarity-based hierarchical clustering.
*Forty-Eighth ACM Symposium on Theory of Computing (STOC)*, 2016.

S. Dasgupta and K. Sinha. Randomized partition trees for nearest neighbor search.
*Algorithmica*, 72(1): 237-263, 2015.

K. Chaudhuri and S. Dasgupta. Rates of convergence for nearest neighbor classification.
*Neural Information Processing Systems (NIPS)*, 2014.

S. Dasgupta and S. Kpotufe. Optimal rates for k-NN density and mode estimation.
*Neural Information Processing Systems (NIPS)*, 2014.

M. Ackerman and S. Dasgupta. Incremental clustering: the case for extra clusters.
*Neural Information Processing Systems (NIPS)*, 2014.

K. Chaudhuri, S. Dasgupta, S. Kpotufe and U. von Luxburg. Consistent procedures for cluster tree estimation and pruning.
*IEEE Transactions on Information Theory*, 60(12): 7900-7912, 2014.

C. Tosh and S. Dasgupta. Lower bounds for the Gibbs sampler on mixtures of Gaussians.
*Thirty-First International Conference on Machine Learning (ICML)*, 2014.

A. Balsubramani, S. Dasgupta, and Y. Freund. The fast convergence of incremental PCA.
*Neural Information Processing Systems (NIPS)*, 2013.

M. Telgarsky and S. Dasgupta. Moment-based uniform deviation bounds for k-means and friends.
*Neural Information Processing Systems (NIPS)*, 2013.

S. Dasgupta and K. Sinha. Randomized partition trees for exact nearest neighbor search.
*Twenty-Sixth Conference on Learning Theory (COLT)*, 2013.

S. Dasgupta. Consistency of nearest neighbor classification under selective sampling.
*Twenty-Fifth Conference on Learning Theory (COLT)*, 2012.

M. Telgarsky and S. Dasgupta. Agglomerative Bregman clustering.
*Twenty-Ninth International Conference on Machine Learning (ICML)*, 2012.

S. Kpotufe and S. Dasgupta. A tree-based regressor that adapts to intrinsic dimension.
*Journal of Computer and System Sciences*, 78(5): 1496-1515, 2012.

S. Dasgupta. Algorithms for minimally supervised learning.
*Lectures at Institut Henri Poincare*, 2011.

S. Dasgupta. Two faces of active learning.
*Theoretical Computer Science*, 412(19): 1767-1781, 2011.

K. Chaudhuri and S. Dasgupta. Rates of convergence for the cluster tree.
*Neural Information Processing Systems (NIPS)*, 2010.

N.A. Verma, S. Kpotufe, and S. Dasgupta. Which spatial partition trees are adaptive to intrinsic dimension?
*Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI)*, 2009.

S. Dasgupta and Y. Freund. Random projection trees for vector quantization.
*IEEE Transactions on Information Theory*, 55(7), July 2009.

A. Beygelzimer, S. Dasgupta, and J. Langford. Importance-weighted active learning.
*Twenty-Sixth International Conference on Machine Learning (ICML)*, 2009.

S. Dasgupta, A. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning.
*Journal of Machine Learning Research*, 10:281-299, 2009.

S. Dasgupta and D.J. Hsu. Hierarchical sampling for active learning.
*Twenty-Fifth International Conference on Machine Learning (ICML)*, 2008.

S. Dasgupta and Y. Freund. Random projection trees and low dimensional manifolds.
*Fortieth ACM Symposium on Theory of Computing (STOC)*, 2008.

S. Dasgupta. The hardness of k-means clustering.
UCSD Technical Report CS2008-0916, 2008.

S. Dasgupta, D.J. Hsu, and C. Monteleoni. A general agnostic active learning algorithm.
*Neural Information Processing Systems (NIPS)*, 2007.

Y. Freund, S. Dasgupta, M. Kabra, and N. Verma. Learning the structure of manifolds using random projections.
*Neural Information Processing Systems (NIPS)*, 2007.

S. Dasgupta and L.J. Schulman. A probabilistic analysis of EM for mixtures of separated, spherical Gaussians.
*Journal of Machine Learning Research*, 8:203-226, 2007.

L. Cayton and S. Dasgupta. A learning framework for nearest-neighbor search.
*Neural Information Processing Systems (NIPS)*, 2007.

S. Dasgupta and D. Hsu. On-line estimation with the multivariate Gaussian distribution.
*Twentieth Annual Conference on Learning Theory (COLT)*, 2007.

S. Dasgupta, D.J. Hsu, and N. Verma. A concentration theorem for projections.
*Twenty-Second Conference on Uncertainty in Artificial Intelligence (UAI)*, 2006.

L. Cayton and S. Dasgupta. Robust Euclidean embedding.
*Twenty-Third International Conference on Machine Learning (ICML)*, 2006.

S. Dasgupta and P.M. Long. Performance guarantees for hierarchical clustering.
*Journal of Computer and System Sciences*, 70(4):555-569, 2005.

S. Dasgupta. Coarse sample complexity bounds for active learning.
*Neural Information Processing Systems (NIPS)*, 2005.

S. Dasgupta, A. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning.
*Eighteenth Annual Conference on Learning Theory (COLT)*, 2005.

T. Batu, S. Dasgupta, R. Kumar, and R. Rubinfeld. The complexity of approximating the entropy.
*SIAM Journal on Computing*, 35(1):132-150, 2005.

S. Dasgupta. Analysis of a greedy active learning strategy.
*Neural Information Processing Systems (NIPS)*, 2004.

S. Dasgupta and A. Gupta. An elementary proof of a theorem of Johnson and Lindenstrauss.
*Random Structures and Algorithms*, 22(1):60-65, 2003.

D. Kauchak and S. Dasgupta. An incremental improvement procedure for hierarchical clustering.
*Neural Information Processing Systems (NIPS)*, 2003.

S. Dasgupta, W.S. Lee, and P.M. Long. A theoretical analysis of query selection for collaborative filtering.
*Machine Learning*, 51(3):283-298, 2003.

S. Dasgupta and P.M. Long. Boosting with diverse base classifiers.
*Sixteenth Annual Conference on Learning Theory (COLT)*, 2003.

T. Batu, S. Dasgupta, R. Kumar, and R. Rubinfeld. The complexity of approximating the entropy.
*Thirty-Fourth Annual Symposium on Theory of Computing (STOC)*, 2002.

S. Dasgupta. Performance guarantees for hierarchical clustering.
*Fifteenth Annual Conference on Learning Theory (COLT)*, 2002.

S. Dasgupta, M.L. Littman, and D. McAllester. PAC generalization bounds for co-training.
*Neural Information Processing Systems (NIPS)*, 2001.

M. Collins, S. Dasgupta, and R.E. Schapire. A generalization of principal component analysis to the exponential family.
*Neural Information Processing Systems (NIPS)*, 2001.

D. Precup, R.S. Sutton, and S. Dasgupta. Off-policy temporal-difference learning with function approximation.
*Eighteenth International Conference on Machine Learning (ICML)*, 2001.

S. Dasgupta. Experiments with random projection.
*Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI)*, 2000.

S. Dasgupta and L.J. Schulman. A two-round variant of EM for Gaussian mixtures.
*Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI)*, 2000.

S. Dasgupta. Learning probability distributions.
Ph.D. dissertation, University of California at Berkeley, 2000.

S. Dasgupta. Learning mixtures of Gaussians.
*Fortieth Annual IEEE Symposium on Foundations of Computer Science (FOCS)*, 1999.

S. Dasgupta. Learning polytrees.
*Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI)*, 1999.

S. Dasgupta. The sample complexity of learning fixed-structure Bayesian nets.
*Machine Learning*, 29(2-3):165-180, 1997.