Angelique Taylor
Welcome to My Homepage




I am a PhD student in the Robotics, Health, and Communication (RHC) Lab in the Computer Science and Engineering department at the University of California San Diego, where I work under the direction of Dr. Laurel Riek. My research lies at the intersection of computer vision, robotics, and artificial intelligence. My work aims to design algorithms that enable robots to learn how to behave in human spaces using social context. I am also a National Science Foundation Graduate Research Fellow, Arthur J. Schmitt Presidential Fellow, GEM Fellow, and Google Anita Borg Memorial Scholar.

 
Research Concentration

Artificial Intelligence

Autonomous Systems

Computer Vision

Robot Perception

EDUCATION
 

University of California San Diego
Doctor of Philosophy in Computer Science
Anticipated Graduation: May 2020

University of Missouri-Columbia
Bachelor of Science in Computer Engineering
Minor: Computer Science
May 2015

University of Missouri-Columbia
Bachelor of Science in Electrical Engineering
Minor: Computer Science
May 2015

Saint Louis Community College
Associate of Science in Engineering Science
May 2012

PUBLICATIONS
 
PROJECTS
 
Autonomous Robots Estimating Affiliation from Human Groups.

As robots become more integrated into human spaces, it is important that they can move autonomously during face-to-face interaction with people, as this facilitates fluent interaction. Humans naturally and simultaneously interact with one another and can join different conversational groups in a social environment, but robots cannot yet do this. This has motivated us to design algorithms that allow robots to autonomously join human groups and adapt their behavior to the group.

A Biologically Inspired Salient Region Filter for High-Fidelity Robotic Perception.

High-fidelity robotic visual perception is on the brink of realization, but is not yet fully attainable due to the computational resource constraints of current mobile computing platforms. Increased resolution will enhance robotic vision with the ability to detect small and faraway objects, allowing robots to sense the world in greater detail, as a human would. We argue that high-fidelity perception is necessary to enable robots to dynamically adapt to naturalistic environments. In this work, we introduce a method that borrows ideas from human visual perception to give robots a better sense of where to look, alleviating hardware resource constraints. Our method is designed as an abstraction that works with any object or pedestrian detection algorithm with little modification to the original algorithm, provided that the input is an RGB-D image. We compare our method to a HOG-based pedestrian detector on a high-definition dataset and show that our algorithm achieves up to 100% faster computation without a significant loss in detection accuracy.
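To make the idea concrete, here is a minimal Python sketch of the general saliency-gating pattern described above; it is not the paper's actual biologically inspired filter. A cheap, hypothetical saliency score (depth-weighted intensity contrast) selects a few regions of an RGB-D frame, and a stock OpenCV HOG pedestrian detector runs only inside those regions. The inputs are assumed to be an 8-bit BGR image and a float depth map in meters with the same height and width.

import cv2
import numpy as np

def salient_rois(rgb, depth, grid=8, keep_frac=0.25):
    """Score coarse grid cells and keep the most salient ones as ROIs.

    The saliency measure (local contrast divided by mean depth) is a
    stand-in assumption, not the filter from the paper.
    """
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY).astype(np.float32)
    h, w = gray.shape
    ch, cw = h // grid, w // grid
    scores, cells = [], []
    for i in range(grid):
        for j in range(grid):
            y, x = i * ch, j * cw
            patch = gray[y:y + ch, x:x + cw]
            d = depth[y:y + ch, x:x + cw]
            # High local contrast and nearby structure (small depth) score higher.
            scores.append(patch.std() / (1.0 + np.nanmean(d)))
            cells.append((x, y, cw, ch))
    keep = max(1, int(len(cells) * keep_frac))
    order = np.argsort(scores)[::-1][:keep]
    return [cells[k] for k in order]

def detect_pedestrians(rgb, depth):
    """Run OpenCV's stock HOG pedestrian detector only inside salient regions."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    detections = []
    for (x, y, w, h) in salient_rois(rgb, depth):
        # Pad each ROI so full 64x128 detection windows fit inside it.
        x0, y0 = max(0, x - 32), max(0, y - 64)
        roi = rgb[y0:y + h + 64, x0:x + w + 32]
        if roi.shape[0] < 128 or roi.shape[1] < 64:
            continue
        boxes, _ = hog.detectMultiScale(roi, winStride=(8, 8))
        detections += [(bx + x0, by + y0, bw, bh) for (bx, by, bw, bh) in boxes]
    return detections

In this pattern the speedup comes from skipping low-saliency cells entirely: with keep_frac=0.25, the detector scans roughly a quarter of the frame instead of all of it.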

ACKNOWLEDGEMENTS
 

CONTACT
 