I am a PhD student in the Healthcare Robotics Lab in the Computer Science and Engineering department at the University of California San Diego. I work under the direction of Dr. Laurel Riek. My research lies at the intersection of computer vision, robotics, and artificial intelligence. My work aims to design algorithms that enable robots to interact and work with groups of people in real-world environments. I am also a National Science Foundation Graduate Research Fellow, Arthur J. Schmitt Presidential Fellow, GEM Fellow, and Google Anita Borg Memorial Scholar.
As robots enter human-occupied environments, it is important that they work effectively with groups of people. To achieve this goal, robots need the ability to detect groups. This requires ego-centric (robot-centric) perception, because placing external sensors in the environment is impractical. Additionally, robots need learning algorithms that do not require extensive training, as a priori knowledge of an environment is difficult to acquire. We introduce a new algorithm that addresses these needs: it detects moving groups in real-world, ego-centric RGB-D data from a mobile robot, and it uses unsupervised learning that leverages the underlying structure of the data. This work will enable robots to work with human teams in public settings using only their onboard sensors and minimal training.
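The core idea of grouping detected people without labeled training data can be illustrated with a toy proximity-based clustering. This is a minimal sketch, not the algorithm described above: it assumes each person has already been detected as a 2-D position in meters, and it merges people who are chained together within a distance threshold.

```python
import math

def group_people(positions, max_dist=1.5):
    """Cluster 2-D person positions (meters) into groups by proximity.

    Single-linkage grouping: two people belong to the same group if they
    are within max_dist of each other, directly or through a chain of
    intermediate people. No training data is required.
    """
    n = len(positions)
    parent = list(range(n))  # union-find: each person starts in their own group

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            xi, yi = positions[i]
            xj, yj = positions[j]
            if math.hypot(xi - xj, yi - yj) <= max_dist:
                parent[find(i)] = find(j)  # merge the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Three people standing near each other, plus one distant bystander:
print(group_people([(0.0, 0.0), (1.0, 0.2), (1.8, 0.5), (6.0, 6.0)]))
# → [[0, 1, 2], [3]]
```

A distance threshold alone ignores motion and orientation cues, which is why richer structure in the data (as the work above exploits) matters for real scenes.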
As robots enter people's homes, it is important that they are able to effectively accomplish tasks that people perform every day. For the RoboCup@Home 2017 challenge, we designed an algorithm that enabled a Toyota Human Support Robot to transport groceries from a table to a cupboard. Each shelf of the cupboard held a different category of objects (e.g., bottles, cans), so we had to ensure that each object was placed on the correct shelf. We used Simultaneous Localization and Mapping (SLAM) to enable the HSR to autonomously navigate from the table to the cupboard. Additionally, we used a state-of-the-art object detection algorithm, YOLO, to detect the different types of objects.
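The placement step reduces to mapping each detected category to its shelf. The sketch below is only illustrative: the category names and shelf indices are hypothetical, not the actual YOLO class labels or cupboard layout used in the competition.

```python
# Hypothetical category-to-shelf table (illustrative names, not the real labels).
SHELF_FOR_CATEGORY = {
    "bottle": 0,  # top shelf
    "can": 1,     # middle shelf
    "box": 2,     # bottom shelf
}

def plan_placements(detections):
    """Map each detection (label, confidence) to a target shelf index.

    Unknown categories map to None, signaling that the robot should
    fall back to a recovery behavior (e.g., asking a human for help).
    """
    return [(label, SHELF_FOR_CATEGORY.get(label)) for label, _conf in detections]

print(plan_placements([("bottle", 0.91), ("can", 0.88), ("cereal", 0.60)]))
# → [('bottle', 0), ('can', 1), ('cereal', None)]
```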
As robots become more integrated into human spaces, it is important that they move around autonomously during face-to-face interaction with humans, as this facilitates fluent interaction. Although humans naturally and simultaneously interact with other humans and are able to join various conversational groups in a social environment, robots are not able to do this. This has motivated us to design algorithms that allow robots to autonomously join human groups and adapt their behavior to the group.
High-fidelity robotic visual perception is on the brink of realization, but is not yet fully attainable due to the computational resource constraints of current mobile computing platforms. Increased resolution will enhance robotic vision with the ability to detect small and faraway objects, allowing robots to sense the world with greater detail, as a human would. We argue that high-fidelity perception is necessary to enable robots to dynamically adapt to naturalistic environments. In this work, we introduce a method that borrows ideas from human visual perception to give robots a better sense of where to look, alleviating hardware resource constraints. Our method is designed as an abstraction that works with any object or pedestrian detection algorithm with little modification to the original algorithm, provided that the input is an RGB-D image. We compare our method against a HOG-based pedestrian detector on a high-definition dataset and show that our algorithm achieves up to 100% faster computation time without a significant loss in detection accuracy.
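One way such "where to look" pruning can work with RGB-D input is to use the depth channel to discard image regions before the (expensive) detector runs. The sketch below is an assumption-laden toy version of this idea, not the method from the paper: it keeps only sliding windows whose pixels mostly fall within a hypothetical depth band where pedestrians would appear at a useful scale.

```python
import numpy as np

def depth_roi_mask(depth, near=0.5, far=4.0):
    """Boolean mask of pixels whose depth (meters) lies in a band of interest.

    The near/far bounds are illustrative; pixels outside the band are
    unlikely to contain a pedestrian at a detectable scale.
    """
    return (depth >= near) & (depth <= far)

def candidate_windows(mask, win=64, stride=64, min_fill=0.5):
    """Return top-left (y, x) corners of win x win windows worth detecting in.

    A window qualifies if at least min_fill of its pixels are in the depth
    band; all other windows are skipped, which is where the speedup comes from.
    """
    h, w = mask.shape
    corners = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            if mask[y:y + win, x:x + win].mean() >= min_fill:
                corners.append((y, x))
    return corners

# Synthetic 128x128 depth image: one nearby blob, far-away background.
depth = np.full((128, 128), 10.0)
depth[32:96, 64:128] = 2.0  # a person-sized region 2 m away
print(candidate_windows(depth_roi_mask(depth)))
# → [(0, 64), (64, 64)]
```

Any detector (e.g., a HOG-based one) could then be run only on the surviving windows instead of the full frame, trading a cheap depth pass for a large reduction in detector invocations.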
I am thankful to have the support of the National Science Foundation Graduate Research Fellowship (NSF GRFP), the Arthur J. Schmitt Presidential Fellowship, the GEM National Consortium, and the Google Anita Borg Memorial Scholarship.