Research:



My research interests are in the areas of autonomy, human-robot interaction, and healthcare engineering. I explore ways to enable robots to solve problems in dynamic, complex human environments, such as hospitals and homes, and to effectively work with people. This work is enabled through algorithm design, robot building, and ecological experimentation.

I am particularly interested in addressing real-world problems in healthcare. I have worked in areas including emergency medicine, intensive care, neurorehabilitation, geriatrics, and medical education.

I am grateful to the NSF, AFOSR, DOE, IBM, The Luce Foundation, Adobe, and Amazon for supporting this work.


New Projects (2017)


  • Robots for Healthy Aging, Wellness, and Neurorehabilitation (2017 - 2022)

    The use of robots in healthcare represents an exciting opportunity to help a large number of people. Robots can be used to enable people with cognitive, sensory, and motor impairments, help people who are ill or injured, support caregivers, and aid the clinical workforce. Our lab is pursuing a series of projects in this domain to explore how robots can provide long-term, adaptive, personalized support to people undergoing neurorehabilitation for TBI, PTSD, and stroke, as well as encourage wellness as people age.

    More details coming soon! In the meantime, read:
  • Riek, L.D. (2017). "Healthcare Robotics". Communications of the ACM, pp. 1-8. In press. [pdf]

  • Riek, L.D. "Robotics Technology in Mental Healthcare". In D. Luxton (Ed.), Artificial Intelligence in Behavioral Health and Mental Health Care. Elsevier, 2015. pp. 185-203. doi: 10.1016/B978-0-12-420248-1.00008-8 [pdf]

  • NSF: NRI: Coordinating Human-Robot Teams in Uncertain Environments (2017 - 2020)

    The decreasing cost and increasing sophistication of robot hardware is creating new opportunities for teams of robots to be deployed in combination with skilled humans to support and augment labor-intensive and/or dangerous manual work. The vision is for robots to free up skilled workers' time so they can focus on the tasks they excel at (complex problem solving, dexterous manipulation, customer service, etc.), while robots help with the distracting and frustrating parts of the work, such as delivering materials or fetching supplies. This vision is being realized across many sectors of the US economy and abroad, such as warehouse management, assembly manufacturing, and disaster response. However, progress in this area is being stymied by current methods that are rigid and inflexible, and that rely on unrealistic models of human-robot interaction. This project seeks to overcome these problems by proposing new models and methods for teams of robots to coordinate with teams of humans to complete complex tasks.
    The solution methods developed in this project will allow robots to reason about uncertainty in the domain and in their human teammates while optimizing their behavior. The methods are broadly applicable to human-robot collaboration domains, but they will be evaluated in an emergency department, an environment with a large amount of uncertainty and many delivery and supply tasks during high-volume times, where a team of robots can assist. Experiments will take place in simulation and in the UC San Diego Simulation and Training Center with varying numbers of humans and robots. The results of this project have the potential to transform the way human-robot coordination is performed.
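    To make "reasoning about uncertainty while optimizing behavior" concrete, the following is a minimal Python sketch of value iteration over a tiny Markov decision process for a delivery robot deciding when to fetch supplies. The states, probabilities, and rewards are invented for illustration; this is not the project's actual model.

        # Minimal value-iteration sketch for a delivery robot choosing when to
        # fetch supplies under uncertain demand. All numbers are illustrative.
        GAMMA = 0.95

        states = ["idle", "supply_low", "supply_critical"]
        actions = ["wait", "fetch"]

        # transitions[s][a] = list of (probability, next_state, reward)
        transitions = {
            "idle": {
                "wait":  [(0.8, "idle", 0.0), (0.2, "supply_low", 0.0)],
                "fetch": [(1.0, "idle", -1.0)],               # wasted trip
            },
            "supply_low": {
                "wait":  [(0.5, "supply_low", 0.0), (0.5, "supply_critical", -2.0)],
                "fetch": [(1.0, "idle", 1.0)],                # restocked in time
            },
            "supply_critical": {
                "wait":  [(1.0, "supply_critical", -10.0)],   # clinicians blocked
                "fetch": [(1.0, "idle", -3.0)],               # late restock
            },
        }

        V = {s: 0.0 for s in states}
        for _ in range(200):  # value iteration to (approximate) convergence
            V = {s: max(sum(p * (r + GAMMA * V[s2]) for p, s2, r in transitions[s][a])
                        for a in actions)
                 for s in states}

        policy = {s: max(actions,
                         key=lambda a: sum(p * (r + GAMMA * V[s2])
                                           for p, s2, r in transitions[s][a]))
                  for s in states}
        print(policy)  # fetch once supplies run low, wait otherwise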


  • NSF: Smart Factories - An Intelligent Material Delivery System to Improve Human-Robot Workflow and Productivity in Assembly Manufacturing (2017-2020)

    Manufacturing represents a quarter of all employment in the US. To reshore jobs, improve operations, and recruit, retain, and retrain skilled workers, companies are increasingly using robotics technology. Ideally, robots will not replace humans, but rather team with them to improve productivity. However, most industrial robots are poorly integrated into human workflow, causing expensive work stoppages ($1.7M per hour), worker stress, and talent loss. The research goal of this project is to address this problem by designing novel methods to improve human-robot workflow and productivity in assembly manufacturing through an intelligent material delivery system (IMDS), which will closely integrate with and support the manual work process. This project will investigate innovative, multi-disciplinary approaches to this research area, dramatically advancing the state of the art in smart manufacturing and human-centered robotics.

  • AFOSR: Trust Affordances in Human Automation Teaming (2017-2020):

    The goal of this project is to explore the foundational computational and physical design constraints that facilitate robot trustworthiness. It is rooted in questions regarding the processes of trust building and trust calibration within high-risk human-robot teaming, and will involve physical human-robot experimentation, such as collaborating on time-sensitive, safety-critical tasks, like cooperative manipulation to safely move a heavy object. The project will enable robots to learn to adapt to and anticipate human motion, and to alter their own behaviors to become safe, competent, and trustworthy teammates. Results of this project will substantially inform the science of trust between humans and autonomous systems, including providing new methods for mutual capability assessment and adaptation, informing future design guidelines for trusted automation affordances and improved transparency, and offering new insights for mixed-initiative team training.



    Robot Perception of Context


    Human environments are dynamic and always in flux. People constantly re-purpose spaces, rearrange objects, and alter their behaviors. Human perception is robust in the face of these changes, and takes a context-based approach to watch, learn, and adapt dynamically to change. However, most machine perception is content-based: it ignores important perceptual cues, is only successful within known, static environments, and tends to report artificially high success rates across biased datasets. This significantly limits robots' perceptual autonomy when operating in real time in human environments, making them inflexible when faced with noise and change.

    My group has been working to address this gap on several fronts. First, we designed a model of context that is computationally fast and robust, and incorporates multidisciplinary inspiration from neuroscience and entomology (O'Connor and Riek, 2015). This first effort included a successful validation of context perception across a noisy YouTube dataset of human activities that mimics real-world operating conditions for robots (e.g., frequent occlusion, great variation in lighting and sound levels). We then built on this work to enable a mobile robot to automatically perceive context across noisy, busy locations on campus, and use that to inform its behaviors around people (O'Connor, Hayes, and Riek, 2014; Nigam and Riek, 2015).
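    As a simplified illustration of what a context-informed approach adds over a purely content-based one, the sketch below fuses a detector's label scores with a prior over labels conditioned on the perceived context (place). The labels, contexts, and probabilities are invented for illustration; this is not the model from O'Connor and Riek (2015).

        # Toy sketch of context-informed perception: fuse a detector's label
        # scores with a prior over labels given the inferred context (place).
        def fuse(detector_scores, context, context_prior):
            """Return a normalized posterior over labels given detection and context."""
            posterior = {label: detector_scores[label] *
                                context_prior[context].get(label, 1e-6)
                         for label in detector_scores}
            z = sum(posterior.values())
            return {label: p / z for label, p in posterior.items()}

        context_prior = {
            "kitchen":  {"mug": 0.6, "ball": 0.1, "scalpel": 0.3},
            "hospital": {"mug": 0.2, "ball": 0.1, "scalpel": 0.7},
        }
        detector_scores = {"mug": 0.45, "ball": 0.05, "scalpel": 0.50}  # ambiguous detection

        print(fuse(detector_scores, "kitchen", context_prior))   # "mug" becomes most likely
        print(fuse(detector_scores, "hospital", context_prior))  # "scalpel" becomes most likely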

    Recently, we have developed new methods for generating object proposals (Chan, Taylor, and Riek, 2017), which enable robots to quickly perceive their environment. We are applying this work to enable robots to sense and partner with human teams in real time (Taylor and Riek, 2016).
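    For readers unfamiliar with object proposals, the sketch below shows the basic input/output shape of the problem: generate candidate bounding boxes that a downstream recognizer can rank and prune. It is a naive sliding-window generator, not the method from Chan, Taylor, and Riek (2017), which is far more selective.

        # Naive sliding-window generation of object proposals (candidate boxes).
        # Real proposal methods produce far fewer, better-placed candidates.
        import numpy as np

        def sliding_window_proposals(image_h, image_w, scales=(64, 128, 256), stride=32):
            """Return an array of (x, y, w, h) candidate boxes covering the image."""
            boxes = []
            for size in scales:
                for y in range(0, image_h - size + 1, stride):
                    for x in range(0, image_w - size + 1, stride):
                        boxes.append((x, y, size, size))
            return np.array(boxes)

        proposals = sliding_window_proposals(480, 640)
        print(len(proposals), "candidate boxes to rank and prune")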



    Selected publications:


    Team Coordination Dynamics: Perception and Synthesis


    Humans and other animals can perceive the dynamics of local motion to synchronize their behavior with others. These coordinated behaviors afford great evolutionary advantages: animals can protect against predators, catch larger prey, and solve more complex problems than when working alone. What is even more remarkable is that this extends far beyond the animal kingdom; everything from galactic tides to ionic channeling in cells exhibits synchronous behavior. Thus, many scientists, from mathematicians and biologists to cognitive scientists and economists, are exploring the phenomenon of synchronization and establishing a foundation for how to model and predict it within their respective fields.

    I explore modeling this phenomenon within human teams, as well as designing new algorithms to inform how robots can synthesize their behavior to cooperate with humans. My group's first work in this area was to design a new, non-linear method to model the emergence of group entrainment (Iqbal and Riek, IEEE TAC 2015), which we experimentally validated with both human-human and human-robot teams. We have since built anticipation algorithms for robots to sense this emergence in real time and coordinate their activity with people (Iqbal and Riek, ICMI 2015; Iqbal, Rack, and Riek, IEEE T-RO).
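    A classic, minimal way to see how group-level synchrony emerges and can be measured is the Kuramoto oscillator model, sketched below. This is not the non-linear model from Iqbal and Riek (2015); it only illustrates how coupled agents entrain and how a scalar "degree of synchrony" (the order parameter r) can be tracked.

        # Kuramoto-oscillator sketch: N coupled agents with different natural
        # frequencies gradually entrain; r in [0, 1] measures group synchrony.
        import numpy as np

        rng = np.random.default_rng(0)
        N, K, dt, steps = 8, 1.5, 0.01, 3000          # agents, coupling, step size, iterations
        omega = rng.normal(2 * np.pi, 0.5, N)          # natural frequencies (rad/s)
        theta = rng.uniform(0, 2 * np.pi, N)           # initial phases

        for _ in range(steps):
            coupling = (K / N) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
            theta += dt * (omega + coupling)

        r = abs(np.mean(np.exp(1j * theta)))           # ~0 incoherent, ~1 fully entrained
        print(f"group synchrony r = {r:.2f}")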

    Selected publications:


    Intelligent Health Technology


    At least 400,000 people die every year in US hospitals due to preventable medical errors. This is the third leading cause of death in our country, and a major public health crisis. The majority of these deaths are due to poorly designed technology that fails to support a culture of safety, and to communication problems between clinicians, patients, and other stakeholders. My group focuses on helping address these problems by building on the success of the aforementioned robotics projects, and designing technology across a range of healthcare settings, from the operating room to the bedside.

    One project, supported by the NSF CAREER award, involves the design of the next generation of robotic human patient simulator (HPS) systems. These are life-sized robots that train clinicians to safely treat patients. They are the most commonly used android robots in America: over 180,000 doctors, nurses, and combat medics train on them annually. However, all commercial HPS systems are completely facially inexpressive, which destroys clinician immersion and understanding of critical patient cues. To address this, my group is designing a new, 21-DOF expressive robotic head capable of conveying actual patient expressions of stroke, neurological impairment, and pain, and capable of sensing and interacting with clinical learners (Moosaei et al., 2017; Moosaei, Gonzales, and Riek, 2014; Moosaei, Hayes, and Riek, 2015). In addition to helping advance research in robot expression synthesis, this project also employs novel manufacturing techniques to create a low-cost system that integrates with existing HPS systems, which is in the process of being licensed to industry (Riek, 2015).
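    As a purely hypothetical sketch of what expression synthesis on such a head involves at the lowest level, the code below blends a small set of per-servo expression poses by intensity. The servo names, counts, and angles are invented; they do not describe the 21-DOF head above.

        # Hypothetical sketch: blend expression "poses" (per-servo target angles)
        # by intensity to drive an expressive robotic face. All values invented.
        NEUTRAL = {"brow_l": 90, "brow_r": 90, "lip_corner_l": 90, "lip_corner_r": 90}

        POSES = {
            "pain":  {"brow_l": 60, "brow_r": 60, "lip_corner_l": 70, "lip_corner_r": 70},
            "droop": {"brow_l": 90, "brow_r": 95, "lip_corner_l": 90, "lip_corner_r": 120},
        }

        def blend(intensities):
            """Return per-servo angles for a weighted mix of expression poses."""
            angles = dict(NEUTRAL)
            for pose, weight in intensities.items():
                for servo, target in POSES[pose].items():
                    angles[servo] += weight * (target - NEUTRAL[servo])
            return angles

        print(blend({"pain": 0.7, "droop": 0.3}))  # mostly pained, with slight facial droop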

    In another project, my group is designing new tools to address another major patient safety challenge: resuscitation. As with the aforementioned coordination dynamics work, well-coordinated resuscitation teams are more likely to save lives. This project introduces an intelligent, interactive tool that enables clinicians to quickly coordinate their practice, keep track of all drugs they have administered, and more effectively administer CPR. This tool was tested in both urban and regional pediatric intensive care units, and in rural emergency departments (Gonzales, Cheung, and Riek, 2015; Gonzales et al., 2016).
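    To give a feel for the kind of bookkeeping such a tool performs, here is a hypothetical sketch of a timestamped log of administered drugs with a check against a minimum dosing interval. The drug name and interval are placeholders, not clinical guidance, and this is not the tool from the papers above.

        # Hypothetical sketch: timestamped drug-administration log with a
        # minimum-interval check. Drug names and intervals are placeholders.
        from datetime import datetime, timedelta

        DOSING_INTERVAL = {"epinephrine": timedelta(minutes=3)}
        event_log = []  # (timestamp, drug, dose)

        def record_dose(drug, dose, now=None):
            now = now or datetime.now()
            prior = [t for t, d, _ in event_log if d == drug]
            if prior and drug in DOSING_INTERVAL:
                since_last = now - max(prior)
                if since_last < DOSING_INTERVAL[drug]:
                    print(f"warning: {drug} given {since_last} after previous dose")
            event_log.append((now, drug, dose))

        record_dose("epinephrine", "1 mg")
        record_dose("epinephrine", "1 mg")  # immediately repeated, triggers the warning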



    Selected publications:




    Robot Non-Verbal Behavior: Synthesis and Understanding


    In addition to understanding people, robots also need to communicate with them. While sometimes this happens via verbal communication, other communication modalities can be leveraged both to rapidly communicate important information and to engage with people across a range of abilities and backgrounds. These modalities include gesture, gaze, posture, and non-verbal utterances.

    For the past few years, my group has been exploring both how to facilitate this synthesis on robots and how humans perceive, understand, and respond to these behaviors. With humanoid robots, we have engaged in several projects that will inform the design of future robot behaviors: 1) Cooperative gestures. People are quicker to process "robot-like" gestures than "human-like" gestures. Also, people who have difficulty processing human gestures also have difficulty processing robot gestures (Riek et al., 2010). 2) Co-speech gestures. We adapted a Stroop-like task in which we paired robot and human gestures with congruent or incongruent spoken descriptors, and varied the voice (human or robot). We found that automatic processing of gestures occurs with human actors, but not robot actors. This suggests some dissonance in the brain when perceiving robot behavior (Hayes, Crowell, and Riek, 2013).

    Recently, we have explored how robots can be trained to perceive human behavioral metrics in learning from demonstration settings, as well as synthesize them, in order to facilitate implicit learning (Hayes, 2015; Hayes and Riek, 2014). We plan to build on this work to design learning policies that enable more robust robot learning from and communication with people.
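    As a minimal sketch of the learning-from-demonstration setting itself (not the methods in the papers above), the code below fits a simple linear policy from demonstrated state-action pairs; the features and data are invented for illustration.

        # Minimal behavior-cloning sketch: fit a linear policy mapping observed
        # states to demonstrated actions. Features and data are invented.
        import numpy as np

        # Each row: [distance_to_goal, human_gaze_on_robot]; action: approach speed.
        demo_states  = np.array([[2.0, 1.0], [1.0, 1.0], [0.5, 0.0], [0.2, 0.0]])
        demo_actions = np.array([0.6, 0.4, 0.1, 0.0])

        # Least-squares linear policy: action ~= state @ w
        w, *_ = np.linalg.lstsq(demo_states, demo_actions, rcond=None)

        new_state = np.array([1.5, 1.0])
        print("predicted approach speed:", float(new_state @ w))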



    Selected publications:




    Robot Ethics


    At just about every robotics conference I've attended for the past 18 years, the question of ethics has arisen. Roboticists ponder these questions deeply, as we make systems that can do physical things in the physical world. Now that robots are starting to be around everyday people, a whole new range of ethical, legal, and normative issues begins to arise.

    While autonomous ethical reasoning for robots is likely an NP-hard problem (Riek, 2013), designing ethical guidelines for the robotics community is quite possible. I have been fortunate to have amazing collaborators in philosophy, law, and cognitive science, and have started to explore these questions in more depth (Riek and Howard, 2014; Riek et al., 2015; IEEE, 2017).

    In terms of research in this area, I am particularly interested in how we can employ ethics-centered design practices to empower people with disabilities to co-design future assistive technology systems. I am also very interested in general professional ethics for the community, tech privacy, and law.



    Selected publications: