Projects

Research Themes and Interests

My research focuses on creating robotic systems that can work safely and efficiently around and with humans. I approach this goal by designing and evaluating techniques that help robots better understand and predict human behavior. I believe that accurate inference of human intent and sound modeling of future human behavior will make planning systems more robust and adaptive to the dynamic environments that arise when interacting with humans. Below are brief summaries of my current research and past projects that work toward this goal.

Inferring Others’ Beliefs About You

Currently, I am working on methods for an agent (A1) to infer other agents’ mental models of A1. While frameworks such as I-POMDPs exist for incorporating beliefs about other agents’ beliefs into reward-based planning systems, my goal is to create techniques for observational inference that produce an explicit model of other agents’ expectations of A1. A1 can then use this explicit model to better understand the other agents’ past actions and to predict their future actions as they plan around A1. A1 can also use the model to select actions that correct other agents’ mental models, for example by deliberately acting in ways that violate their current expectations. I believe this work will help create robotic agents that are better able to understand and predict human actions (creating smoother co-located interaction) and better able to restore or calibrate human trust by correcting inaccurate human mental models of the robot.
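
As a rough illustration of the kind of observational inference described above, the sketch below maintains a posterior over which mental model of A1 another agent holds and updates it from that agent’s observed actions. The candidate models, actions, and likelihood values are purely hypothetical placeholders, not part of my actual system.

```python
import numpy as np

# Hypotheses about what the other agent currently believes A1 will do.
CANDIDATE_MODELS = ["A1_heads_left", "A1_heads_right"]

# Assumed likelihoods P(other agent's action | its model of A1); illustrative numbers only.
LIKELIHOOD = {
    "A1_heads_left":  {"yield": 0.8, "proceed": 0.2},
    "A1_heads_right": {"yield": 0.3, "proceed": 0.7},
}

def update_posterior(prior, observed_action):
    """One Bayesian update of P(other agent's model of A1 | its observed action)."""
    unnormalized = np.array([prior[i] * LIKELIHOOD[m][observed_action]
                             for i, m in enumerate(CANDIDATE_MODELS)])
    return unnormalized / unnormalized.sum()

posterior = np.array([0.5, 0.5])  # uniform prior over the other agent's model of A1
for action in ["yield", "yield", "proceed"]:
    posterior = update_posterior(posterior, action)
    print(dict(zip(CANDIDATE_MODELS, posterior.round(3))))
```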

If you are interested in hearing updates on this work or would like to collaborate on related ideas, please contact me!

Active Learning in Shared Autonomy

I have worked on designing shared autonomy systems that apply entropy-minimizing techniques from inverse reinforcement learning to learn about operator goals more efficiently from operator inputs. I created a framework for combining goal-oriented assistance actions with information-gathering actions and evaluated this framework in a user study with a physical robot. I plan to extend this work by investigating how the framework’s usefulness scales across different task types and levels of task complexity.
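
As a generic illustration of the entropy-minimizing idea (not the exact framework from the paper), the sketch below scores candidate robot actions by the expected posterior entropy over operator goals after observing the operator’s response, then picks the most informative one. The goal set, response model, and numbers are all made up.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_posterior_entropy(belief, action, response_model):
    """Expected entropy over operator goals after observing the operator's response
    to a candidate robot action. response_model[g, a, r] is an assumed probability
    of operator response r given goal g and robot action a."""
    total = 0.0
    for r in range(response_model.shape[2]):
        likelihood = response_model[:, action, r]   # P(r | goal, action) for every goal
        p_r = float(np.dot(belief, likelihood))     # marginal P(r)
        if p_r == 0.0:
            continue
        posterior = belief * likelihood / p_r       # Bayes update of the goal belief
        total += p_r * entropy(posterior)
    return total

# Toy setup (all values made up): 3 candidate goals, 2 robot actions, 2 operator responses.
rng = np.random.default_rng(0)
response_model = rng.dirichlet(np.ones(2), size=(3, 2))  # shape (goals, actions, responses)
belief = np.array([0.5, 0.3, 0.2])                       # current belief over operator goals

# Choose the action expected to reduce uncertainty about the operator's goal the most.
scores = [expected_posterior_entropy(belief, a, response_model) for a in range(2)]
print("expected posterior entropies:", np.round(scores, 3), "-> choose action", int(np.argmin(scores)))
```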

Papers:
One paper from this project has been accepted for publication at HRI 2019; once it has been published I will link to it here.

Activity Recognition for Proactive Assistance

This project investigated RNN-based activity recognition of generic subtasks that occur across different assembly tasks, such as fastening or searching for parts. I conducted a data collection study in which participants completed an assembly task with the help of a human assistant, then used coded recordings of these sessions to train a multimodal RNN to recognize the different subtasks. I then ran a second study in which participants completed a different assembly task with the help of a robot arm. The robotic system used the trained RNN to recognize subtasks similar to those that occurred in the data collection task, and used this information to preemptively hand over the parts it predicted would be needed next. I collected and analyzed both objective classification-accuracy results and subjective participant feedback about the helpfulness of this preemptive assistance.
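
The sketch below shows one generic way a multimodal subtask classifier of this kind might be structured in PyTorch; the modalities, feature dimensions, and number of subtask labels are placeholder assumptions rather than the architecture from the paper.

```python
import torch
import torch.nn as nn

class SubtaskRNN(nn.Module):
    """Illustrative multimodal RNN subtask classifier: per-frame features from each
    modality are concatenated, passed through an LSTM, and the final hidden state is
    classified into a generic subtask label. All dimensions are assumptions."""

    def __init__(self, gaze_dim=4, pose_dim=12, audio_dim=13,
                 hidden_dim=64, num_subtasks=5):
        super().__init__()
        input_dim = gaze_dim + pose_dim + audio_dim
        self.rnn = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_subtasks)

    def forward(self, gaze, pose, audio):
        # Each input: (batch, time, feature_dim); fuse modalities per time step.
        x = torch.cat([gaze, pose, audio], dim=-1)
        _, (h_n, _) = self.rnn(x)
        return self.classifier(h_n[-1])  # logits over subtask labels

# Example forward pass on random data (batch of 2 clips, 30 frames each).
model = SubtaskRNN()
logits = model(torch.randn(2, 30, 4), torch.randn(2, 30, 12), torch.randn(2, 30, 13))
print(logits.shape)  # torch.Size([2, 5])
```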

Papers: 
C. Brooks, M. Atreya, and D. Szafir. “Proactive Robot Assistants for Freeform Collaborative Tasks through Multimodal Recognition of Generic Subtasks.” 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018). IEEE, 2018.