
Oral Candidacy - Maryam Moosaei

Start: 4/5/2016 at 2:00PM
End: 4/5/2016 at 5:00PM
Location: 258 Fitzpatrick Hall
Attendees: Faculty and students are welcome to attend the presentation portion of the defense.

Adviser: Dr. Laurel Riek

Committee Members: Dr. Patrick Flynn, Dr. Iolanda Leite, Dr. Ayse Saygin, Dr. Bill Smart, Dr. Chaoli Wang

Title:

“Using Facially Expressive Robots to Increase Realism in Patient Simulation”

Abstract:

Robotics as a field is expanding into many new sectors, such as healthcare, entertainment, education, assisted living, rehabilitation, and transportation. As robots gain a stronger presence in our daily lives, their effect on society grows, opening up new areas of research, including human-robot interaction (HRI), robot ethics, and robot perception.

One area of particular interest is human-robot collaboration (HRC), where humans and robots work together in close proximity. Ideally, robots would be capable partners, able to perform tasks independently and to communicate their intentions to us effectively. To facilitate HRC, it is vital that robots be able to convey their intent during interactions with people.

HRI researchers have explored ways of expressing robot intention by synthesizing behaviors, particularly body movements, that are human-like and therefore more readily understandable. While many robots are highly dexterous and can express intention through a wide range of body movements, some subtle cues, such as confusion, frustration, boredom, and attention, cannot easily be conveyed without at least some facial features.

Indeed, the human face is a rich, spontaneous channel for communicating social and emotional displays, and it plays an important role in human communication. Thus, robot behavior that includes at least some rudimentary, human-like facial expressions can enrich interactions between humans and robots and add to a robot's ability to convey intention. Robots able to convey facial expressions can be useful in several domains.

In my work, I focus on a specific medical application of facially expressive robots: patient simulation. In the United States, clinicians commonly practice on patient simulators before treating real patients in order to learn clinical skills such as communication, assessment of a patient's condition, and procedural technique. Despite the importance of patients' facial expressions in diagnostic decision making, commercially available patient simulators are not equipped with expressive faces. My work focuses on filling this technology gap by applying robotics knowledge to make patient simulators more realistic.

This proposal outlines my work on filling this technology gap in patient simulation; the following research has been undertaken. First, we developed a method for evaluating different facial expression synthesis algorithms on robots and virtual avatars, to ensure that expressions synthesized by our algorithms are perceived as intended. Second, we have been building our own bespoke robotic head with 16 degrees of freedom in its face, able to convey a wide range of facial expressions and pathologies. Third, we developed a ROS-based module for performing facial expression synthesis on a wide range of robots and virtual avatars; this module serves as the control software for our bespoke robotic head. Fourth, while finishing the robot's hardware, we began testing our algorithms in simulation with a virtual avatar as well as another humanoid robot. Fifth, we are developing computational models of different pathologies, such as Bell's palsy and stroke, and we will use these models to develop a shared control system for our bespoke robotic head.