
Oral Candidacy - Nigel Bosch

Start: 4/28/2016 at 1:00 PM
End: 4/28/2016 at 3:30 PM
Location: 258 Fitzpatrick Hall
Attendees: Faculty and students are welcome to attend the presentation portion of the defense.

Oral Candidacy

Nigel Bosch

April 28, 2016

1:00 pm

258 Fitzpatrick Hall

Adviser: Dr. Sidney D’Mello

Committee Members:

Dr. Patrick Flynn, Dr. Ron Metoyer, Dr. Aaron Striegel

Title:
Automatic Face-based Engagement Detection for Education

Abstract

Engagement is complex and multifaceted, yet crucial to learning. One context in which engagement frequently plays an important role is computerized learning environments, which can provide a superior learning experience for students by automatically detecting student engagement (and thus also disengagement), adapting to it, and providing better feedback and evaluations to teachers and students. However, much research is still needed to realize this goal, and the most fundamental requirement is effective engagement detection. This proposal provides an overview of several completed studies that used facial features to automatically detect student engagement, and proposes new methods to improve results and answer new research questions. Several aspects of engagement detection are discussed, including affective, cognitive, and behavioral components.
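
As a concrete illustration of the detection step described above, the following is a minimal sketch of training and evaluating a face-based engagement classifier. The feature matrix, labels, and classifier choice are stand-in assumptions for illustration, not the methods used in the studies.

```python
# Minimal sketch: classify clips as engaged vs. disengaged from facial
# features. All data below are randomly generated stand-ins; in practice
# the features would come from a facial feature extractor.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))    # 200 clips x 16 facial features (hypothetical)
y = rng.integers(0, 2, size=200)  # 1 = engaged, 0 = disengaged (hypothetical)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Mean AUC across folds: {scores.mean():.2f}")
```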

Completed work demonstrated effective engagement detection in multiple learning domains and environments. Studies in laboratory environments illustrated the efficacy of several types of facial features for engagement detection, including a novel application of face-based heart rate detection. Engagement was also detected for the first time in several learning domains, including essay writing, computer programming, and illustrated textbook reading. Each of these domains has its own challenges; for example, textbook reading is non-interactive and unlikely to trigger discriminative facial expressions at predictable times. A computer programming novice, on the other hand, is more likely to display cognitive and affective components of engagement at predictable times (e.g., confusion or frustration following an unexpected syntax error). Methods were tailored to each of these domains to address their unique challenges. Engagement detection in a computer-enabled school classroom environment was also considered. The classroom presents challenges beyond those of the laboratory, owing to data noise introduced by talking, gesturing, and distractions such as cell phones and other students. Engagement detection methods were effective in many instances despite these distractions, and robustness in such environments was further improved by incorporating data from interaction log files in addition to facial features.
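
The log-file fusion mentioned at the end of this paragraph could look roughly like the sketch below, which concatenates per-clip facial features with interaction log features before classification (early fusion). All data, dimensions, and the fusion strategy here are hypothetical assumptions, not the proposal's exact method.

```python
# Sketch of early fusion: combine facial features with interaction
# log-file features (e.g., response times) into one feature vector.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fuse_features(facial: np.ndarray, logs: np.ndarray) -> np.ndarray:
    """Concatenate per-clip facial and log-file features column-wise."""
    return np.hstack([facial, logs])

rng = np.random.default_rng(1)
facial = rng.random((100, 12))    # hypothetical facial features
logs = rng.random((100, 5))       # hypothetical log-file features
y = rng.integers(0, 2, size=100)  # hypothetical engagement labels

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(fuse_features(facial, logs), y)
```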

While completed work primarily concerned affective and behavioral components of engagement, proposed work focuses on mind wandering (MW), a type of cognitive disengagement in which students’ thoughts drift away from the learning task toward internal, self-generated content. A dataset of MW (disengaged) and non-MW (engaged) video clips has been developed from students’ self-reports of MW, and can be used to answer key questions about engagement and MW detection. Computer vision and machine learning methods will be used to classify video clips based on students’ facial features. Several types of facial features will be extracted, capturing facial muscle movements, upper body movement, temporal dynamics of features, co-occurring facial movements, skin textures, and edges of facial features. Individual models for each feature type will be developed to answer how MW manifests on students’ faces and which methods are most effective for detecting it. Human annotators will also be compared to automated methods to determine how well humans recognize MW in other people and how automated methods compare to human annotations. Automated detectors will then be improved by incorporating third-party human annotations of MW using several techniques: features will be engineered to capture facial expressions noted by observers and researchers, and training instances that were exceptionally well- or poorly-classified by observers will be re-weighted during automatic detector training. Finally, implications of previous results and proposed work are discussed.
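
The instance re-weighting idea at the end of this paragraph could be sketched as follows, where each training instance's weight is adjusted by how accurately human observers classified it. The specific weighting scheme and all data below are hypothetical illustrations; the proposal's actual technique may differ.

```python
# Sketch of annotation-informed re-weighting: instances that human
# observers classified well get larger training weights.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.random((150, 10))           # hypothetical facial-feature vectors
y = rng.integers(0, 2, size=150)    # 1 = mind wandering, 0 = not (hypothetical)
observer_acc = rng.random(150)      # per-instance human-annotation accuracy (stand-in)

# One plausible scheme: weights range from 0.5 (humans at chance)
# to 1.5 (humans always correct).
weights = 0.5 + observer_acc

clf = SVC(kernel="rbf")
clf.fit(X, y, sample_weight=weights)
```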
