
Oral Candidacy - Nate Blanchard

Start: 5/11/2017 at 2:00PM
End: 5/11/2017 at 5:00PM
Location: 100 Stinson Remick
Attendees: Faculty and staff are welcome to attend the presentation portion of the defense.

Adviser:  Dr. Walter Scheirer
Committee:
Dr. Kevin Bowyer, Dr. Christopher Forstall, Dr. Sean Kelly

Title:

Machine Learning Model Search Guided by Internal Representation

Abstract: 

Traditionally, when a machine learning model is evaluated on its generalizability, the evaluation uses metrics of correctness, such as accuracy, Cohen’s kappa, or precision/recall. However, this approach is inherently flawed because it assumes that instances fit neatly into discrete classes, and thus disregards instance ambiguity (e.g., classifying a bread bowl as either bread or a bowl). I propose a new evaluation technique that identifies generalizable models by comparing model activations with instance similarity.
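
To make the critique concrete, the sketch below shows the correctness-only evaluation the abstract argues against; the labels and predictions are invented for illustration, and scikit-learn is assumed only as a convenient way to compute the named metrics.

```python
# Minimal sketch of a correctness-only evaluation: every instance is forced
# into a single discrete class, and the model is scored only on agreement.
# y_true and y_pred are hypothetical arrays used purely for illustration.
from sklearn.metrics import accuracy_score, cohen_kappa_score, precision_score, recall_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]   # assigned ground-truth classes
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]   # model predictions

print("accuracy ", accuracy_score(y_true, y_pred))
print("kappa    ", cohen_kappa_score(y_true, y_pred))
print("precision", precision_score(y_true, y_pred))
print("recall   ", recall_score(y_true, y_pred))
# An ambiguous instance (the "bread bowl") can only count as right or wrong
# here; none of these scores reflects how reasonable a borderline call was.
```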

In this proposal, I present two machine-learned models, evaluate them, and discuss issues with those evaluations. The first model is trained to detect learners’ mind wandering while reading, and the second identifies teachers’ questions from live classroom recordings of their speech. Next, I propose a new evaluation technique based on the psychological theory of internal representation, which suggests the human brain classifies instances by comparing them against its internal class representations. Internal representation theory holds that neural activation similarity corresponds to instance similarity. I propose that by identifying machine learning models that share this property, we can identify models that are more “brain-like” and will thus mirror the broad generalization abilities of human brains.
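
One way to operationalize "activation similarity corresponds to instance similarity" is to correlate pairwise activation dissimilarities with pairwise instance dissimilarities, in the spirit of representational similarity analysis. The sketch below is an assumption about how such a score could be computed, not the proposal's specified method; the arrays and the particular similarity measures are placeholders.

```python
# A minimal, hypothetical sketch: correlate pairwise activation dissimilarity
# with pairwise instance dissimilarity over the same set of instances.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
activations = rng.normal(size=(50, 128))       # hidden-layer activations, one row per instance
instance_features = rng.normal(size=(50, 10))  # ground-truth descriptors of the same instances

# Condensed pairwise dissimilarity vectors over the same instance ordering.
act_dist = pdist(activations, metric="cosine")
inst_dist = pdist(instance_features, metric="euclidean")

# A model is more "brain-like" when the two rankings of pairwise
# dissimilarities agree; Spearman correlation is one way to score that.
rho, _ = spearmanr(act_dist, inst_dist)
print("activation/instance similarity agreement:", rho)
```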

In order to explore this possibility, I will perform two experiments that use comparisons of model activations to identify generalizable models. The initial experiment will constrain and simplify the problem by using procedural graphics to control instance similarity, confirming that this evaluation is effective. However, this evaluation technique must be applicable outside of the vision domain. Thus, in a follow-up experiment, I will explore metrics to quantify instance similarity from unconstrained real-world text data.
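
Procedural generation makes instance similarity controllable because every instance is fully described by the parameters that generated it. The sketch below uses a hypothetical parameterization (shape, scale, rotation) standing in for whatever the actual rendering pipeline would use; it is illustrative only.

```python
# Hypothetical sketch: each synthetic instance is defined by generation
# parameters, so a ground-truth similarity can be computed directly on them.
import numpy as np

rng = np.random.default_rng(1)

def make_instance():
    """Parameters that would drive a procedural 3D renderer (hypothetical)."""
    return {
        "shape": rng.integers(0, 3),      # e.g. cube / sphere / cone
        "scale": rng.uniform(0.5, 2.0),
        "rotation": rng.uniform(0, 360),
    }

def instance_similarity(a, b):
    """Known similarity, computable because generation is controlled."""
    same_shape = 1.0 if a["shape"] == b["shape"] else 0.0
    scale_term = 1.0 - abs(a["scale"] - b["scale"]) / 1.5
    rot_term = 1.0 - abs(a["rotation"] - b["rotation"]) / 360.0
    return (same_shape + scale_term + rot_term) / 3.0

a, b = make_instance(), make_instance()
print("controlled instance similarity:", instance_similarity(a, b))
```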

In the first experiment, models will be trained and evaluated on images of synthetic 3D objects. This experiment will focus on the feasibility of intelligently searching hyperparameter space to identify generalizable models, and will confirm that the identified models perform well.
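
A minimal sketch of what a search over hyperparameter space guided by activation/instance-similarity agreement could look like appears below. The data, the search space (hidden-layer width only), and the scoring choices are illustrative assumptions, not the experiment's actual design.

```python
# Hypothetical sketch: train each candidate, extract hidden activations,
# and keep the candidate whose activation geometry best tracks instance
# similarity (alongside its ordinary accuracy).
import numpy as np
from sklearn.neural_network import MLPClassifier
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 20))                 # stand-in instances
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # stand-in labels
inst_dist = pdist(X, metric="euclidean")       # instance dissimilarity

def brain_likeness(model, X):
    # First hidden layer recomputed from the fitted weights (ReLU is the
    # MLPClassifier default activation).
    hidden = np.maximum(0, X @ model.coefs_[0] + model.intercepts_[0])
    rho, _ = spearmanr(pdist(hidden, metric="cosine"), inst_dist)
    return rho

best = None
for width in (8, 32, 128):                     # hypothetical search space
    model = MLPClassifier(hidden_layer_sizes=(width,), max_iter=500,
                          random_state=0).fit(X, y)
    score = brain_likeness(model, X)
    print(f"width={width:4d}  accuracy={model.score(X, y):.2f}  agreement={score:.2f}")
    if best is None or score > best[0]:
        best = (score, width)

print("selected hidden width:", best[1])
```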

The second experiment will use real-world transcripts of recorded speech to compare various metrics for text similarity and determine which metric is best for identifying generalizable models in the text classification domain. Visual objects have implicit inter-class similarity, but it is less clear how to measure inter-class similarity within text (e.g., measuring similarity at the word, sentence, or conversation level) in a way that allows comparison with internal representations of meaning. These results will provide quantitative evidence of which similarity metric best identifies generalizable models for text.
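
To make the word-level versus sentence-level distinction concrete, the sketch below contrasts two illustrative candidate metrics: token-set overlap (Jaccard) and cosine similarity over TF-IDF vectors. The sentences are invented examples in the style of teacher questions; the proposal's actual candidate metrics and transcript data are not specified here.

```python
# Hypothetical sketch of two candidate text-similarity metrics.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

a = "what do you notice about the slope of this line"
b = "can anyone tell me what the slope of the line means"

def jaccard(s1, s2):
    """Word-level similarity: overlap of token sets."""
    t1, t2 = set(s1.split()), set(s2.split())
    return len(t1 & t2) / len(t1 | t2)

# Sentence-level similarity: cosine over TF-IDF vectors of the two sentences.
tfidf = TfidfVectorizer().fit_transform([a, b])

print("word-level (Jaccard):", round(jaccard(a, b), 3))
print("sentence-level (TF-IDF cosine):", cosine_similarity(tfidf[0], tfidf[1])[0, 0])
```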

This proposed work will provide an alternative to traditional evaluation measures, which fail to fully capture machine learning model generalizability, and will aid in the identification of generalizable models across a range of domains.