The loss of verbal communication is one of the most devastating effects of injury and disease. In many cases, however, the loss results from damage not to the areas of the brain directly responsible for producing speech, but to the conduit between those areas and speech itself: the brain stem, the nerves of the face, or the organs of speech (tongue, lips, jaw, larynx). In such cases, including brain-stem stroke and amyotrophic lateral sclerosis (ALS), it is in theory possible to restore speech by monitoring the activity of brain cells and then “decoding” this activity into the words that were intended to be spoken.
Prof. Joseph Makin’s group has recently advanced the state of the art in neural speech decoding by applying ideas from machine translation and automatic speech recognition, in particular modern neural-network architectures like sequence-to-sequence models.
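For readers less familiar with this class of models, the short PyTorch sketch below illustrates the general idea: an encoder network reads a sequence of neural recordings (here, hypothetical ECoG features) and a decoder emits a sequence of word tokens. All names, dimensions, and architectural choices are illustrative assumptions, not the architecture used in the work described in this talk.

    import torch
    import torch.nn as nn

    class Seq2SeqSpeechDecoder(nn.Module):
        """Minimal encoder-decoder: neural-activity sequences in, word tokens out."""

        def __init__(self, n_channels=256, hidden=512, vocab_size=2000):
            super().__init__()
            # Encoder: summarize the neural feature sequence into a hidden state.
            self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
            # Decoder: emit one word token at a time, seeded by the encoder state.
            self.embed = nn.Embedding(vocab_size, hidden)
            self.decoder = nn.GRU(hidden, hidden, batch_first=True)
            self.readout = nn.Linear(hidden, vocab_size)

        def forward(self, neural, tokens):
            # neural: (batch, time, channels); tokens: (batch, words)
            _, state = self.encoder(neural)        # final state summarizes the trial
            out, _ = self.decoder(self.embed(tokens), state)
            return self.readout(out)               # (batch, words, vocab) logits

    # Toy usage: one 2-second trial at 200 Hz, a 5-word target sentence.
    model = Seq2SeqSpeechDecoder()
    logits = model(torch.randn(1, 400, 256), torch.randint(0, 2000, (1, 5)))
    print(logits.shape)  # torch.Size([1, 5, 2000])

The key design choice, borrowed from machine translation, is that the input and output sequences need not be aligned or of equal length: the encoder compresses an entire trial of neural activity before the decoder begins producing words.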
In this talk, he will review his group’s recent work in this area, both in the sub-chronic setting (in otherwise healthy patients with epilepsy) and in an ongoing clinical trial. He will conclude with a discussion of some of the group’s current work, and the challenges that remain before a neural speech prosthesis can be deployed outside the clinic.
Joseph Makin is an Assistant Professor in the School of Electrical and Computer Engineering at Purdue University. He trained in neuroscience at UCSF (2010-2020), before which he received his Ph.D. in EECS from U.C. Berkeley (2008) and his B.S./B.A. from Swarthmore College (2003, engineering/philosophy). Prof. Makin conducts research in computational neuroscience, focusing on brain-machine interfaces, neural representations of uncertainty, and computational models of inference and learning in the cortex.