Several major innovations in artificial intelligence (AI), such as convolutional neural networks and experience replay, are based on findings about the brain. However, the underlying brain findings took many years to consolidate and many more to transfer to AI. Moreover, these findings were made using invasive methods in non-human species.
For cognitive functions that are uniquely human, such as natural language processing, there is no suitable model organism, and a mechanistic understanding is that much farther away.
In this talk, Ph.D. candidate Mariya Toneva will discuss two works that circumvent these limitations by establishing a direct connection between the human brain and AI systems, with two main goals: 1) to improve the generalization performance of AI systems and 2) to advance our mechanistic understanding of cognitive functions.
She will also discuss future directions that build on these approaches to investigate the role of memory in meaning composition, both in the brain and in AI. This investigation aims to yield methods applicable to a wide range of AI domains in which it is important to adapt to new data distributions, continually learn new tasks, and learn from few samples.
Mariya Toneva is a Ph.D. candidate in a joint program in Machine Learning and Neural Computation at Carnegie Mellon University, where she is advised by Tom Mitchell and Leila Wehbe. She received a B.S. in Computer Science and Cognitive Science from Yale University. Her research is at the intersection of Artificial Intelligence, Machine Learning, and Neuroscience. Mariya works on bridging language in machines with language in the brain, with a focus on building computational models of language processing in the brain that can also improve natural language processing systems.
Contact Ginny Watterson for the Zoom link.