Probabilistic Language Models: Human versus Machine

Guest Lecture

December 19, 2014
2:00pm to 3:00pm
190 Doe Library

Attempts to build software that can process natural language have demonstrated the need for high-quality probabilistic models of word co-occurrence, and extensive effort has been devoted to building such models. Humans, meanwhile, need similar models. We examine the predictions made by humans and find that many of the "hacks" used in engineered models appear to have analogues in neural processing.
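As an illustration of the kind of engineered model and "hack" the abstract alludes to (this example is not from the talk itself), the sketch below shows a bigram word co-occurrence model with add-k smoothing, one classic fix for assigning nonzero probability to unseen word pairs. All function names and the toy corpus are illustrative.

```python
# Minimal sketch (not from the talk): a bigram word co-occurrence model with
# add-k smoothing, one classic engineering "hack" for unseen word pairs.
from collections import Counter

def train_bigram_counts(sentences):
    """Count unigrams and bigrams over tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    for tokens in sentences:
        padded = ["<s>"] + tokens + ["</s>"]
        unigrams.update(padded)
        bigrams.update(zip(padded, padded[1:]))
    return unigrams, bigrams

def bigram_prob(word, prev, unigrams, bigrams, k=1.0):
    """P(word | prev) with add-k smoothing so unseen pairs get nonzero mass."""
    vocab_size = len(unigrams)
    return (bigrams[(prev, word)] + k) / (unigrams[prev] + k * vocab_size)

# Example: probability of "language" following "natural" in a toy corpus.
corpus = [["humans", "process", "natural", "language"],
          ["machines", "process", "natural", "language", "too"]]
uni, bi = train_bigram_counts(corpus)
print(bigram_prob("language", "natural", uni, bi))
```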

Speaker(s)

Nathaniel J. Smith

Nathaniel J. Smith is a postdoctoral scholar in informatics at the University of Edinburgh. He has broad interests in the cognitive mechanisms that allow language to be used in real time and to interact in a fine-grained, flexible, and non-modular way with non-linguistic cognition and action. He addresses these questions through a variety of methods, including designed experiments and corpus studies using eye-tracking, self-paced reading, EEG, and discourse analysis. He is also a NumPy core developer and the creator of PEP 465 (which gives Python an infix matrix multiplication operator), the Patsy library for feature preprocessing, and the ZS file format.