The course website is available at:
This course offers a perspective on phonological theory focused on computation. Questions addressed include the following: What are the formal properties of various phonological frameworks? What statistics underlie probabilistic phonological models? What are the typological predictions of categorical and probabilistic phonological models? How should the problems of learning, production, and interpretation be properly formulated within these various phonological models? To what extent do these computational problems admit provably efficient and correct solution algorithms? How do these algorithms depend on architectural properties versus phonological substance? Methods explored include formal language theory, statistics, machine learning, and convex geometry.
Rather than providing a broad overview of the field, this course focuses each year on a few specific related topics. The 2018 installment of the course will focus on two topics: typological structure in probabilistic constraint-based phonology (in particular, MaxEnt grammars); and deep neural networks in phonological theory (with an eye on their formal foundations). The basic idea is that constraint-based phonology grew out of the connectionist literature of the eighties. We thus want to understand what is new in the recent revival of neural networks and how it might impact phonology. This issue might be particularly relevant for the probabilistic strand of phonology that has kept the field busy in recent years.
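To make the notion of a MaxEnt grammar concrete, here is a minimal sketch of the standard formulation (following Goldwater and Johnson's MaxEnt reinterpretation of Harmonic Grammar): each candidate output receives a harmony equal to the negative weighted sum of its constraint violations, and candidate probabilities are obtained by exponentiating and normalizing. The constraint names, violation counts, and weights below are hypothetical toy values chosen purely for illustration.

```python
import math

def maxent_probs(violations, weights):
    """Probability of each candidate under a MaxEnt grammar:
    P(candidate) is proportional to exp(-sum_i w_i * v_i),
    where v_i is the candidate's violation count for constraint i."""
    harmonies = [-sum(w * v for w, v in zip(weights, vs)) for vs in violations]
    z = sum(math.exp(h) for h in harmonies)  # normalizing constant
    return [math.exp(h) / z for h in harmonies]

# Hypothetical tableau: two candidates, two constraints.
violations = [
    [1, 0],  # candidate A: one violation of constraint 1
    [0, 2],  # candidate B: two violations of constraint 2
]
weights = [2.0, 0.5]  # nonnegative constraint weights (made up)
probs = maxent_probs(violations, weights)
# Candidate A has harmony -2.0, candidate B has harmony -1.0,
# so candidate B receives the larger share of probability.
```

Raising a constraint's weight toward infinity makes the grammar approach categorical, strict-domination behavior for that constraint, which is one way the typological questions above connect the probabilistic and categorical frameworks.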