Monday May 17, Tuesday May 18, Thursday May 20, Friday May 21
8:00am – 10:00am MDT (GMT-6)
Vector space morphology is a tool for studying word structure and lexical processing under the assumption that both the forms of words and their meanings can be represented as points in a high-dimensional space. Under this assumption, simple mappings between forms and meanings can be set up: for comprehension, form vectors predict meaning vectors; for production, meaning vectors map onto form vectors. These mappings can be learned incrementally, approximating how children learn the words of their language. Alternatively, the optimal mappings representing the endstate of learning can be estimated directly. In both cases, learning is discriminative and driven by prediction error, and it is this error that calibrates the association strengths between input and output representations. Since the algorithms we have been exploring mathematically implement multivariate multiple regression, vector space morphology provides a cognitively motivated statistical modeling approach to the mental lexicon.

The first session introduces the general concepts motivating a discriminative learning approach to the mental lexicon, and presents the core mathematical ideas by means of both toy examples and examples from larger datasets. The second session discusses the modeling of auditory comprehension and the predictions the model generates for speech production. The third session examines whether the model yields useful predictions for understanding lexical processing, and whether it predicts the priming effects that are typically taken as evidence for morphemes. The final session concludes with two case studies, one addressing the meaning of nonwords, the other the modeling of regular and irregular verbs in aphasic English speech. Each session is accompanied by a hands-on session with worked examples of how to use the modeling toolkit to address linguistic and psycholinguistic data.
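The two estimation routes mentioned above can be illustrated with a few lines of code. The sketch below is not the course toolkit; it is a minimal numpy illustration with made-up form and meaning vectors, showing (a) the endstate comprehension mapping obtained as the multivariate regression solution via the pseudoinverse, and (b) incremental error-driven (Widrow-Hoff) learning of the same mapping, where prediction error calibrates the association weights trial by trial.

```python
import numpy as np

# Toy data (invented for illustration): three words, form vectors over
# four cues (matrix C, words x cues) and meaning vectors over three
# semantic dimensions (matrix S, words x dimensions).
C = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.]])
S = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])

# Endstate of learning: the optimal comprehension mapping F (forms to
# meanings) solves S = C F in the least-squares sense, i.e. multivariate
# multiple regression, computed here with the Moore-Penrose pseudoinverse.
F_endstate = np.linalg.pinv(C) @ S

# Incremental learning: delta-rule (Widrow-Hoff) updates. On each learning
# event, the weights are adjusted in proportion to the prediction error.
F = np.zeros((4, 3))
rate = 0.1
rng = np.random.default_rng(0)
for _ in range(2000):
    i = rng.integers(len(C))       # a randomly sampled learning event
    c, s = C[i:i + 1], S[i:i + 1]
    error = s - c @ F              # prediction error for this word
    F += rate * c.T @ error        # error calibrates association strength

# After many trials, the incremental weights approximate the endstate.
print(np.allclose(F, F_endstate, atol=0.05))
```

With small learning rates and enough trials, the incremental weights converge toward the regression endstate, which is why the endstate mapping can serve as a convenient stand-in for the outcome of incremental learning.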
Baayen, R. H., Chuang, Y. Y., Shafaei-Bajestan, E., and Blevins, J. P. (2019). The discriminative lexicon: A unified computational model for the lexicon and lexical processing in comprehension and production grounded not in (de)composition but in linear discriminative learning. Complexity, 2019, 1-39. http://www.sfs.uni-tuebingen.de/~hbaayen/publications/baayenChuangShafaeiBlevins.pdf
Baayen, R. H., and Smolka, E. (2020). Modeling morphological priming in German with naive discriminative learning. Frontiers in Communication, section Language Sciences, 1-40. http://www.sfs.uni-tuebingen.de/~hbaayen/publications/baayensmolka2019.pdf
Chuang, Y.-Y., Bell, M. J., Banke, I., and Baayen, R. H. (2020). Bilingual and multilingual mental lexicon: A modeling study with Linear Discriminative Learning. Language Learning, 1-73. http://www.sfs.uni-tuebingen.de/~hbaayen/publications/ChuangEtAl2020.pdf
Chuang, Y.-Y., Lõo, K., Blevins, J. P., and Baayen, R. H. (2020). Estonian case inflection made simple. A case study in Word and Paradigm morphology with Linear Discriminative Learning. In Körtvélyessy, L., and Štekauer, P. (Eds.) Complex Words: Advances in Morphology, 119–14. http://www.sfs.uni-tuebingen.de/~hbaayen/publications/ChuangEtAlECI2019.pdf
Chuang, Y.-Y., Vollmer, M.-L., Shafaei-Bajestan, E., Gahl, S., Hendrix, P., and Baayen, R. H. (2020). The processing of pseudoword form and meaning in production and comprehension: A computational modeling approach using Linear Discriminative Learning. Behavior Research Methods, 1-51. http://www.sfs.uni-tuebingen.de/~hbaayen/publications/ChuangVollmerEtAl2020.pdf