Jacolien van Rij (Groningen University) & Juhani Järvikivi (University of Alberta)
The Visual World Paradigm step-by-step – Eye-tracking in spoken language research
This course provides an introduction to the Visual World Paradigm (VWP), an eye-tracking method for studying spoken language processing. In the VWP, participants view pictures, scenes, or videos on a screen while listening to spoken stimuli, which makes the method well suited to investigating lexical, sentence, and discourse processing; the interaction between visual and linguistic information; and, in particular, online comprehension in young children and other populations.
The course offers a practical overview of the method, combining conceptual and hands-on training (readings, introductory lectures on the assumptions underlying the method, and, above all, directed hands-on work). The course will consist of the following parts:
- On Monday, we will provide an introduction to the VWP, including the relevant eye-tracking measures.
- Tuesday will be hands-on, with an introduction to operating an eye-tracker: we will collect Visual World data using an actual experiment. Please note that data collection will take place in groups on Tuesday during lunch and after 4:15 pm (lunch will be provided). The data collected on Tuesday will be used in the remaining sessions.
- On Wednesday, we will examine and discuss how to get the eye-tracking data ready for analysis: exporting sample or fixation reports and loading the data into R.
- On Thursday, we will learn about data visualization.
- The last day will concentrate on preprocessing, inspecting, and cleaning the data for analysis. If you have your own data, you are invited to bring it to class.
Schedule:
Monday:    Read Salverda & Tanenhaus (2017)*; introduction to eye tracking and the VWP
Tuesday:   Read Porretta et al. (2017)*; operating the eye-tracker**
Wednesday: Getting the data: exporting sample or fixation reports, loading data in R
Thursday:  Visualization of the results
Friday:    Finish visualization assignment (in groups); preprocessing and cleaning data***
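The Wednesday step of loading an exported report into R can be sketched as follows. This is a minimal, self-contained example: the column names and values are placeholders standing in for whatever your eye-tracker's sample or fixation report actually contains; in practice you would point read.delim() directly at the exported file.

```r
# Sketch: loading a tab-delimited fixation report into R.
# We first write a tiny mock report to a temporary file so the
# example runs on its own; real reports come from the tracker software.
mock <- c("Subject\tTrial\tTime\tFixation",
          "s1\t1\t0\ttarget",
          "s1\t1\t50\tdistractor")
f <- tempfile(fileext = ".txt")
writeLines(mock, f)

# read.delim() reads tab-separated files with a header row by default.
dat <- read.delim(f, stringsAsFactors = FALSE)

# Inspect the structure before any preprocessing or cleaning.
str(dat)
head(dat)
```

For comma-separated reports, read.csv() works the same way; once loaded, the data frame can be passed on to the preprocessing steps covered later in the week.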
* The papers to read will be provided as PDFs upon signing up for the course (or you can find them online or at your library).
** On Tuesday, all course participants will take part in actual data collection. Data collection takes place during lunch time (11:45–13:00; lunch will be provided) and after the lectures (after 16:15).
*** This lab session is based on real eye-tracking data, so if you have your own eye-tracking data that you would like to look at, we encourage you to bring it with you (format: txt / plain-text file, csv file, or R object).
References & Resources:
*Porretta, V., Kyröläinen, A., van Rij, J., & Järvikivi, J. (2018). Visual world paradigm data: From preprocessing to nonlinear time-course analysis. In I. Czarnowski, R. J. Howlett, & L. C. Jain (Eds.), Intelligent Decision Technologies 2017: Proceedings of the 9th KES International Conference on Intelligent Decision Technologies (KES-IDT 2017) – Part II (Smart Innovation, Systems and Technologies 73, pp. 268–277). Cham: Springer International Publishing. ISBN: 978-3-319-59424-8. DOI: 10.1007/978-3-319-59424-8_25.
*Salverda, A. P., & Tanenhaus, M. K. (2017). Chapter 5 – The Visual World Paradigm. In A. M. B. De Groot & P. Hagoort (Eds.), Research Methods in Psycholinguistics and the Neurobiology of Language: A Practical Guide. Oxford: Wiley.
Porretta, V., Kyröläinen, A., van Rij, J., & Järvikivi, J. (2017). VWPre: Tools for preprocessing visual world data. R package version 1.0.1. URL: https://CRAN.R-project.org/package=VWPre.