Multi-Modal Learning Analytics

Semester: Spring
We are starting to witness a data deluge in education. Are we sinking beneath a data stream we don’t know how to manage and interpret, or can more data actually help us better understand students and design more compelling learning experiences? One particularly promising development, made possible by the advent of affordable sensing technology, is the emerging field of multi-modal learning analytics (MMLA). MMLA has recently allowed researchers to gain new insights into learning, for instance by studying collaboration between students with synchronized eye-trackers, estimating their cognitive state from Kinect data, or gauging their engagement with emotion detection tools.

In this class we will focus on cutting-edge MMLA methods to collect datasets in various learning environments and analyze them through theoretical frameworks from the learning sciences. More specifically, students will learn to:

(1) collect large datasets in different learning environments (primarily classrooms, but potentially also maker spaces or museums) from sensors such as Arduino-like platforms, motion sensors, or eye-trackers;
(2) interpret those datasets using various kinds of visualizations and data mining techniques;
(3) connect measurements from those sensors to theoretical constructs in the learning sciences;
(4) think critically about data and what it can (and cannot) tell us; and
(5) if time permits, prototype interventions using sensing technology.

Deliverables will include group projects, in-class presentations of weekly readings, and a short final memo.
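To give a flavor of the kind of hands-on work objectives (1) and (2) involve, here is a minimal sketch (not course material) of logging readings from an Arduino-like board over a serial connection and plotting them in Python. The port name, baud rate, and the assumption that the board prints one numeric value per line are illustrative choices, not part of the course description; the sketch relies on the pyserial and matplotlib packages.

```python
# Minimal sketch: collect readings from an Arduino-like sensor over serial
# and visualize them as a time series.
# Assumptions (not from the course description): the board prints one numeric
# value per line, the port is /dev/ttyACM0, and the baud rate is 9600.
import serial                    # pip install pyserial
import matplotlib.pyplot as plt

PORT = "/dev/ttyACM0"   # assumed port; on Windows this might be "COM3"
BAUD = 9600             # assumed baud rate; must match the board's firmware
N_SAMPLES = 200         # number of readings to collect before plotting

readings = []
with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while len(readings) < N_SAMPLES:
        raw = ser.readline().decode(errors="ignore").strip()
        try:
            readings.append(float(raw))   # keep only lines that parse as numbers
        except ValueError:
            continue                      # skip partial or non-numeric lines

# Simple time-series visualization of the collected readings
plt.plot(readings)
plt.xlabel("sample index")
plt.ylabel("sensor reading")
plt.title("Raw readings from an Arduino-like sensor")
plt.show()
```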