2022_11_11 «AI and ML methods for Multimodal Learning Analytics»
AI and ML methods for Multimodal Learning Analytics
Kshitij Sharma
Department of Computer Science
Norwegian University of Science and Technology, Trondheim
1
ML pipeline (Feature extraction)
• Logs: Reading-writing (R-W) episodes; Use of debugger; Use of variable view
• E4 wristband: mean and SD of BVP (blood volume pulse), TMP (skin temperature), and EDA (electrodermal activity), and the mean of HR (heart rate)
24
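The wristband features above reduce to simple windowed statistics. A minimal sketch in Python (the window length, sampling rate, and example signal are illustrative assumptions, not the pipeline's actual settings):

```python
import numpy as np

def window_features(signal, fs, win_s=10):
    """Mean and SD of a 1-D physiological signal over non-overlapping windows.

    signal : 1-D array (e.g. BVP, skin temperature, or EDA from the E4)
    fs     : sampling rate in Hz
    win_s  : window length in seconds (assumed value, not from the slides)
    """
    n = int(fs * win_s)
    # Trim the tail so the signal splits evenly into windows.
    windows = signal[: len(signal) // n * n].reshape(-1, n)
    return windows.mean(axis=1), windows.std(axis=1)

# Example: 60 s of synthetic EDA sampled at 4 Hz (the E4's EDA rate).
eda = np.random.default_rng(0).normal(0.5, 0.05, 240)
means, sds = window_features(eda, fs=4, win_s=10)
print(means.shape)  # (6,)
```

Concatenating these per-window means and SDs across signals yields one feature vector per window.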
Context 4: Game-based learning
• Learning Context → Motion-based educational games
• 40 Participants
• 30 Minutes of solving mathematics problems
• Multimodal data → eye-tracking, motion, EDA, HRV, system logs
• Outcome → game performance
27
Context 4: Game-based learning
Towards designing an AI agent to support students
28
Context 4: Game-based learning
Most important features for the agent:
• Information processing index
• Cognitive load
• Mean HR
• Grab-match differential
30
Context 5: Collaborative Concept Map
• Learning Context → Video based Learning + Synthesis
• 82 Participants
• 20 Minutes of concept map creation
• Multimodal data → eye-tracking, audio, dialogues, system logs
• Outcome → Collaborative concept map correctness and individual learning gain
31
Context 6: Collaborative ITS
• Learning Context → Intelligent Tutoring Systems
• 50 Participants
• 45 Minutes of concept map creation
• Multimodal data → eye-tracking, audio, dialogues, system logs
• Outcome → Learning gain
32
Generalizability across contexts (individual learning)
Train Using                | Test Using      | NRMSE on test dataset
Pacman, Self-Assessment    | Debugging       | 9.24 (1.6)
Pacman, Debugging          | Self-Assessment | 8.27 (2.1)
Self-Assessment, Debugging | Pacman          | 8.26 (1.9)
Data Used: Facial videos and Wristband data (HRV, EDA)
38
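For reference, the NRMSE reported in these tables can be computed as below. This sketch assumes normalization by the target range; the slides do not state which normalizer (range, mean, or SD) is actually used:

```python
import numpy as np

def nrmse(y_true, y_pred):
    """RMSE normalized by the target range, as a percentage.

    Range normalization is an assumption here; other common choices
    divide by the mean or SD of y_true instead.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100 * rmse / (y_true.max() - y_true.min())

y_true = np.array([10.0, 20.0, 30.0, 40.0])
y_pred = np.array([12.0, 18.0, 33.0, 38.0])
print(round(nrmse(y_true, y_pred), 2))  # 7.64
```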
Generalizability across contexts (individual learning)
Train Using                                   | Test Using        | NRMSE on test dataset
Pacman, Self-Assessment, Debugging            | Motion-based game | 10.94 (1.4)
Pacman, Self-Assessment, Motion-based game    | Debugging         | 9.74 (1.1)
Pacman, Debugging, Motion-based game          | Self-Assessment   | 9.27 (0.9)
Self-Assessment, Debugging, Motion-based game | Pacman            | 10.08 (1.3)
Data Used: Wristband data (HRV, EDA)
Generalizability across contexts (collaborative learning)
Temporal features from eye-tracking measurements (cognitive load, information processing index, entropy, stability, anticipation, fixation durations)
41
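Two of the listed gaze features can be sketched directly. The grid size and the entropy definition below are illustrative assumptions, not necessarily the exact measures used in these studies:

```python
import numpy as np

def gaze_entropy(x, y, bins=8):
    """Shannon entropy of the gaze-position distribution over a spatial grid.

    Higher entropy = gaze spread over more screen regions (a common proxy
    for visual exploration). Coordinates are assumed normalized to [0, 1].
    """
    hist, _, _ = np.histogram2d(x, y, bins=bins, range=[[0, 1], [0, 1]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def fixation_stats(durations_ms):
    """Mean and SD of fixation durations: basic temporal gaze features."""
    d = np.asarray(durations_ms, dtype=float)
    return d.mean(), d.std()

# Example with synthetic uniformly scattered gaze samples.
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)
print(round(gaze_entropy(x, y), 2))
```

Computing such statistics per time window, rather than over a whole session, yields the temporal feature series mentioned above.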
Generalizability across contexts (collaborative learning)
Data Used: Eye-tracking
Train Using                                         | Test Using                | NRMSE on test dataset
All Individual Contexts                             | Collaborative Concept map | 19.89 (5.4)
All Individual Contexts                             | Collaborative ITS         | 21.16 (6.2)
All Individual Contexts + Collaborative ITS         | Collaborative Concept map | 6.7 (1.5)
All Individual Contexts + Collaborative Concept map | Collaborative ITS         | 6.5 (1.2)
42
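The protocol behind these cross-context tables is a leave-one-context-out loop: train on all but one context, test on the held-out one. A minimal sketch with synthetic data and plain least squares as a stand-in for whatever model is actually used (context names, dimensions, and targets here are placeholders):

```python
import numpy as np

def nrmse(y_true, y_pred):
    """RMSE as a percentage of the target range (assumed normalization)."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100 * rmse / (y_true.max() - y_true.min())

# Synthetic stand-in data: 60 samples x 5 features per context.
rng = np.random.default_rng(0)
contexts = {name: (rng.normal(size=(60, 5)), rng.uniform(0, 100, 60))
            for name in ["Pacman", "Self-Assessment", "Debugging", "Motion game"]}

for held_out in contexts:
    # Pool every context except the held-out one for training.
    X_tr = np.vstack([X for n, (X, _) in contexts.items() if n != held_out])
    y_tr = np.hstack([y for n, (_, y) in contexts.items() if n != held_out])
    X_te, y_te = contexts[held_out]
    # Least-squares fit with a bias column; swap in any regressor here.
    w, *_ = np.linalg.lstsq(np.c_[X_tr, np.ones(len(X_tr))], y_tr, rcond=None)
    y_hat = np.c_[X_te, np.ones(len(X_te))] @ w
    print(f"test on {held_out}: NRMSE = {nrmse(y_te, y_hat):.2f}")
```

The drop from ~20 to ~6.5 NRMSE in the last two table rows corresponds to adding one collaborative context to the training pool in this loop.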
What’s next?
• Online learning → System logs?
• Online learning → System logs and generated multimodal data?
• The variance across these contexts was huge → can we design studies with more similarity between contexts?
43