
2022_11_11 «AI and ML methods for Multimodal Learning Analytics»

Research and development of educational technologies in the eMadrid network
21 Nov 2022


  1. AI and ML methods for Multimodal Learning Analytics Kshitij Sharma Department of Computer Science Norwegian University of Science and Technology, Trondheim 1
  2. Who am I? 2
  3. Why do I like Multimodal Learning Analytics? 3
  4. Why do I like Multimodal Learning Analytics? 4
  5. Context 1: Pacman 5
  6. Context 1: Pacman • Learning Context → Skill Acquisition • 19 Participants • 25-30 Minutes of game play • Multimodal data → eye-tracking, EEG, EDA, HRV, Facial video, keystrokes • Outcome → game score 6
  7. Context 1: Pacman Is multimodal data collection worth the effort? 7
  8. ML Pipeline 8
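
The pipeline itself appears only as a diagram on this slide. As a rough, hedged sketch of the kind of pipeline used across these studies (per-modality features concatenated and fed to a regressor under cross-validation), the Python example below uses placeholder features and a random forest regressor; the feature names, fusion strategy, and model choice are illustrative assumptions, not the exact pipeline from the slide.

```python
# Minimal sketch of a multimodal ML pipeline: per-modality feature vectors
# (already aggregated per participant) are concatenated and used to predict
# a learning outcome such as the game score.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 19  # e.g. participants in the Pacman study

# Hypothetical pre-computed features per modality (placeholders, not real data).
eye_feats = rng.normal(size=(n, 5))   # e.g. fixation durations, saccade stats
eeg_feats = rng.normal(size=(n, 4))   # e.g. band-power features
eda_feats = rng.normal(size=(n, 3))   # e.g. tonic level, peaks per minute
outcome = rng.normal(size=n)          # e.g. game score

# Early fusion: concatenate modality features into one design matrix.
X = np.hstack([eye_feats, eeg_feats, eda_feats])

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, outcome, cv=5,
                         scoring="neg_root_mean_squared_error")
print("CV RMSE:", -scores.mean())
```
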
  9. Context 1: Pacman 9
  10. Context 2: Self-Assessment 10
  11. Context 2: Self-Assessment • Learning Context → Self Assessment • 32 Participants • 30 Minutes of solving programming problems (adaptive test) • Multimodal data → eye-tracking, EEG, EDA, HRV, Facial video, keystrokes • Outcome → test score and effort (guessing/solving) 11
  12. Context 2: Self-Assessment What about explainability of AI pipelines with Multimodal data? 12
  13. Context 2: Self-Assessment 13
  14. Context 2: Self-Assessment (Easy result) NRMSE test score 14
  15. Context 2: Self-Assessment (Challenging result) NRMSE Effort 15
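
NRMSE, the error metric reported throughout these slides, is the root mean squared error normalized so that results are comparable across outcome scales. A minimal sketch of one common definition follows; normalizing by the range of the observed values is an assumption here, since the slides do not state which normalization was used.

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Root mean squared error normalized by the range of the observed values,
    expressed as a percentage (one common convention; the slides do not specify)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / (y_true.max() - y_true.min())

# A perfect prediction gives 0; larger values mean larger error.
print(nrmse([10, 20, 30, 40], [12, 18, 33, 41]))
```
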
  16. Context 2: Self-Assessment Predicting the future! 16
  17. • Attention • Emotional intensity • Cognitive load • Mental workload • Memory load • Heart rate • BVP • EDA
  18. Context 2: Self-Assessment Predicting the future! F-score (effortless/effort) → 0.90 18
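
The effort/effortless prediction is a binary classification evaluated with the F-score, the harmonic mean of precision and recall. A quick sketch with made-up labels:

```python
from sklearn.metrics import f1_score

# Hypothetical labels: 1 = effortful solving, 0 = effortless guessing.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]
print("F-score:", f1_score(y_true, y_pred))  # 2 * precision * recall / (precision + recall)
```
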
  19. Tell about future 19
  20. Context 3: Debugging
  21. Context 3: Debugging • Learning Context → Programming • 44 Participants • 60 Minutes of solving programming problems (adaptive test) • Multimodal data → eye-tracking, EEG, EDA, HRV, Facial video, keystrokes • Outcome → debugging performance 21
  22. Context 3: Debugging Predicting complex performance! 22
  23. ML pipeline 23 Random forest
  24. ML pipeline (Feature extraction) • Logs: Reading-writing (R-W) episodes; Use of debugger; Use of variable view • E4: mean and SD of BVP, TMP, and EDA and the mean of HR 24
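
The E4 (wristband) part of this feature set is a handful of window statistics over the physiological channels. A minimal sketch, assuming the signals have already been resampled to a common rate and segmented into windows; the column names and window length are hypothetical.

```python
import numpy as np
import pandas as pd

def e4_window_features(window: pd.DataFrame) -> dict:
    """Mean and SD of BVP, TMP (skin temperature) and EDA, plus mean HR,
    for one time window of wristband data (hypothetical column names)."""
    feats = {}
    for col in ["BVP", "TMP", "EDA"]:
        feats[f"{col}_mean"] = window[col].mean()
        feats[f"{col}_sd"] = window[col].std()
    feats["HR_mean"] = window["HR"].mean()
    return feats

# Example with random placeholder data for a single 60-second window.
rng = np.random.default_rng(0)
window = pd.DataFrame({
    "BVP": rng.normal(size=60), "TMP": 33 + rng.normal(size=60),
    "EDA": np.abs(rng.normal(size=60)), "HR": 70 + rng.normal(size=60),
})
print(e4_window_features(window))
```
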
  25. Context 3: Debugging 25
  26. Context 4: Game based learning 26
  27. Context 4: Game based learning • Learning Context → Motion based educational games • 40 Participants • 30 Minutes of solving mathematics problems • Multimodal data → eye-tracking, motion, EDA, HRV, system logs • Outcome → game performance 27
  28. Context 4: Game based learning Towards designing AI agent to support students 28
  29. Context 4: Game based learning
  30. Context 4: Game based learning • Information processing Index • Cognitive load • Mean HR • Grab-match Differential Most important features for the agent 30
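
The slide lists the most important features for the agent but not how importance was ranked. One common approach with a random forest model is to use its impurity-based feature importances; a hedged sketch with hypothetical feature names and placeholder data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["info_processing_index", "cognitive_load", "mean_HR",
                 "grab_match_differential", "EDA_peaks"]  # hypothetical names
X = rng.normal(size=(40, len(feature_names)))  # 40 players, placeholder data
y = rng.normal(size=40)                        # placeholder game performance

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
for name, imp in sorted(zip(feature_names, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```
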
  31. Context 5: Collaborative Concept Map • Learning Context → Video based Learning + Synthesis • 82 Participants • 20 Minutes of concept map creation • Multimodal data → eye-tracking, audio, dialogues, system logs • Outcome → Collaborative Concept map correctness and individual learning gain 31
  32. Context 6: Collaborative ITS • Learning Context → Intelligent Tutoring Systems • 50 Participants • 45 Minutes of concept map creation • Multimodal data → eye-tracking, audio, dialogues, system logs • Outcome → Learning gain 32
  33. LSTM 33 (diagram labels: forget gate, input gate, cell state, output gate, forward propagation)
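
To make the labelled components concrete, here is a minimal numpy sketch of one LSTM forward step; the weights are random placeholders and the code is illustrative rather than the network used in the studies.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM forward step. W, U, b hold the stacked parameters for the
    forget gate, input gate, candidate cell state, and output gate."""
    z = W @ x + U @ h_prev + b     # all four pre-activations at once
    f, i, g, o = np.split(z, 4)
    f = sigmoid(f)                 # forget gate: what to drop from c_prev
    i = sigmoid(i)                 # input gate: what to write
    g = np.tanh(g)                 # candidate cell state
    o = sigmoid(o)                 # output gate: what to expose as h
    c = f * c_prev + i * g         # new cell state
    h = o * np.tanh(c)             # new hidden state
    return h, c

# Tiny example: input size 3, hidden size 2.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3)); U = rng.normal(size=(8, 2)); b = np.zeros(8)
h, c = lstm_step(rng.normal(size=3), np.zeros(2), np.zeros(2), W, U, b)
print(h, c)
```
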
  34. Contexts 5 & 6: Collaborative settings • Gaze similarity • Gaze entropy • Cognitive load • Joint mental effort • Audio: autocorrelation • Audio: energy • Audio: shape of envelope • Dialogue codes • Log events 34
  35. Contexts 5 & 6: Collaborative settings • Collaborative Concept map • Best NRMSE: 5.2 • Best combination: audio-gaze • Collaborative ITS • Best NRMSE: 5.1 • Best combination: audio-gaze 35
  36. Generalizability across contexts Deep Features from Facial data 36
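
The slides do not say which network produced the deep facial features. A common approach is to take embeddings from a pretrained CNN applied to face crops; the sketch below uses a torchvision ResNet-18 with its classification head removed, and the backbone and preprocessing are assumptions for illustration.

```python
import torch
from torchvision import models, transforms

# Pretrained backbone with the final classification layer replaced by identity,
# so the output is a 512-dimensional embedding per frame.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Placeholder batch of face crops (B, 3, H, W) instead of real video frames.
frames = torch.rand(4, 3, 256, 256)
with torch.no_grad():
    feats = backbone(preprocess(frames))
print(feats.shape)  # torch.Size([4, 512])
```
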
  37. Generalizability across contexts Temporal Features from Physiological data 37
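
Temporal features here presumably means statistics computed over successive time windows, so the sequence structure of the physiological signals is preserved (for example, as input to the LSTM shown earlier). A sketch of turning one signal into a window-by-feature matrix; the sampling rate and window length are assumptions.

```python
import numpy as np

def windowed_stats(signal, fs=4, window_s=30):
    """Split a 1-D physiological signal (e.g. EDA sampled at fs Hz) into
    non-overlapping windows and compute simple temporal features per window."""
    step = fs * window_s
    n_windows = len(signal) // step
    rows = []
    for w in range(n_windows):
        seg = signal[w * step:(w + 1) * step]
        rows.append([seg.mean(), seg.std(), seg.max() - seg.min()])
    return np.array(rows)  # shape: (n_windows, n_features)

eda = np.abs(np.random.default_rng(0).normal(size=4 * 60 * 10))  # 10 minutes at 4 Hz
print(windowed_stats(eda).shape)
```
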
  38. Generalizability across contexts (individual learning)
      Train Using                | Test Using      | NRMSE on test dataset
      Pacman, Self-Assessment    | Debugging       | 9.24 (1.6)
      Pacman, Debugging          | Self-Assessment | 8.27 (2.1)
      Self-Assessment, Debugging | Pacman          | 8.26 (1.9)
      Data Used: Facial videos and Wristband data (HRV, EDA) 38
  39. Generalizability across contexts (individual learning)
      Train Using                                   | Test Using        | NRMSE on test dataset
      Pacman, Self-Assessment, Debugging            | Motion based game | 10.94 (1.4)
      Pacman, Self-Assessment, Motion based game    | Debugging         | 9.74 (1.1)
      Pacman, Debugging, Motion based game          | Self-Assessment   | 9.27 (0.9)
      Self-Assessment, Debugging, Motion based game | Pacman            | 10.08 (1.3)
      Data Used: Wristband data (HRV, EDA)
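
The evaluation behind these tables is a leave-one-context-out scheme: train on the feature matrices from all but one context and report NRMSE on the held-out context. A hedged sketch with placeholder data and a random forest regressor standing in for the actual model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Placeholder per-context datasets (features X, outcome y) standing in for
# the wristband/facial features from each study.
contexts = {name: (rng.normal(size=(30, 10)), rng.normal(size=30))
            for name in ["Pacman", "Self-Assessment", "Debugging"]}

def nrmse(y_true, y_pred):
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / (y_true.max() - y_true.min())

for held_out in contexts:
    train = [contexts[c] for c in contexts if c != held_out]
    X_tr = np.vstack([X for X, _ in train])
    y_tr = np.concatenate([y for _, y in train])
    X_te, y_te = contexts[held_out]
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f"Train on others, test on {held_out}: "
          f"NRMSE = {nrmse(y_te, model.predict(X_te)):.2f}")
```
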
  40. Generalizability across contexts (collaborative learning) Deep Features from Eye-tracking data 40
  41. Generalizability across contexts (collaborative learning) Temporal Features from Eye-tracking measurements (cognitive load, information processing index, entropy, stability, anticipation, fixation durations) 41
  42. Generalizability across contexts (collaborative learning)
      Data Used: Eye-tracking
      Train Using                                         | Test Using                | NRMSE on test dataset
      All Individual Contexts                             | Collaborative Concept map | 19.89 (5.4)
      All Individual Contexts                             | Collaborative ITS         | 21.16 (6.2)
      All Individual Contexts + Collaborative ITS         | Collaborative Concept map | 6.7 (1.5)
      All Individual Contexts + Collaborative Concept map | Collaborative ITS         | 6.5 (1.2)
      42
  43. What’s next??? • Online learning → System logs? • Online learning → System logs and generated multimodal data? • The variance in these contexts was huge → can we design something with more similarities within the contexts? 43