4. 1.
Personalized
learning paths
• Dynamically adapting the path as
the learner goes along
• Pre-generated based on job roles,
org chart, competencies required,
etc.
• Assess how each learner interacts
with the content and adapt test
questions accordingly
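A dynamically adapting path can be sketched in a few lines. This is a minimal illustration under assumed thresholds, not any vendor's implementation; the module names and score cutoffs are hypothetical.

```python
# Minimal sketch of a dynamically adapting learning path (hypothetical modules
# and thresholds). The next module is chosen from recent quiz performance:
# struggling learners get remediation, strong learners skip ahead.

def next_module(recent_scores, modules):
    """Pick the next module id based on the mean of recent quiz scores (0-100)."""
    if not recent_scores:
        return modules["core"]          # no data yet: default path
    avg = sum(recent_scores) / len(recent_scores)
    if avg < 60:
        return modules["remediation"]   # reinforce prerequisites
    if avg > 85:
        return modules["advanced"]      # skip ahead
    return modules["core"]              # stay on the standard path

modules = {"remediation": "M-101R", "core": "M-102", "advanced": "M-201"}
print(next_module([40, 55], modules))   # low scores -> M-101R
print(next_module([90, 88], modules))   # high scores -> M-201
```

A pre-generated path (by job role or competency) would simply replace the score-driven branch with a lookup keyed on the learner's role.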
6. 2.
Chat bots
• Finding answers in context – quick
reference guide
• Use for coaching and performance
support, like an interactive job aid,
or a virtual coach, pushing out info
and providing feedback
• Can potentially tap into various
sources of information that are
distributed across the organization
(used as knowledge management
tool)
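The "finding answers in context" idea can be sketched as retrieval over a knowledge base. The questions, answers, and word-overlap scoring below are hypothetical stand-ins; production chatbots would use a proper search index or language model over the organization's distributed sources.

```python
# Minimal sketch of a quick-reference chatbot (hypothetical knowledge base).
# A simple word-overlap score picks the closest stored question.

KNOWLEDGE_BASE = {
    "How do I reset my password?": "Use the self-service portal under Settings.",
    "Where is the expense policy?": "The expense policy lives on the HR intranet.",
    "How do I book travel?": "Book travel through the corporate travel tool.",
}

def answer(question):
    """Return the stored answer whose question shares the most words with the query."""
    q_words = set(question.lower().split())
    best = max(KNOWLEDGE_BASE, key=lambda k: len(q_words & set(k.lower().split())))
    return KNOWLEDGE_BASE[best]

print(answer("where can I find the expense policy"))
```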
9. 3.
Performance
indicator
• Pinpoint patterns – e.g. significant
spikes in course failures
• Learner engagement data –
analyze the data to identify
patterns suggesting that content
should be rewritten or completely
redesigned, or that learners need
additional support when they fail
to complete a course or learning
activity
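Pinpointing a significant spike in course failures is, at its simplest, anomaly detection on a time series. A minimal sketch, with hypothetical weekly failure rates and a two-standard-deviation threshold:

```python
# Flag weeks whose failure rate exceeds the historical mean by more than
# two standard deviations (data and threshold are hypothetical).

from statistics import mean, stdev

def failure_spikes(weekly_fail_rates, z_threshold=2.0):
    mu = mean(weekly_fail_rates)
    sigma = stdev(weekly_fail_rates)
    return [i for i, r in enumerate(weekly_fail_rates)
            if sigma and (r - mu) / sigma > z_threshold]

rates = [0.05, 0.06, 0.04, 0.05, 0.21, 0.05]   # week 4 spikes
print(failure_spikes(rates))                    # [4]
```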
12. What Could Go Wrong?
What are some of the biases and risks,
and how do we mitigate them?
Photo by Andrew Childress on Unsplash
13. 1.
Prediction could
be too
prescriptive
• Learning is very hard to
personalize and recommend
reliably
• Past behavior is not always the best
predictor of future learning
preference
• By no means a replacement for
instructor observations and
feedback gathered from learners,
peers, and their managers
14. 2.
Adaptive
learning is costly
to build
• Granularity is an issue
• A content system requires rigor in
constantly updating and
monitoring the usage of the content
• We get caught up in thinking
adaptivity is always the way to go
• We need to focus on good
instructional design and interaction
principles first
15. 3.
Algorithm black
box
• Algorithms used by organizations
can be opaque and discriminatory
• Needs to be explicit about the
decisions it makes to recommend or
not recommend certain learning
paths or options over others
• There is a lack of diversity in the
data and teams behind these
algorithms
16. 4.
Issue of trust
• Ownership and governance of data
is often ill-defined (or not defined at
all) for many organizations
• What rights do learners have to act
on the recommendations and
predictions, or not?
• The decision-making process
behind the scenes lacks
transparency and accountability
• Needs to be learner-driven and
learner-controlled
17. The notion of
“Explainable AI”
• Entails different levels of
complexity
• Shows the variables being
considered
• Users can manually adjust these
variables
• The importance of knowing where
the data came from and how it
was obtained
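One way to show the variables being considered, as explainable AI suggests, is a transparent weighted-sum model: each variable's contribution to the recommendation score is visible, and the weights can be adjusted manually. The features and weights below are hypothetical:

```python
# Minimal sketch of an "explainable" recommendation score: a weighted sum
# whose per-feature contributions can be shown to the learner, with weights
# that a user could adjust (all names and values hypothetical).

def recommend_score(features, weights):
    """Return (score, per-feature contributions) for a weighted-sum model."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

learner = {"login_frequency": 0.8, "quiz_average": 0.6, "forum_posts": 0.2}
weights = {"login_frequency": 0.3, "quiz_average": 0.5, "forum_posts": 0.2}

score, why = recommend_score(learner, weights)
print(round(score, 2))   # 0.58
for name, c in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {c:+.2f}")
```

A black-box model offers no such breakdown; the weighted sum trades predictive power for exactly the transparency the slide calls for.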
18. “As developers, designers, business
owners, and educators, we have a
responsibility to actively engage in the
decision making process, safeguard the
practice and to set some guidelines
around this.”
Photo by Ricardo Gomez Angel on Unsplash
19. Thanks!
Connect with me:
▫ https://www.linkedin.com/in/stellal/
▫ stella@paradoxlearning.com
Photo by NordWood Themes on Unsplash
Editor's Notes
Good morning.
As the “One-size-fits-all” approach of e-learning loses its appeal, there is a move toward more personalized and adaptive learning. Specifically, the ability to change learning on the fly and to provide real-time feedback to learners.
The promise today is that we can make use of machine learning algorithm to adapt to the learners in many different ways to achieve better learning outcomes.
Change content on the fly
No-Touch Individualization feature
Instantly evaluates the efficacy of learning materials and maps existing content to assessments and regulations
Learning Intelligence proof-of-concept program - collects real-time, fine-grained behavioral data from up to 100 learners without a full technical integration.
Chatbots, as you know, are nothing new. In the old days, chatbots were confined to a pre-defined set of answers. But things are getting a lot more sophisticated now, and chatbots are being integrated as a larger part of corporate learning strategy, especially in the area of knowledge management. They can find answers in context – acting as a quick reference guide.
Use for coaching and performance support, like an interactive job aid, or a virtual coach, pushing out info and providing feedback.
Single source of information and resources for internal teams to access.
Integration with Google Drive, Evernote, Dropbox, and a few others as sources of information
Built for Slack
Can push knowledge out – what they called Flow
An example of an onboarding conversation to take a new team member step by step.
It goes to various data sources to push out information
Some of the generic indicators within an LMS for learner retention and completion rates are: recency and frequency of login, participation in discussion forums, assignments turned in on time, even clickstreams
In the academic world, they would also factor in GPA, overall grades/curving of grades, SAT scores, etc.
Predict and present the most relevant content to learners by automatically creating personal learning paths.
Identify at-risk learners by predicting learners’ final grades for a course. Visual dashboard with Risk Quadrants.
Give instructors more actionable insights.
The indicators are normally around frequency of log in, duration of log in, grades, engagement in discussion, etc.
Do not measure quality of interaction
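Those generic indicators can be combined into a simple at-risk flag. A minimal sketch with hypothetical field names and thresholds; note that, as the slide warns, it captures only the quantity of activity, not the quality of interaction:

```python
# Flag a learner as at-risk when at least two generic LMS engagement
# indicators fall below threshold (field names and cutoffs hypothetical).

def at_risk(learner):
    """Return True when two or more engagement indicators are below threshold."""
    signals = [
        learner["logins_last_30d"] < 4,       # recency/frequency of login
        learner["avg_session_minutes"] < 10,  # duration of login
        learner["current_grade"] < 0.6,       # grades
        learner["forum_posts"] == 0,          # engagement in discussion
    ]
    return sum(signals) >= 2

print(at_risk({"logins_last_30d": 2, "avg_session_minutes": 5,
               "current_grade": 0.7, "forum_posts": 3}))   # True
```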
For example, for many years there have been attempts to cater to different “learning styles”. But research findings show that learning styles are a myth.
Just because a learner preferred a certain type of learning (e.g. video-based learning), it doesn’t mean this person always prefers it. Preference is largely context-dependent. If I am at a noisy café, I might just want to read the document. If I am learning a foreign language, I might want to listen and practice.
People’s learning preferences change over time, context, and many other variables.
User ownership of data
Ability for end users to provide input
Know what some of the assumptions the predictive models are based on
Hard to balance cost vs transparency without slowing it down