The document describes research on human-centered AI and interactive explanation methods. It discusses explainable AI and the goals of explaining model outcomes to increase user trust and acceptance, and enabling users to interact with the explanation process to improve models. It then provides an overview of the Augment/HCI research group at KU Leuven and its work on explanation methods, recommendation techniques, and evaluating explanations through user studies.
Tutorial on User Profiling with Graph Neural Networks and Related Beyond-Accuracy Perspectives, by Erasmo Purificato
Slides of the tutorial on "User Profiling with Graph Neural Networks and Related Beyond-Accuracy Perspectives" @ UMAP'23: 31st ACM Conference on User Modeling, Adaptation and Personalization (June 26, 2023 | Limassol, Cyprus)
This document provides a summary of a presentation on explainable AI for non-expert users. The 3 main points are:
1. The presentation discusses developing the next generation of interactive and adaptive explanation methods for AI systems to increase user trust and acceptance by enabling users to interact with the explanation process.
2. Several application domains of explainable AI are mentioned, including recommender systems, visualization, intelligent user interfaces, learning analytics, healthcare, and precision agriculture.
3. Research was presented on explaining recommendations to users and evaluating the effects of explanations and different levels of user control on user acceptance, trust, and cognitive load. The importance of personalization and enabling user control was emphasized.
[PhD Thesis Defense] CHAMELEON: A Deep Learning Meta-Architecture for News Re..., by Gabriel Moreira
Presentation of the PhD thesis defense of Gabriel de Souza Pereira Moreira at Instituto Tecnológico de Aeronáutica (ITA), on Dec. 09, 2019, in São José dos Campos, Brazil.
Abstract:
Recommender systems have become increasingly popular in assisting users with their choices, thus enhancing their engagement and overall satisfaction with online services. Over the last decade, they have become a topic of growing interest among machine learning, human-computer interaction, and information retrieval researchers.
News recommender systems aim to personalize users' experiences and help them discover relevant articles from a large and dynamic search space, which makes news a challenging scenario for recommendation. Large publishers release hundreds of news articles daily, so they must deal with fast-growing numbers of items that quickly become outdated and irrelevant to most readers. News readers also exhibit more unstable consumption behavior than users in other domains, such as entertainment, and external events, like breaking news, shift readers' interests. In addition, the news domain suffers from extreme levels of sparsity, as most users are anonymous, with no past behavior tracked.
Since 2016, Deep Learning methods and techniques have been explored in Recommender Systems research. In general, they can be divided into methods for: Deep Collaborative Filtering, Learning Item Embeddings, Session-based Recommendations using Recurrent Neural Networks (RNN), and Feature Extraction from Items' Unstructured Data such as text, images, audio, and video.
The main contribution of this research is CHAMELEON, a meta-architecture designed to tackle the specific challenges of news recommendation. It is a modular reference architecture that can be instantiated using different neural building blocks.
Because information about users' past interactions is scarce in the news domain, the user context (e.g., time, location, device, and the sequence of clicks within the session) and static and dynamic article features, such as the article's textual content, popularity, and recency, are explicitly modeled in a hybrid session-based recommendation approach using RNNs.
The recommendation task addressed in this work is next-item prediction for user sessions, i.e., "what is the next most likely article a user might read in a session?". A temporal offline evaluation protocol is used to assess this task realistically, taking into account factors that affect global readership interests, such as popularity, recency, and seasonality.
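The next-item prediction task and temporal offline evaluation described above can be illustrated with a deliberately simple recency-weighted popularity baseline. This is a hypothetical stand-in, not the CHAMELEON architecture; the function names and the toy click log are invented for illustration:

```python
import math
from collections import defaultdict

def recency_popularity_scores(events, now, half_life=3600.0):
    """Score each article by exponentially time-decayed click popularity.
    `events` is a list of (timestamp, article_id) clicks logged before `now`."""
    scores = defaultdict(float)
    for ts, article in events:
        scores[article] += math.exp(-(now - ts) / half_life)
    return scores

def hit_rate_at_k(sessions, events, k=3):
    """Temporal next-item evaluation: for each session, rank candidates using
    only the clicks logged before the session started, then check whether the
    actually-read next article appears in the top-k recommendations."""
    hits = 0
    for start_time, prefix, next_article in sessions:
        past = [(ts, a) for ts, a in events if ts < start_time]
        scores = recency_popularity_scores(past, start_time)
        ranked = sorted(scores, key=scores.get, reverse=True)
        ranked = [a for a in ranked if a not in prefix]  # drop already-read articles
        hits += next_article in ranked[:k]
    return hits / len(sessions)

# Toy log: article "a" is fresh and popular, "c" is old news.
events = [(100, "a"), (110, "a"), (120, "b"), (10, "c")]
sessions = [(130, {"b"}, "a")]  # (session start, articles read so far, next click)
print(hit_rate_at_k(sessions, events, k=2))  # -> 1.0
```

Ranking only on information available before each session start is what makes the evaluation "temporal": it avoids leaking future popularity into past recommendations.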
Experiments performed with two large datasets have shown the effectiveness of CHAMELEON for news recommendation across several quality factors, such as accuracy, item coverage, novelty, and a reduced item cold-start problem, when compared to traditional and state-of-the-art session-based algorithms.
Hands on Explainable Recommender Systems with Knowledge Graphs @ RecSys22, by Giacomo Balloccu
This document provides an overview of an upcoming tutorial on explainable recommender systems with knowledge graphs. The tutorial will include two sessions - an introductory session on explainable recommendation principles and modeling approaches, and a hands-on session using Jupyter notebooks to build and evaluate recommendation models using knowledge graphs. Attendees will learn about explainable recommendation methods, loading and preprocessing interaction datasets with knowledge graphs, building recommendation models with knowledge graphs, and evaluating and generating explanations from models. The tutorial aims to help attendees understand explainable recommender systems and apply techniques using knowledge graphs.
This document provides an overview of deep recommender systems and some of their shortcomings. It discusses neural network architectures like NeuMF, Wide&Deep, Neural FM, DeepFM, and DSCF that have been applied to recommendation. It also covers sequential recommendation methods, optimization techniques, and challenges like short-term rewards, manually designed architectures, isolated data, and security issues like poisoning attacks.
Deep learning: the future of recommendations, by Balázs Hidasi
An informative talk about deep learning and its potential uses in recommender systems. Presented at the Budapest Startup Safary, 21 April, 2016.
The breakthroughs of the last decade in neural network research and the rapid increase in computational power resulted in the revival of deep neural networks and of the field focusing on their training: deep learning. Deep learning methods have succeeded in complex tasks where other machine learning methods have failed, such as computer vision and natural language processing. Recently, deep learning has begun to gain ground in recommender systems as well. This talk introduces deep learning and its applications, with emphasis on how deep learning methods can solve long-standing recommendation problems.
The document describes a PhD dissertation on linked data-based recommender systems. It presents an AlLied framework for executing and analyzing recommendation algorithms based on linked data. The framework includes implementations of graph-based and machine learning algorithms. An evaluation compares the performance of different graph-based algorithms using a user study on film recommendations. The results show that algorithms combining traversal and hierarchical approaches have the best balance of accuracy and novelty.
Deep-learning-for-pose-estimation-wyang-defense, by Wei Yang
This document summarizes a thesis proposal on using deep learning for articulated human pose estimation. The proposed method uses a deep convolutional neural network (DCNN) as a front-end to extract local appearance features of body parts, combined with message passing layers to model spatial relationships between parts through pairwise constraints. This global pose model is trained end-to-end using a max-sum algorithm to maximize consistency across the entire human pose. Experimental results on standard pose estimation datasets demonstrate state-of-the-art performance.
The document presents a presentation on detection and recognition of text using a YOLO-based framework. It discusses the contents, introduction, motivation, challenges, literature review, identified research gaps, objectives, methodology, results and discussion, and future scope of the work. The methodology section describes the pre-processing, model tuning, text detection algorithm, and text recognition approach. The results show that the proposed YOLOv4 framework achieves promising results on various datasets compared to existing techniques, especially on the ICDAR2013 dataset. The conclusion states that the framework overcomes various challenges and obtains optimum results.
Slides, thesis dissertation defense, deep generative neural networks for nove..., by Mehdi Cherti
In recent years, significant advances in deep neural networks enabled the creation of groundbreaking technologies such as self-driving cars and voice-enabled personal assistants. Almost all successes of deep neural networks are about prediction, whereas the initial breakthroughs came from generative models. Today, although we have very powerful deep generative modeling techniques, these techniques are essentially being used for prediction or for generating known objects (i.e., good-quality images of known classes): any generated object that is a priori unknown is considered a failure mode (Salimans et al., 2016) or spurious (Bengio et al., 2013b). In other words, when prediction seems to be the only possible objective, novelty is seen as an error that researchers have been trying hard to eliminate. This thesis defends the point of view that, instead of trying to eliminate these novelties, we should study them and the generative potential of deep nets to create useful novelty, especially given the economic and societal importance of creating new objects in contemporary societies.
The thesis sets out to study novelty generation in relationship with data-driven knowledge models produced by deep generative neural networks. Our first key contribution is the clarification of the importance of representations and their impact on the kind of novelties that can be generated: a key consequence is that a creative agent might need to re-represent known objects to access various kinds of novelty. We then demonstrate that traditional objective functions of statistical learning theory, such as maximum likelihood, are not necessarily the best theoretical framework for studying novelty generation, and we propose several alternatives at the conceptual level. A second key result is the confirmation that current models, with traditional objective functions, can indeed generate unknown objects: even though objectives like maximum likelihood are designed to eliminate novelty, practical implementations do generate novelty. Through a series of experiments, we study the behavior of these models and the novelty they generate; in particular, we propose a new task setup and metrics for selecting good generative models. Finally, the thesis concludes with a series of experiments clarifying the characteristics of models that can exhibit novelty. These experiments show that sparsity, noise level, and restricting the capacity of the net eliminate novelty, and that models that are better at recognizing novelty are also better at generating novelty.
Talk with Yves Raimond at the GPU Tech Conference on March 28, 2018, in San Jose, CA.
Abstract:
In this talk, we will survey how Deep Learning methods can be applied to personalization and recommendations. We will cover why standard Deep Learning approaches don't perform better than typical collaborative filtering techniques. Then we will go over recently published research at the intersection of Deep Learning and recommender systems, looking at how they integrate new types of data, explore new models, or change the recommendation problem statement. We will also highlight some of the ways that neural networks are used at Netflix and how we can use GPUs to train recommender systems. Finally, we will highlight promising new directions in this space.
Tutorial: Context in Recommender Systems, by Yong Zheng
This document provides an overview of a tutorial on context-aware recommender systems. The tutorial will cover traditional recommendation techniques, context-aware recommendation which incorporates additional contextual information such as time and location, and context suggestion. It includes an agenda with topics, background information on recommender systems and evaluation metrics, and descriptions of techniques for context-aware recommendation including context filtering and modeling.
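As a rough sketch of one of the techniques mentioned, contextual pre-filtering can be illustrated in a few lines: ratings are first filtered down to the target context, and a conventional (here, trivially simple) recommender is applied to the remaining slice. The tuple layout, function name, and toy data are assumptions made for illustration:

```python
from collections import defaultdict

def prefilter_recommend(ratings, target_context, k=2):
    """Contextual pre-filtering: keep only the ratings observed in the target
    context, then rank items by their mean rating within that slice.
    `ratings` is a list of (user, item, context, rating) tuples."""
    sums, counts = defaultdict(float), defaultdict(int)
    for user, item, context, rating in ratings:
        if context == target_context:
            sums[item] += rating
            counts[item] += 1
    means = {item: sums[item] / counts[item] for item in sums}
    return sorted(means, key=means.get, reverse=True)[:k]

# Invented toy data: the same movie can be rated differently per context.
ratings = [
    ("u1", "movieA", "weekend", 5), ("u2", "movieA", "weekend", 4),
    ("u1", "movieB", "weekend", 2), ("u1", "movieA", "weekday", 1),
    ("u2", "movieC", "weekday", 5),
]
print(prefilter_recommend(ratings, "weekend", k=1))  # -> ['movieA']
```

Contextual modeling, by contrast, would keep all ratings and make context a feature of the model itself rather than a filter on the data.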
This document discusses domain adaptation techniques for machine learning models. It summarizes several papers on domain adaptation methods, including domain-adversarial training which uses a gradient reversal layer to learn domain-invariant features, adversarial discriminative domain adaptation which matches features from source and target domains, and maximum classifier discrepancy which trains generators and classifiers adversarially to preserve decision boundaries during domain adaptation. The document provides an overview of common domain adaptation scenarios and different approaches for supervised, unsupervised, and domain generalization settings.
Tutorial on Bias in Rec Sys @ UMAP2020, by Mirko Marras
This document provides an outline for a workshop on data and algorithmic bias in recommender systems. The workshop will cover foundational concepts in the morning session, including principles of recommendation, data and sources of algorithmic bias. The afternoon session will involve hands-on case studies exploring biases such as item popularity bias and provider fairness. The workshop aims to raise awareness of bias issues in recommendations and showcase approaches for mitigating bias.
[Mmlab seminar 2016] Deep learning for human pose estimation, by Wei Yang
This document summarizes recent advances in deep learning approaches for human pose estimation. It describes early methods like DeepPose that used cascades of regressors. Later works introduced heatmap regression to capture spatial information. Convolutional Pose Machine and Stacked Hourglass networks further improved accuracy by incorporating stronger context modeling through deeper networks with larger receptive fields and intermediate supervision. These approaches demonstrate that both local appearance cues and modeling of global context and structure are important for accurate human pose estimation.
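The heatmap-regression idea mentioned above is simple to sketch: instead of regressing joint coordinates directly, the training target for each joint is a 2-D Gaussian centred on the annotated location. The function name and parameter choices below are illustrative, not from the slides:

```python
import numpy as np

def keypoint_heatmap(h, w, cx, cy, sigma=2.0):
    """Ground-truth target for heatmap regression: a 2-D Gaussian centred on
    the annotated joint location (cx, cy), rather than raw (x, y) coordinates."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

# Target for a joint annotated at column 20, row 30 of a 64x64 output map.
target = keypoint_heatmap(64, 64, cx=20, cy=30)
```

A network trained with a pixel-wise L2 loss against such targets can then localize each joint as the argmax of its predicted heatmap, which preserves spatial uncertainty in a way direct coordinate regression does not.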
The document provides an introduction and overview of auto-encoders, including their architecture, learning and inference processes, and applications. It discusses how auto-encoders can learn hierarchical representations of data in an unsupervised manner by compressing the input into a code and then reconstructing the output from that code. Sparse auto-encoders and stacking multiple auto-encoders are also covered. The document uses handwritten digit recognition as an example application to illustrate these concepts.
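The compress-then-reconstruct idea can be shown numerically with a linear single-bottleneck auto-encoder trained by plain gradient descent on toy data. All sizes, the learning rate, and the synthetic data are illustrative choices, not taken from the document:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8-dimensional inputs that really live on a 2-D subspace,
# so a 2-unit bottleneck can reconstruct them reasonably well.
latent = rng.normal(size=(200, 2))
X = np.tanh(latent @ rng.normal(size=(2, 8)))

# Encoder and decoder weights for an 8 -> 2 -> 8 auto-encoder.
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))

lr, losses = 0.1, []
for _ in range(500):
    code = X @ W_enc        # compress each input into a 2-D code
    recon = code @ W_dec    # reconstruct the input from the code
    err = recon - X
    losses.append(float((err ** 2).mean()))
    # Descent step on the squared reconstruction error
    # (constant factors folded into the learning rate).
    grad_dec = code.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
```

Stacking several such blocks, adding nonlinearities, or penalizing the code's activity gives the stacked and sparse variants the document mentions.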
Techniques for Context-Aware and Cold-Start Recommendations, by Matthias Braunhofer
Context-aware recommender systems better identify interesting items for users by adapting their suggestions to the specific contextual situation, e.g., to the current weather if an excursion is to be recommended. But the cold-start problem may jeopardise the quality of the recommendations: for users, items, or contextual situations that are new to the system, recommendations are hard to compute. We have developed a number of novel techniques to tame this problem, in particular new hybrid algorithms that combine several simpler algorithms in order to exploit their strengths and avoid their weaknesses. We have also developed algorithms for actively identifying the most useful preference information to ask the user in order to bootstrap the system. Our results, obtained from a series of offline and online experiments, reveal that the proposed techniques can effectively alleviate the cold-start problem of context-aware recommender systems.
Understanding how high powered ML models arrive at their predictions is an important aspect of Machine Learning, and SHAP is a powerful tool that enables practitioners to understand how different features combine to help a model arrive at a prediction.
This slidedeck is from a presentation given at pydata global on the theoretical foundations of SHAP as well as how to use its library. Link to the presentation can be found here: https://pydata.org/global2021/schedule/presentation/3/behind-the-black-box-how-to-understand-any-ml-model-using-shap/
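SHAP's theoretical foundation is the Shapley value from cooperative game theory. For a small number of features, the exact formula that SHAP approximates can be written directly; this is a from-scratch illustration of the concept, not the `shap` library's API, and the toy model and baseline are invented:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for one prediction, via the classical formula over
    all feature coalitions. Features absent from a coalition are replaced by
    their baseline value, mimicking 'feature missingness' as SHAP defines it."""
    n = len(x)
    def value(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return f(z)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for S in combinations(others, size):
                # Weight = |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += weight * (value(set(S) | {i}) - value(set(S)))
        phis.append(phi)
    return phis

# Toy model: linear terms plus an interaction that gets split evenly.
f = lambda z: 2 * z[0] + 3 * z[1] + z[0] * z[1]
phis = shapley_values(f, x=[1, 2], baseline=[0, 0])
print(phis)  # -> [3.0, 7.0]: each feature gets half of the interaction term
```

The values satisfy the efficiency property: they sum to f(x) minus f(baseline), which is exactly the additive decomposition SHAP plots display. The exhaustive loop is exponential in the number of features, which is why the library relies on sampling and model-specific approximations.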
The document discusses computational approaches to understanding brain mechanisms of action recognition and emotion. It describes experiments using a Hopfield network to model how emotions may emerge from energy regulation in the brain. The network was trained on patterns and tested on contaminated patterns to analyze convergence time and error. Experiments were also conducted using a robot camera and Hopfield network to observe internal dynamics. The document then discusses research on mirror neurons, including experiments recording neurons in monkeys during object observation and execution conditions. Several neurons were identified as potential mirror neuron candidates based on responses during both conditions. Cross-decoding analysis also provided evidence for similar neural representations.
PhD Thesis Defense Presentation: Robust Low-rank and Sparse Decomposition for..., by ActiveEon
Thesis submitted by Andrews Cordolino Sobral at Université de La Rochelle to fulfill the degree of Doctor of Philosophy.
Robust Low-rank and Sparse Decomposition for Moving Object Detection - From Matrices to Tensors
Matthias Feys (ML6) – Bias in ML: A Technical Intro, by Codiax
This document discusses bias in machine learning. It defines different types of bias, including sample bias, prejudicial bias, exclusion bias, and measurement bias. It also discusses various definitions of fairness, including unawareness, group fairness, and individual fairness. The document outlines group fairness metrics like demographic parity, equal opportunity, and equalized odds. It describes causes of bias and algorithms to mitigate bias, such as reducing bias in data, classifiers, and predictions. Overall, the document provides an overview of bias in ML, definitions of fairness, sources of bias, and methods for debiasing models.
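Two of the group fairness metrics listed, demographic parity and equal opportunity (the true-positive-rate half of equalized odds), reduce to simple rate comparisons between groups. A minimal sketch with invented toy predictions:

```python
def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = []
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rates.append(sum(y_pred[i] for i in idx) / len(idx))
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, groups):
    """Largest difference in true-positive rate (recall on y_true == 1)
    between any two groups."""
    tprs = []
    for g in set(groups):
        pos = [i for i, gg in enumerate(groups) if gg == g and y_true[i] == 1]
        tprs.append(sum(y_pred[i] for i in pos) / len(pos))
    return max(tprs) - min(tprs)

# Invented toy predictions for two groups "m" and "f".
groups = ["m", "m", "m", "f", "f", "f"]
y_true = [1, 0, 0, 1, 1, 0]
y_pred = [1, 1, 0, 1, 0, 0]
```

Here the classifier flags 2/3 of group "m" but only 1/3 of group "f" (a demographic parity gap of 1/3), and catches all of "m"'s true positives but only half of "f"'s (an equal opportunity gap of 0.5). Full equalized odds would additionally compare false-positive rates.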
GTC 2021: Counterfactual Learning to Rank in E-commerce, by GrubhubTech
Many ecommerce companies have extensive logs of user behavior such as clicks and conversions. However, if supervised learning is naively applied, then systems can suffer from poor performance due to bias and feedback loops. Using techniques from counterfactual learning, we can leverage log data in a principled manner in order to model user behaviour and build personalized recommender systems. At Grubhub, a user journey begins with recommendations, and the vast majority of conversions are powered by recommendations. Our recommender policies can drive user behavior to increase orders and/or profit. Accordingly, the ability to rapidly iterate and experiment is very important. Because of our powerful GPU workflows, we can iterate 200% more rapidly than with counterpart CPU workflows. Developers iterate on ideas in notebooks powered by GPUs. Hyperparameter spaces are explored up to 8x faster with multi-GPU Ray clusters. Solutions are shipped from notebooks to production in half the time with nbdev. With our accelerated DS workflows and Deep Learning on GPUs, we were able to deliver a +12.6% conversion boost in just a few months. In this talk we present modern techniques for industrial recommender systems powered by GPU workflows: first a short background on counterfactual learning techniques, followed by practical information and data from our industrial application.
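The counterfactual-learning idea referred to above, using logged propensities to evaluate a new policy offline, is commonly formalized as the inverse propensity scoring (IPS) estimator. A minimal sketch; the log format, policies, and toy data are invented for illustration and are not Grubhub's code:

```python
def ips_estimate(logs, target_policy):
    """Inverse propensity scoring: estimate the average reward a *new* policy
    would have collected, using only feedback logged under an old policy.
    Each log entry is (context, action, reward, propensity), where propensity
    is the logging policy's probability of having shown that action."""
    return sum(target_policy(c, a) / p * r for c, a, r, p in logs) / len(logs)

# Logging policy showed each of two items uniformly at random (propensity 0.5).
logs = [
    ("u1", "pizza", 1, 0.5), ("u1", "sushi", 0, 0.5),
    ("u2", "pizza", 1, 0.5), ("u2", "sushi", 1, 0.5),
]
# Candidate policy: always recommend "pizza".
always_pizza = lambda context, action: 1.0 if action == "pizza" else 0.0
print(ips_estimate(logs, always_pizza))  # -> 1.0
```

Reweighting by the inverse of the logging propensity removes the selection bias the talk warns about: actions the old policy rarely showed are up-weighted so the estimate is unbiased, at the cost of higher variance when propensities are small.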
By Alex Egg, accepted to Nvidia GTC 2021 Conference
[Video recording available at https://www.youtube.com/playlist?list=PLewjn-vrZ7d3x0M4Uu_57oaJPRXkiS221]
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust and adoption of AI systems in high stakes domains requiring reliability and safety such as healthcare and automated transportation, and critical industrial applications with significant economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations / laws, and techniques / tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, wherein we present practical challenges / guidelines for effectively using explainability techniques and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search & recommendation systems, hiring, sales, and lending. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining / machine learning community.
Towards the next generation of interactive and adaptive explanation methods, by Katrien Verbert
This document summarizes a presentation given by Katrien Verbert on explainable artificial intelligence and interactive explanation methods. It discusses Verbert's research group at KU Leuven which focuses on areas like recommender systems, visualization, and intelligent user interfaces. The presentation provides an overview of explainable AI, discussing objectives like explaining model outcomes to increase trust and allowing user interaction with explanations. It describes various recommendation techniques and presents examples of explainable recommendation systems. The presentation discusses how personal user characteristics can impact the effects of explanations and outlines related user studies. Finally, it summarizes several of Verbert's application areas for explainable AI like education, analytics, agriculture, and healthcare, touching on methodologies and results.
Human-centered AI: how can we support end-users to interact with AI?, by Katrien Verbert
This document discusses how to design human-centered AI systems that support end-users. It explores explaining model outcomes to increase trust and acceptance, and enabling users to interact with explanation processes. Personal characteristics like need for cognition impact how users respond to explanations. Explanations should be personalized and allow different levels of detail. Evaluations show explanations improve understanding but also increase cognitive load, so simplification is important. The goal is to preserve human control and ensure AI meets user needs.
Deep learning-for-pose-estimation-wyang-defenseWei Yang
This document summarizes a thesis proposal on using deep learning for articulated human pose estimation. The proposed method uses a deep convolutional neural network (DCNN) as a front-end to extract local appearance features of body parts, combined with message passing layers to model spatial relationships between parts through pairwise constraints. This global pose model is trained end-to-end using a max-sum algorithm to maximize consistency across the entire human pose. Experimental results on standard pose estimation datasets demonstrate state-of-the-art performance.
The document presents a presentation on detection and recognition of text using a YOLO-based framework. It discusses the contents, introduction, motivation, challenges, literature review, identified research gaps, objectives, methodology, results and discussion, and future scope of the work. The methodology section describes the pre-processing, model tuning, text detection algorithm, and text recognition approach. The results show that the proposed YOLOv4 framework achieves promising results on various datasets compared to existing techniques, especially on the ICDAR2013 dataset. The conclusion states that the framework overcomes various challenges and obtains optimum results.
Slides, thesis dissertation defense, deep generative neural networks for nove...mehdi Cherti
In recent years, significant advances in deep neural networks have enabled the creation of groundbreaking technologies such as self-driving cars and voice-enabled personal assistants. Almost all successes of deep neural networks are about prediction, whereas the initial breakthroughs came from generative models. Today, although we have very powerful deep generative modeling techniques, these techniques are essentially being used for prediction or for generating known objects (i.e., good-quality images of known classes): any generated object that is a priori unknown is considered a failure mode (Salimans et al., 2016) or spurious (Bengio et al., 2013b). In other words, when prediction seems to be the only possible objective, novelty is seen as an error that researchers have been trying hard to eliminate. This thesis defends the point of view that, instead of trying to eliminate these novelties, we should study them and the generative potential of deep nets to create useful novelty, especially given the economic and societal importance of creating new objects in contemporary societies.

The thesis sets out to study novelty generation in relationship with data-driven knowledge models produced by deep generative neural networks. Our first key contribution is the clarification of the importance of representations and their impact on the kinds of novelty that can be generated: a key consequence is that a creative agent might need to re-represent known objects to access various kinds of novelty. We then demonstrate that traditional objective functions of statistical learning theory, such as maximum likelihood, are not necessarily the best theoretical framework for studying novelty generation, and we propose several alternatives at the conceptual level. A second key result is the confirmation that current models, with traditional objective functions, can indeed generate unknown objects. This also shows that even though objectives like maximum likelihood are designed to eliminate novelty, practical implementations do generate novelty. Through a series of experiments, we study the behavior of these models and the novelty they generate. In particular, we propose a new task setup and metrics for selecting good generative models. Finally, the thesis concludes with a series of experiments clarifying the characteristics of models that can exhibit novelty. These experiments show that sparsity, noise level, and restricting the capacity of the net eliminate novelty, and that models that are better at recognizing novelty are also better at generating novelty.
Talk with Yves Raimond at the GPU Tech Conference on March 28, 2018 in San Jose, CA.
Abstract:
In this talk, we will survey how Deep Learning methods can be applied to personalization and recommendations. We will cover why standard Deep Learning approaches don't perform better than typical collaborative filtering techniques. Then we will go over recently published research at the intersection of Deep Learning and recommender systems, looking at how it integrates new types of data, explores new models, or changes the recommendation problem statement. We will also highlight some of the ways that neural networks are used at Netflix and how we can use GPUs to train recommender systems. Finally, we will highlight promising new directions in this space.
Tutorial: Context In Recommender Systems, by YONG ZHENG
This document provides an overview of a tutorial on context-aware recommender systems. The tutorial will cover traditional recommendation techniques, context-aware recommendation which incorporates additional contextual information such as time and location, and context suggestion. It includes an agenda with topics, background information on recommender systems and evaluation metrics, and descriptions of techniques for context-aware recommendation including context filtering and modeling.
This document discusses domain adaptation techniques for machine learning models. It summarizes several papers on domain adaptation methods, including domain-adversarial training which uses a gradient reversal layer to learn domain-invariant features, adversarial discriminative domain adaptation which matches features from source and target domains, and maximum classifier discrepancy which trains generators and classifiers adversarially to preserve decision boundaries during domain adaptation. The document provides an overview of common domain adaptation scenarios and different approaches for supervised, unsupervised, and domain generalization settings.
Tutorial on Bias in Rec Sys @ UMAP2020, by Mirko Marras
This document provides an outline for a workshop on data and algorithmic bias in recommender systems. The workshop will cover foundational concepts in the morning session, including principles of recommendation, data and sources of algorithmic bias. The afternoon session will involve hands-on case studies exploring biases such as item popularity bias and provider fairness. The workshop aims to raise awareness of bias issues in recommendations and showcase approaches for mitigating bias.
[Mmlab seminar 2016] deep learning for human pose estimation, by Wei Yang
This document summarizes recent advances in deep learning approaches for human pose estimation. It describes early methods like DeepPose that used cascades of regressors. Later works introduced heatmap regression to capture spatial information. Convolutional Pose Machine and Stacked Hourglass networks further improved accuracy by incorporating stronger context modeling through deeper networks with larger receptive fields and intermediate supervision. These approaches demonstrate that both local appearance cues and modeling of global context and structure are important for accurate human pose estimation.
The document provides an introduction and overview of auto-encoders, including their architecture, learning and inference processes, and applications. It discusses how auto-encoders can learn hierarchical representations of data in an unsupervised manner by compressing the input into a code and then reconstructing the output from that code. Sparse auto-encoders and stacking multiple auto-encoders are also covered. The document uses handwritten digit recognition as an example application to illustrate these concepts.
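The compress-then-reconstruct loop described above can be sketched in a few lines. This is a minimal illustrative example of our own, not code from the slides: a linear auto-encoder with tied weights, trained by plain gradient descent to reconstruct toy data through a 3-dimensional code.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))           # toy data: 200 samples, 8 features
W = rng.normal(scale=0.1, size=(8, 3))  # tied weights: 8 features -> 3-dim code

def reconstruction_error(X, W):
    H = X @ W        # encoder: compress each input into a 3-dim code
    X_hat = H @ W.T  # decoder: reconstruct the input from the code
    return np.mean((X - X_hat) ** 2)

before = reconstruction_error(X, W)
lr = 0.01
for _ in range(500):
    E = (X @ W) @ W.T - X                        # reconstruction residual
    grad = (X.T @ E @ W + E.T @ X @ W) / len(X)  # gradient of squared error w.r.t. W
    W -= lr * grad
after = reconstruction_error(X, W)
print(f"error before: {before:.3f}, after: {after:.3f}")
```

A sparsity penalty on the code, or stacking several such layers, would give the sparse and stacked variants the document mentions; a nonlinearity between encoder and decoder turns this into the usual neural auto-encoder.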
Techniques for Context-Aware and Cold-Start Recommendations, by Matthias Braunhofer
Context-aware recommender systems better identify interesting items for users by adapting their suggestions to the specific contextual situation, e.g., to the current weather if an excursion is to be recommended. But the cold-start problem may jeopardise the quality of the recommendations: for users, items or contextual situations that are new to the system, recommendations are hard to compute. We have developed a number of novel techniques to tame this problem, in particular new hybrid algorithms that combine several simpler algorithms in order to exploit their strengths and avoid their weaknesses. We have also developed algorithms for actively identifying the most useful preference information to ask the user in order to bootstrap the system. Our results, obtained from a series of offline and online experiments, reveal that the proposed techniques can effectively alleviate the cold-start problem of context-aware recommender systems.
Understanding how high-powered ML models arrive at their predictions is an important aspect of machine learning, and SHAP is a powerful tool that enables practitioners to understand how different features combine to help a model arrive at a prediction.
This slide deck is from a presentation given at PyData Global on the theoretical foundations of SHAP as well as how to use its library. Link to the presentation: https://pydata.org/global2021/schedule/presentation/3/behind-the-black-box-how-to-understand-any-ml-model-using-shap/
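Conceptually, SHAP assigns each feature its Shapley value: its average marginal contribution over all coalitions of features. The sketch below is our own illustration, not the shap library's API: a hypothetical three-feature model whose exact Shapley values are computed by brute-force enumeration, then checked against the efficiency property (contributions sum to the prediction minus the baseline prediction).

```python
from itertools import combinations
from math import factorial

def model(x):
    # hypothetical model with an interaction between features 0 and 1
    return 2.0 * x[0] + 1.0 * x[1] + 0.5 * x[0] * x[1] + 3.0

background = [0.0, 0.0, 0.0]  # baseline used when a feature is "missing"
x = [1.0, 2.0, 3.0]           # instance to explain

def value(subset):
    # evaluate the model with features outside `subset` replaced by the baseline
    z = [x[i] if i in subset else background[i] for i in range(len(x))]
    return model(z)

def shapley(i, n=3):
    # average marginal contribution of feature i over all coalitions
    phi = 0.0
    others = [j for j in range(n) if j != i]
    for size in range(n):
        for S in combinations(others, size):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            phi += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

phis = [shapley(i) for i in range(3)]
print(phis, sum(phis), model(x) - model(background))
```

Note that feature 2 never influences this model, so its Shapley value comes out as zero; the library computes or approximates the same quantities far more efficiently for real models.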
The document discusses computational approaches to understanding brain mechanisms of action recognition and emotion. It describes experiments using a Hopfield network to model how emotions may emerge from energy regulation in the brain. The network was trained on patterns and tested on contaminated patterns to analyze convergence time and error. Experiments were also conducted using a robot camera and Hopfield network to observe internal dynamics. The document then discusses research on mirror neurons, including experiments recording neurons in monkeys during object observation and execution conditions. Several neurons were identified as potential mirror neuron candidates based on responses during both conditions. Cross-decoding analysis also provided evidence for similar neural representations.
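The train-on-patterns, test-on-contaminated-patterns setup mentioned above can be illustrated with a minimal Hopfield network. This is our own sketch, not the authors' code: Hebbian weights store two bipolar patterns, and repeated sign updates pull a corrupted input back to the nearest stored pattern.

```python
import numpy as np

# Store two orthogonal bipolar patterns in a small Hopfield network.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])

# Hebbian learning: sum of outer products, with self-connections removed.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, steps=20):
    # synchronous sign updates until the state stops changing
    state = state.copy()
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

noisy = patterns[0].copy()
noisy[0] *= -1                       # contaminate one bit
print(recall(noisy))                 # converges back to the stored pattern
```

With only two orthogonal patterns over eight neurons, the corrupted pattern is recovered in a single update step; the convergence-time and error analyses in the deck study exactly this dynamic at larger scales.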
PhD Thesis Defense Presentation: Robust Low-rank and Sparse Decomposition for..., by ActiveEon
Thesis submitted by Andrews Cordolino Sobral at Université de La Rochelle to fulfill the degree of Doctor of Philosophy.
Robust Low-rank and Sparse Decomposition for Moving Object Detection - From Matrices to Tensors
Matthias Feys (ML6) – Bias in ML: A Technical Intro, by Codiax
This document discusses bias in machine learning. It defines different types of bias, including sample bias, prejudicial bias, exclusion bias, and measurement bias. It also discusses various definitions of fairness, including unawareness, group fairness, and individual fairness. The document outlines group fairness metrics like demographic parity, equal opportunity, and equalized odds. It describes causes of bias and algorithms to mitigate bias, such as reducing bias in data, classifiers, and predictions. Overall, the document provides an overview of bias in ML, definitions of fairness, sources of bias, and methods for debiasing models.
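Two of the group-fairness metrics named above are straightforward to compute. The toy sketch below uses hypothetical predictions of our own making: demographic parity compares positive-prediction rates across groups, while equal opportunity compares true-positive rates.

```python
import numpy as np

group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute (toy data)
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])   # actual outcomes
y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 0])   # model decisions

def positive_rate(pred, mask):
    # demographic parity looks at P(prediction = 1) per group
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    # equal opportunity looks at P(prediction = 1 | outcome = 1) per group
    pos = mask & (true == 1)
    return pred[pos].mean()

dp_gap = abs(positive_rate(y_pred, group == 0) - positive_rate(y_pred, group == 1))
eo_gap = abs(true_positive_rate(y_true, y_pred, group == 0)
             - true_positive_rate(y_true, y_pred, group == 1))
print(dp_gap, eo_gap)
```

Equalized odds extends the second check by also comparing false-positive rates across the groups.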
GTC 2021: Counterfactual Learning to Rank in E-commerce, by GrubhubTech
Many ecommerce companies have extensive logs of user behavior such as clicks and conversions. However, if supervised learning is naively applied, then systems can suffer from poor performance due to bias and feedback loops. Using techniques from counterfactual learning we can leverage log data in a principled manner in order to model user behaviour and build personalized recommender systems. At Grubhub, a user journey begins with recommendations and the vast majority of conversions are powered by recommendations. Our recommender policies can drive user behavior to increase orders and/or profit. Accordingly, the ability to rapidly iterate and experiment is very important. Because of our powerful GPU workflows, we can iterate 200% more rapidly than with counterpart CPU workflows. Developers iterate ideas with notebooks powered by GPUs. Hyperparameter spaces are explored up to 8x faster with multi-GPUs Ray clusters. Solutions are shipped from notebooks to production in half the time with nbdev. With our accelerated DS workflows and Deep Learning on GPUs, we were able to deliver a +12.6% conversion boost in just a few months. In this talk we hope to present modern techniques for industrial recommender systems powered by GPU workflows. First a small background on counterfactual learning techniques, then followed by practical information and data from our industrial application.
By Alex Egg, accepted to Nvidia GTC 2021 Conference
[Video recording available at https://www.youtube.com/playlist?list=PLewjn-vrZ7d3x0M4Uu_57oaJPRXkiS221]
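The counterfactual-learning idea in the abstract can be illustrated with inverse propensity scoring (IPS), a standard off-policy estimator. This is a generic sketch on synthetic data, not Grubhub's system: each logged click is reweighted by the ratio of the new policy's display probability to the logging policy's, which corrects for the bias the old policy baked into the logs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_logs = 3, 100_000

logging_probs = np.array([0.6, 0.3, 0.1])   # old policy's display probabilities
true_ctr      = np.array([0.1, 0.2, 0.5])   # (unknown) click-through rates
target_probs  = np.array([0.1, 0.3, 0.6])   # new policy we want to evaluate

# simulate logs collected under the old policy
shown  = rng.choice(n_items, size=n_logs, p=logging_probs)
clicks = rng.random(n_logs) < true_ctr[shown]

# IPS: reweight each logged interaction by target_prob / logging_prob
weights = target_probs[shown] / logging_probs[shown]
ips_estimate = np.mean(weights * clicks)

true_value = np.sum(target_probs * true_ctr)   # ground truth for comparison
print(ips_estimate, true_value)
```

Naively averaging the logged clicks would reflect the old policy's choices; the reweighted estimate lands close to the new policy's true click rate without ever deploying it.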
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust and adoption of AI systems in high stakes domains requiring reliability and safety such as healthcare and automated transportation, and critical industrial applications with significant economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations / laws, and techniques / tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, wherein we present practical challenges / guidelines for effectively using explainability techniques and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search & recommendation systems, hiring, sales, and lending. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining / machine learning community.
Towards the next generation of interactive and adaptive explanation methods, by Katrien Verbert
This document summarizes a presentation given by Katrien Verbert on explainable artificial intelligence and interactive explanation methods. It discusses Verbert's research group at KU Leuven which focuses on areas like recommender systems, visualization, and intelligent user interfaces. The presentation provides an overview of explainable AI, discussing objectives like explaining model outcomes to increase trust and allowing user interaction with explanations. It describes various recommendation techniques and presents examples of explainable recommendation systems. The presentation discusses how personal user characteristics can impact the effects of explanations and outlines related user studies. Finally, it summarizes several of Verbert's application areas for explainable AI like education, analytics, agriculture, and healthcare, touching on methodologies and results.
Human-centered AI: how can we support end-users to interact with AI?, by Katrien Verbert
Human-centered AI: how can we support lay users to understand AI?, by Katrien Verbert
The document summarizes research on human-centered AI and how to support lay users in understanding AI. It discusses various research projects that aim to explain model outcomes to increase user trust and acceptance. It explores how personal characteristics like need for cognition can impact the effectiveness of explanations. The research also looks at different application domains for AI like healthcare, education, agriculture and recommendations. It emphasizes the importance of user involvement, personalization and domain expertise in developing AI systems that non-experts can understand and trust.
This document summarizes a presentation given by Katrien Verbert on interactive recommender systems. It provides an overview of Verbert's research group at KU Leuven, which focuses on recommender systems, visualization, and intelligent user interfaces. The presentation describes various techniques for building interactive recommender systems, including explaining recommendations to users, enabling user interaction with the recommendation process, and addressing challenges like diversity, cold starts, and context awareness. It also summarizes several studies conducted by Verbert and collaborators on interactive music and research talk recommender systems.
Explaining recommendations: design implications and lessons learned, by Katrien Verbert
The document discusses designing explainable recommender systems, outlining the Augment/HCI research group's work on explaining recommendations to increase user trust, enable interaction with recommendation processes, and providing various application domains for explainable recommender systems including learning analytics, jobs, nutrition, and healthcare. It also discusses challenges and lessons learned in explaining recommendations to users.
Interactive recommender systems: opening up the “black box”, by Katrien Verbert
This document summarizes a presentation given by Katrien Verbert on interactive recommender systems. It discusses how recommender systems are typically "black boxes" that do not explain their recommendations to users. The presentation aims to open up this black box by exploring ways to increase transparency, user control, and interaction with recommender systems. Examples of interactive recommender systems that allow users to explore the recommendation process and provide explanations are described. Research on developing and evaluating such interactive systems through multiple user studies is summarized. The objective is to enhance user trust and engagement with recommender systems.
This document summarizes Katrien Verbert's presentation on interactive recommender systems. The presentation covered several topics:
1) Different types of recommendation techniques including collaborative filtering, content-based filtering, and knowledge-based filtering.
2) Research on interactive recommender systems that aim to increase transparency, user control, and diversity of recommendations.
3) Several user studies conducted on interactive recommender systems that explored talks and conferences, finding that explanations and various levels of user control can impact user experience.
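The first technique in the list, collaborative filtering, can be sketched in a few lines. The ratings below are hypothetical and purely illustrative: unseen items are scored for a user from the ratings of users with similar taste.

```python
import numpy as np

ratings = np.array([          # rows = users, columns = items, 0 = unrated
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

target = 0
sims = np.array([cosine(ratings[target], ratings[u]) for u in range(len(ratings))])
sims[target] = 0.0            # ignore self-similarity

# predict unrated items as similarity-weighted averages of other users' ratings
preds = {}
for item in np.where(ratings[target] == 0)[0]:
    mask = ratings[:, item] > 0
    preds[item] = sims[mask] @ ratings[mask, item] / sims[mask].sum()
    print(f"item {item}: predicted {preds[item]:.2f}")
```

Content-based filtering would instead compare item feature vectors, and knowledge-based filtering would match items against explicit user requirements; the user studies in the talk layer transparency and control on top of such predictors.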
This document summarizes Katrien Verbert's talk on mixed-initiative recommender systems at the 12th RecSysNL meetup. It discusses how recommender systems can increase user trust and acceptance by explaining recommendations and enabling user interaction with the recommendation process. Examples of Verbert's research include systems like TasteWeights and IntersectionExplorer that provide transparency, user control, and support for exploration in recommender interfaces. Verbert's work also examines how personal characteristics affect user experience with different types and levels of recommender system controllability.
Mixed-initiative recommender systems: towards a next generation of recommende..., by Katrien Verbert
This document summarizes Katrien Verbert's research experience and interests. It outlines her positions at KU Leuven from 2003 to present, where she has focused on recommender systems, visualization, and learning analytics. Her work aims to make recommendations more understandable and give users more control over the recommendation process. Key projects include TalkExplorer, which visualized recommendations from multiple perspectives, and IntersectionExplorer, which used a Venn diagram to show item relevance across user tags and recommender agents. User studies on these systems found that allowing exploration of item intersections increased effectiveness and user satisfaction with recommendations. The document also provides an overview of Verbert's research topics from 2012 to 2018, which span learning analytics, media consumption, research information systems
This document summarizes Katrien Verbert's research into mixed-initiative recommender systems. It discusses her work on explaining recommendations to increase user trust and enabling user interaction with recommendation processes. Examples of projects include TasteWeights, a visual interactive hybrid recommender, and IntersectionExplorer, which allows users to explore recommendations from multiple perspectives. The document also outlines Verbert's studies on different aspects of interactive recommender systems like transparency, user control, and personalization.
Workshop on Designing Human-Centric MIR Systems, by epsilon_tud
The document discusses research on improving music recommendation systems through increased user control, visualizations, and understanding how personal characteristics impact user perceptions. Three experiments were conducted:
1) Experiment 1 examined how user controls alone impacted recommendation acceptance, finding acceptance correlated with users' musical sophistication.
2) Experiment 2 studied the effect of different visualizations on perceived diversity, finding visual memory and musical sophistication interacted with one visualization each.
3) Experiment 3 combined controls and visualizations. Musical sophistication correlated with increased acceptance and perceived diversity for one combined interface. Adding visualization to full user control increased perceived diversity.
The research aims to better understand how personal traits influence the effectiveness of diversity-aware and controllable music recommender systems.
This document discusses the effects of personal characteristics when explaining music recommendations. It presents research questions about how personal characteristics impact user perception and interaction with a music recommender system when explanations are provided. The study design involves measuring participants' personal traits and having them use a music recommender interface that does or does not provide explanations. Results indicate explanations have a significant impact on perception and interaction, and qualitative analysis reveals differences based on traits like need for cognition. Guidelines are proposed based on traits and explanation type.
Human-centered AI: how can we support end-users to interact with AI?, by Katrien Verbert
The document summarizes research on developing human-centered AI systems that support end-users. It discusses explaining model outcomes to increase trust, enabling user interaction with explanations, and identifying what end-users need through evaluations. Strategies discussed include data-centric explanations, in-situ decision support, explaining model behavior, and addressing different needs in high-stakes domains like healthcare. The goal is developing explanations and interfaces tailored to non-experts through user-centered design and evaluations.
User Control in AIED (Artificial Intelligence in Education), by Peter Brusilovsky
This document summarizes research on improving user control and personalization in artificial intelligence for education (AIED) systems. It discusses several AIED systems that provide adaptive navigation support and annotation based on user models while allowing user control over sequencing and navigation. Evaluation of these systems found they can reduce effort, encourage exploration, and increase learning outcomes when users are able to follow or override advice. The document also presents approaches that improve transparency and control through open learner models, controllable ranking, visualization of recommendation models, and balancing adaptation with user exploration.
The FACT platform is an open, federated AI system that evaluates news streams, assigns trust ratings to content and sources, and adjusts these ratings over time based on new stories. It includes memory and intelligence engines to generate narratives, produce counterfactuals, and rate the trustworthiness of articles. FACT is a distributed platform that federates through self-organization and novel human-AI interaction design. Its target audiences are citizens, journalists, and civic writers. The first year goals are to develop the FACT platform, run experiments with 500+ citizens, and launch a FACT reporting channel. The core team developing FACT has expertise in AI, computational modeling, and evaluating digital platforms and algorithms.
This presentation contains the project idea along with project diagrams and an explanation of the methodology. The project can be applied in different sectors, e.g., in industry, prediction analysis, trend analysis, and sales & profit calculations.
Leveraging Graph Neural Networks for User Profiling: Recent Advances and Open..., by Erasmo Purificato
Slide of the tutorial entitled "Leveraging Graph Neural Networks for User Profiling: Recent Advances and Open Challenges" held at CIKM'23: 32nd ACM International Conference on Information and Knowledge Management (October 21, 2023 | Birmingham, United Kingdom)
A Framework for Analysing, Designing and Evaluating Persuasive Technologies.pdf, by Kayla Smith
This document is the thesis submitted by Isaac Wiafe to the University of Reading for the degree of Doctor of Philosophy. It presents a framework called the Unified Framework for Analysing, Designing and Evaluating persuasive technology (U-FADE). The framework expands on the Persuasive Systems Design model to provide steps for developing persuasive technology applications. It incorporates the 3-Dimensional Relationship between Attitude and Behaviour model, which analyzes the levels of cognitive dissonance of users to identify their state and craft persuasive messages. The thesis was validated through a case study demonstrating the U-FADE and 3D-RAB models are effective for persuasive technology design.
Mediated participatory design for contextually aware in vehicle experiences, by Stavros Tasoudis
Automotive UI 2016, 8th international conference in automotive user interfaces and vehicular applications, work in progress presentation of Stavros Tasoudis.
This document summarizes Katrien Verbert's presentation on designing learning analytics dashboards. Some key points include:
1) Verbert discussed lessons learned from designing dashboards and the importance of involving end-users to create interfaces tailored to their needs.
2) Important challenges in dashboard design are providing actionable feedback rather than just warnings, and balancing personalization with simplification.
3) Dashboards should be evaluated for how they support the advisor-student dialogue and whether they contribute to understanding a student's path from effort to outcomes. Explaining recommendations and visualizing participation can increase trust and awareness.
Similar to Human-centered AI: towards the next generation of interactive and adaptive explanation methods (20)
The document discusses explainable AI (XAI) methods. It defines XAI both narrowly as techniques that explain model decisions and broadly as anything that increases AI understandability. The document outlines intrinsically interpretable and post-hoc explanation methods like LIME and SHAP that explain complex models. It emphasizes the importance of explanations being actionable, contextualized and developed with stakeholder input. The document presents examples of XAI dashboards and concludes with recommendations to involve end-users and provide personalized, simplified explanations.
Explaining job recommendations: a human-centred perspective, by Katrien Verbert
This document summarizes Katrien Verbert's presentation on explaining job recommendations from a human-centered perspective. The presentation discusses (1) the need to explain job recommendation models to increase user trust and acceptance, (2) using explanation methods like visualizations to enable user interaction with explanations and improve models, and (3) designing explanations of a job recommendation system to increase user empowerment, clarify recommendations, and support job mediators. The research aims to balance explanation, exploration, and actionable insights when interacting with recommender systems.
This document discusses using augmented reality and recommendation techniques to promote healthier food choices. It proposes combining recommendation systems, visualization of nutritional information, and augmented reality to support decision making at grocery stores. A user study tested an augmented reality application called PHARA that provided nutritional information and recommendations for food products on a HoloLens and smartphone. Results found the application led users to select healthier options over time and that interface layouts tailored for each device led to better performance and user experience. The work aims to eventually influence long-term healthy shopping behavior through such in-situ recommendations and motivational design.
Explaining and Exploring Job Recommendations: a User-driven Approach for Inte..., by Katrien Verbert
This document describes a user-driven approach for interacting with a knowledge-based job recommender system called the Labor Market Explorer. The Explorer was designed based on a user-centered process to provide job seekers with explanations of recommendations, exploration and control over diverse recommendations, and actionable insights. An evaluation with 66 job seekers found that the Explorer effectively empowered users to explore recommendations. Personal characteristics like age and background impacted how users interacted with the interface. The design process and key features of the Explorer could inform future job recommendation systems.
Interactive recommender systems and dashboards for learning, by Katrien Verbert
The document summarizes Katrien Verbert's research interests which include interactive recommender systems, learning analytics dashboards, and intelligent user interfaces. Some key points:
- Her team at KU Leuven studies how to visualize learner data to help students explore connections between effort and outcomes.
- Their research also looks at designing dashboards to promote balanced participation in classroom discussions and support advisor-student dialogues.
- Interactive recommender systems that allow users to provide feedback and explore recommendations are another focus to improve recommendations and increase user trust.
- Future work may explore applying these areas to reskilling employees and using augmented/virtual reality in education.
- HTML pages use tags to describe the structure and semantic meaning of content. Tags come in pairs, with opening and closing tags.
- Common tags include headings, paragraphs, lists, links, images, and tables. Elements like <strong> and <em> provide semantic information about emphasized or important text.
- Learning HTML involves understanding the purpose of each tag and how to structure content using the appropriate tags. Tags help convey meaning to users and search engines.
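The paired-tag idea above can be made concrete with Python's built-in html.parser (the toy page below is our own example, not from the document): the parser reports each opening and closing tag, so we can check that every tag that opens is later closed.

```python
from html.parser import HTMLParser

page = """
<h1>Trip report</h1>
<p>The museum was <strong>free</strong> on <em>Sunday</em>.</p>
<ul><li>paintings</li><li>sculptures</li></ul>
"""

class TagPairParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.events = []                      # (kind, tag) in document order
    def handle_starttag(self, tag, attrs):
        self.events.append(("open", tag))
    def handle_endtag(self, tag):
        self.events.append(("close", tag))

parser = TagPairParser()
parser.feed(page)
opened = [t for kind, t in parser.events if kind == "open"]
closed = [t for kind, t in parser.events if kind == "close"]
print("opened:", opened)
print("every open tag has a matching close:", sorted(opened) == sorted(closed))
```

The same event stream is what browsers and search engines walk to recover the structure and semantics the tags describe.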
Information Visualisation: perception and principles, by Katrien Verbert
This document discusses principles of information visualization and perception. It covers topics like moving illusions, pre-attentive processing, magnitude estimation, Gestalt grouping principles, color perception, and guidelines for effective use of color in visualization. Examples are provided to illustrate concepts like encoding methods, perceptual scaling, simultaneous contrast effects, and chromostereopsis. Readings on color selection and information visualization are recommended.
Agents vs Users: Visual Recommendation of Research Talks with Multiple Dimens..., by Katrien Verbert
Published in ACM TiiS: Verbert, K., Parra, D., & Brusilovsky, P. (2016). Agents Vs. Users: Visual Recommendation of Research Talks with Multiple Dimension of Relevance. ACM Transactions on Interactive Intelligent Systems (TiiS), 6(2), 11.
Presented at IUI 2017
Scalable Exploration of Relevance Prospects to Support Decision Making, by Katrien Verbert
Presented at IntRS 2016 - Interfaces and Human Decision Making for Recommender Systems, workshop at RecSys 2016
Citation: Verbert, K., Seipp, K., He, C., Parra, D., Wongchokprasitti, C., & Brusilovsky, P. (2016). Scalable Exploration of Relevance Prospects to Support Decision Making. Proceedings of the Joint Workshop on Interfaces and Human Decision Making for Recommender Systems co-located with ACM Conference on Recommender Systems (RecSys 2016), Boston, MA, USA, September 16, 2016.
The document summarizes details about the EC-TEL 2016 conference on adaptive and adaptable learning that took place in Lyon, France from September 13-16, 2016. It provides information on the chairs, sponsors, submissions received, acceptance rates, program, and social events of the conference. 148 papers were submitted from authors in over 30 countries, with acceptance rates of around 25% for full papers. The program included keynotes on adaptivity in learning technologies and educational robots, as well as a panel on artificial intelligence in education. Social events included a welcome ceremony, soccer tournament, and guided cruise on the River Saône.
This document discusses open science in the digital humanities. It defines an open scholar as someone who makes their intellectual projects and processes digitally visible and invites ongoing criticism and secondary uses of their work. It also discusses open content, learning, analytics, accreditation and data. Ensuring open culture involves using creative commons licenses without commercial restrictions, making data independent of interfaces, educating academics, engaging the public, and making open culture sustainable.
This document provides an overview of visual analytics as presented in a lecture. It discusses the motivation and goals of visual analytics, which aims to facilitate analytical reasoning through interactive visual interfaces. This is done by combining automated analysis techniques with interactive visualizations. The document outlines the history and development of visual analytics as a field, provides examples of challenges and applications, and discusses key aspects of the visual analytics process such as linking multiple views, temporal views, and labeling.
ARENA - Young adults in the workplace (Knight Moves).pdfKnight Moves
Presentations of Bavo Raeymaekers (Project lead youth unemployment at the City of Antwerp), Suzan Martens (Service designer at Knight Moves) and Adriaan De Keersmaeker (Community manager at Talk to C)
during the 'Arena • Young adults in the workplace' conference hosted by Knight Moves.
Explore the essential graphic design tools and software that can elevate your creative projects. Discover industry favorites and innovative solutions for stunning design results.
International Upcycling Research Network advisory board meeting 4Kyungeun Sung
Slides used for the International Upcycling Research Network advisory board 4 (last one). The project is based at De Montfort University in Leicester, UK, and funded by the Arts and Humanities Research Council.
Human-centered AI: towards the next generation of interactive and adaptive explanation methods
1. Human-centered AI: towards the next generation
of interactive and adaptive explanation methods
IHM 2022 – 8 April 2022
Katrien Verbert
Augment/HCI – Department of Computer Science - KU Leuven
@katrien_v
2. Human-Computer Interaction group
Explainable AI – recommender systems – visualization – intelligent user interfaces
Application domains: learning analytics & human resources, media consumption, precision agriculture, healthcare
Research groups:
¤ Augment – Katrien Verbert
¤ ARIA – Adalberto Simeone
¤ Computer Graphics – Phil Dutré
¤ LIIR – Sien Moens
¤ E-media – Vero Vanden Abeele, Luc Geurts, Kathrin Gerling
3. Augment/HCI team
¤ Katrien Verbert – Associate Professor
¤ Robin De Croon – Postdoc researcher
¤ Francisco Gutiérrez – Postdoc researcher
¤ Nyi Nyi Htun – Postdoc researcher
¤ Oscar Alvarado – Postdoc researcher
¤ Tom Broos – PhD researcher
¤ Houda Lamqaddam – PhD researcher
¤ Diego Rojo García – PhD researcher
¤ Maxwell Szymanski – PhD researcher
¤ Arno Vanneste – PhD researcher
¤ Jeroen Ooge – PhD researcher
¤ Aditya Bhattacharya – PhD researcher
¤ Ivania Donoso Guzmán – PhD researcher
https://augment.cs.kuleuven.be/
4. Explainable Artificial Intelligence (XAI)
“Given an audience, an explainable artificial
intelligence is one that produces details or reasons
to make its functioning clear or easy to understand.”
[Arr20]
[Arr20] Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ... & Herrera, F. (2020). Explainable Artificial
Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information fusion, 58, 82-115.
5. Research objectives
¤ Explaining model outcomes to increase user trust and acceptance
¤ Enabling users to interact with the explanation process to improve the model
12. Explanations
Millecamp, M., Htun, N. N., Conati, C., & Verbert, K. (2019, March). To explain or not to explain: the effects of personal characteristics when explaining music recommendations. In Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 397-407). ACM.
13. Personal characteristics
Need for cognition
• Measurement of the tendency of an individual to engage in, and enjoy, effortful cognitive activities
• Measured by the test of Cacioppo et al. [1984]
Visualisation literacy
• Measurement of the ability to interpret and make meaning from information presented in the form of images and graphs
• Measured by the test of Boy et al. [2014]
Locus of control (LOC)
• Measurement of the extent to which people believe they have power over events in their lives
• Measured by the test of Rotter [1966]
Visual working memory
• Measurement of the ability to recall visual patterns [Tintarev and Masthoff, 2016]
• Measured by the Corsi block-tapping test
Musical experience
• Measurement of the ability to engage with music in a flexible, effective and nuanced way [Müllensiefen et al., 2014]
• Measured using the Goldsmiths Musical Sophistication Index (Gold-MSI)
Tech savviness
• Measured by confidence in trying out new technology
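Most of these characteristics are scored from Likert-type questionnaires. As a hypothetical illustration (not the actual scoring rules of the Cacioppo et al. test or the Gold-MSI), a mean score with reverse-coded items could be computed like this:

```python
def likert_score(responses, reverse_items=(), scale_max=5):
    """Mean questionnaire score on a 1..scale_max Likert scale.

    Hypothetical helper: the real instruments define their own items
    and scoring rules. reverse_items holds the 0-based indices of
    reverse-coded items.
    """
    total = 0.0
    for i, r in enumerate(responses):
        # Reverse-coded items are flipped: 1 <-> scale_max, 2 <-> scale_max-1, ...
        total += (scale_max + 1 - r) if i in reverse_items else r
    return total / len(responses)

print(likert_score([4, 2, 5], reverse_items={1}))
```

Reverse-coded items are common in such instruments to detect inattentive responding, which is why the sketch handles them explicitly.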
14. User study
¤ Within-subjects design: 105 participants recruited with Amazon Mechanical Turk
¤ Baseline version (without explanations) compared with explanation interface
¤ Pre-study questionnaire for all personal characteristics
¤ Task: based on a chosen scenario for creating a playlist, explore songs and rate all songs in the final playlist
¤ Post-study questionnaire:
¤ Recommender effectiveness
¤ Trust
¤ Good understanding
¤ Use intentions
¤ Novelty
¤ Satisfaction
¤ Confidence
16. Design implications
¤ Explanations should be personalised for different groups of
end-users.
¤ Users should be able to choose whether or not they want to
see explanations.
¤ Explanation components should be flexible enough to present
varying levels of details depending on a user’s preference.
17. User control
Users tend to be more satisfied when they have control over how recommender systems produce suggestions for them.
¤ Control recommendations: Douban FM
¤ Control user profile: Spotify
¤ Control algorithm parameters: TasteWeights
18. Controllability vs. cognitive load
Additional controls may increase cognitive load (Andjelkovic et al. 2016)
Ivana Andjelkovic, Denis Parra, and John O'Donovan. 2016. Moodplay: Interactive mood-based music discovery and recommendation. In Proc. of UMAP'16. ACM, 275–279.
19. Different levels of user control
Level | Recommender components | Controls
low | Recommendations (REC) | rating, removing, and sorting
medium | User profile (PRO) | select which user profile data will be considered by the recommender
high | Algorithm parameters (PAR) | modify the weight of different parameters
Jin, Y., Tintarev, N., & Verbert, K. (2018, September). Effects of personal characteristics on music
recommender systems with different levels of controllability. In Proceedings of the 12th ACM Conference
on Recommender Systems (pp. 13-21). ACM.
20. 8 control settings: combinations of user profile (PRO), algorithm parameters (PAR), and recommendations (REC)
¤ No control
¤ REC
¤ PAR
¤ PRO
¤ REC*PRO
¤ REC*PAR
¤ PRO*PAR
¤ REC*PRO*PAR
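The eight settings are simply the cells of the 2x2x2 factorial design over the three control components. A small sketch (hypothetical code, not from the study materials) can enumerate them:

```python
from itertools import product

# The three control components from the study design.
COMPONENTS = ["REC", "PRO", "PAR"]

def control_settings():
    """Enumerate all 2x2x2 = 8 on/off combinations of control components."""
    settings = []
    for flags in product([False, True], repeat=len(COMPONENTS)):
        active = [c for c, on in zip(COMPONENTS, flags) if on]
        settings.append("*".join(active) if active else "No control")
    return settings

print(control_settings())
```

Each participant in a between-subjects design sees exactly one of these eight settings.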
21. Evaluation method
¤ Between-subjects design – 240 participants recruited with Amazon Mechanical Turk
¤ Independent variable: setting of user control (2x2x2 factorial design)
¤ Dependent variables:
¤ Acceptance (ratings)
¤ Cognitive load (NASA-TLX), musical sophistication, visual memory
¤ Evaluation framework of Knijnenburg et al. [2012]
22. Results
¤ Main effects: from REC to PRO to PAR → higher cognitive load
¤ Two-way interactions do not necessarily result in higher cognitive load: adding an additional control component to PAR increases acceptance, and PRO*PAR has less cognitive load than PRO and PAR
¤ High musical sophistication leads to higher perceived quality and thereby results in higher acceptance
26. Explaining exercise recommendations
Goals and research questions:
¤ Automatic adaptation: how can the exercise recommendations on Wiski be automatically adapted to the level of students?
¤ Explanations & trust: how do (placebo) explanations affect initial trust in Wiski for recommending exercises?
¤ Young target audience: middle and high school students
Ooge, J., Kato, S., & Verbert, K. (2022). Explaining recommendations in e-learning: effects on adolescents' initial trust. In Proceedings of the 27th International Conference on Intelligent User Interfaces (IUI 2022).
27. User-centred design of explanations: 3 iterations & think-alouds
Iterations: (1) tutorial for full transparency, (2) single-screen explanation, (3) final explanation interface
29. Results: Real explanations…
… did increase multidimensional initial trust
… did not increase one-dimensional initial trust
… led to accepting more recommended exercises
compared to both placebo and no explanations
30. Results: No explanations
Can be acceptable in low-stakes situations (e.g., drilling exercises): indications of difficulty level might suffice
Personal level indication: Easy, Medium and Hard tags
32. LADA: a learning analytics dashboard for study advisors
Gutiérrez Hernández F., Seipp K., Ochoa X., Chiluiza K., De Laet T., Verbert K. (2018). LADA: A learning analytics dashboard for academic advising. Computers in Human Behavior, pp. 1-13. doi: 10.1016/j.chb.2018.12.004
34. Results
¤ LADA was perceived as a valuable tool for more accurate and efficient decision making.
¤ LADA enables expert advisors to evaluate significantly more scenarios.
¤ More transparency in the prediction model is required in order to increase trust.
37. AHMoSe
Rojo, D., Htun, N. N., Parra, D., De Croon, R., & Verbert, K. (2021). AHMoSe: A knowledge-based visual
support system for selecting regression machine learning models. Computers and Electronics in
Agriculture, 187, 106183.
39. Case Study – Grape Quality Prediction
¤ Grape quality prediction scenario [Tag14]
¤ Data: years 2010–2011 (train), 2012 (test); 48 cells (Central Greece)
¤ Knowledge-based rules
[Tag14] Tagarakis, A., et al. "A fuzzy inference system to model grape quality in vineyards." Precision Agriculture 15.5 (2014): 555-578.
40. Simulation Study
¤ AHMoSe vs. a full AutoML approach to support model selection.
Scenario | RMSE (AutoML) | RMSE (AHMoSe) | Difference
Scenario A (complete knowledge) | 0.430 | 0.403 | ▼ 6.3%
Scenario B (incomplete knowledge) | 0.458 | 0.385 | ▼ 16.0%
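The difference column is the relative RMSE reduction of AHMoSe with respect to the AutoML baseline. A quick check with the values from the table (a hypothetical helper, not code from the paper):

```python
def rmse_reduction_pct(rmse_baseline, rmse_new):
    """Relative RMSE reduction of a new model vs. a baseline, in percent."""
    return 100.0 * (rmse_baseline - rmse_new) / rmse_baseline

# Values from the table: AutoML as baseline, AHMoSe as the new approach.
print(round(rmse_reduction_pct(0.430, 0.403), 1))  # Scenario A
print(round(rmse_reduction_pct(0.458, 0.385), 1))  # Scenario B
```

Scenario A comes out at 6.3%; Scenario B comes out at about 15.9%, consistent with the table's ▼ 16.0% up to rounding.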
41. Qualitative Evaluation
¤ 10 open-ended questions
¤ 5 viticulture experts and 4 ML experts
¤ Thematic analysis: potential use cases, trust, usability, and understandability
42. Qualitative Evaluation – Trust
¤ Showing the dis/agreement of model outputs with experts' knowledge can promote trust.
“The thing that makes us trust the models is the fact that most of the
time, there is a good agreement between the values predicted by the
model and the ones obtained for the knowledge of the experts.”
– Viticulture Expert
45. Predicting duration to find a job
Key issues: missing data, prediction trust issues, job seeker motivation, lack of control.
46. Methods
¤ Customer journey approach (5 mediators)
¤ Hands-on time with the original dashboard (22 mediators)
¤ Observations of mediation sessions (3 mediators, 6 job seekers)
¤ Questionnaire regarding perception of the dashboard and prediction model (15 mediators)
Charleer S., Gutiérrez Hernández F., Verbert K. (2019). Supporting job mediator and job seeker through an actionable dashboard. In: Proceedings of the 24th IUI conference on Intelligent User Interfaces. Presented at ACM IUI 2019, Los Angeles, USA.
48. Take-away messages
¤ Key difference between actionable and non-actionable parameters
¤ Need for customization and contextualization
¤ The human expert plays a crucial role when interpreting and relaying the predicted or recommended output
55. Design and evaluation
Gutiérrez F., Cardoso B., Verbert K. (2017). PHARA: a personal health augmented reality assistant to
support decision-making at grocery stores. In: Proceedings of the International Workshop on Health
Recommender Systems co-located with ACM RecSys 2017 (Paper No. 4) (10-13).
56. Results
¤ PHARA allows users to make informed decisions, and resulted in selecting healthier food products.
¤ The stack layout performs better with HMD devices with a limited field of view, like the HoloLens, at the cost of some affordances.
¤ The grid and pie layouts performed better on handheld devices, allowing users to explore with more confidence, more enjoyment and less effort.
Gutiérrez Hernández F., Htun NN., Charleer S., De Croon R., Verbert K. (2018). Designing
augmented reality applications for personal health decision-making. In: Proceedings of the 2019
52nd Hawaii International Conference on System Sciences Presented at the HICSS, Hawaii, 07
Jan 2019-11 Jan 2019.
57. Ongoing work: PERNUG
¤ Increased access to more nutritious plants
¤ Improved iron and B12 intakes for vegan and vegetarian subgroups
Components: a hydroponic system with biofortified plants (biofortification info, plants to cultivate) and a consumer app with recipe recommendations
https://www.eitfood.eu/projects/pernug
62. Explaining call recommendations in nursing homes
Gutiérrez Hernández, F.S., Htun, N.N., Vanden Abeele, V., De Croon, R., Verbert, K.
(2022). Explaining call recommendations in nursing homes: a user-centered design
approach for interacting with knowledge-based health decision support systems.
Proceedings of IUI 2022.
63. Evaluation
¤ 12 nurses used the app for three months
¤ Data collection: interaction logs, ResQue questionnaire, semi-structured interviews
65. Results
¤ The iterative design process identified several important features, such as the pending list, the overview, and the feedback shortcut to encourage feedback.
¤ Explanations seem to contribute to better support for healthcare professionals.
¤ Results indicate a better understanding of the call notifications through being able to see the reasons for the calls.
¤ More trust in the recommendations and increased perceptions of transparency and control.
¤ Interaction patterns indicate that users engaged well with the interface, although some users did not use all features to interact with the system.
¤ Need for further simplification and personalization.
68. Explaining health recommendations
¤ 6 different explanation designs
¤ Explain WHY users are given a certain recommendation for their (chronic) pain based on their inputs
Szymanski, M., Vanden Abeele, V., & Verbert, K. (2022). Explaining health recommendations to lay users: the dos and don'ts. APEx-UI workshop at IUI 2022.
74. Results
"Insight vs. information overload"
¤ Most users prefer more information (a holistic overview of inputs)
¤ However, some users experienced information overload
→ Future work: do personal characteristics such as need for cognition (NFC) influence this?
79. Take-away messages
¤ Involvement of end-users has been key to coming up with interfaces tailored to the needs of non-expert users
¤ Actionable vs. non-actionable parameters
¤ Domain expertise of users and need for cognition are important personal characteristics
¤ Need for personalisation and simplification
80. Collaborations
Peter Brusilovsky, Nava Tintarev, Cristina Conati, Denis Parra, Jürgen Ziegler, Gregor Stiglic