
ExpLIMEable: An Exploratory Framework for LIME

ExpLIMEable is a tool to enhance the comprehension of Local Interpretable Model-Agnostic Explanations (LIME), particularly within the realm of medical image analysis. LIME explanations often lack robustness due to variance in the perturbation techniques …
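
As a concrete illustration of that sensitivity (a hedged sketch, not the ExpLIMEable tool itself), the snippet below runs the standard `lime` image explainer twice with different perturbation settings; the toy image, the classifier, and all parameter values are illustrative assumptions.

```python
# Hedged sketch: why LIME image explanations depend on the perturbation setup.
# Requires `pip install lime scikit-image numpy`; everything below is a toy.
import numpy as np
from lime import lime_image
from skimage.segmentation import slic

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))  # stand-in for a medical image

def classifier_fn(batch):
    # Hypothetical two-class model whose score tracks mean brightness.
    p = batch.mean(axis=(1, 2, 3))
    return np.stack([1 - p, p], axis=1)

explainer = lime_image.LimeImageExplainer(random_state=0)
for num_samples, n_segments in [(200, 20), (2000, 60)]:
    explanation = explainer.explain_instance(
        image,
        classifier_fn,
        top_labels=1,
        num_samples=num_samples,  # size of the perturbation sample
        segmentation_fn=lambda img: slic(img, n_segments=n_segments),
    )
    label = explanation.top_labels[0]
    # Different perturbation settings can rank different superpixels as important.
    top = sorted(explanation.local_exp[label], key=lambda t: -abs(t[1]))[:3]
    print(f"samples={num_samples}, segments={n_segments} -> top superpixels: {top}")
```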

GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations

GInX-Eval is an evaluation procedure for in-distribution explanations of graph neural networks (GNNs), overcoming the limitations of faithfulness metrics.

A Diachronic Perspective on User Trust in AI under Uncertainty

In a human-AI collaboration, users build a mental model of the AI system based on its reliability and how it presents its decisions, e.g., its display of system confidence and its explanation of the output. Modern NLP systems are often …

Revealing the Unwritten: Visual Investigation of Beam Search Trees to Address Language Model Prompting Challenges

The growing popularity of generative language models has amplified interest in interactive methods to guide model outputs. Among these methods, prompt refinement is considered one of the most effective means of influencing the output. We identify several …
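
As a hedged sketch of the beam-search machinery such a tool visualizes (not the paper's interface itself), the snippet below surfaces the scored beams behind a single prompt via the Hugging Face transformers API; the model choice and prompt are placeholder assumptions.

```python
# Hedged sketch: inspecting the surviving beams behind one prompt with the
# Hugging Face transformers generate() API.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The prompt under investigation", return_tensors="pt")
out = model.generate(
    **inputs,
    num_beams=4,             # beam width, i.e. alternatives kept per step
    num_return_sequences=4,  # return every surviving beam, not just the best
    max_new_tokens=8,
    return_dict_in_generate=True,
    output_scores=True,      # keep scores so beams can be compared
)
for seq, score in zip(out.sequences, out.sequences_scores):
    print(f"{score:.3f}  {tokenizer.decode(seq, skip_special_tokens=True)}")
```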

Explore, Compare, and Predict Investment Opportunities through What-If Analysis: US Housing Market Investigation

A key challenge for data analysis tools in domain-specific applications with high-dimensional time-series data is to provide an intuitive way for users to explore their datasets, analyze trends, and, through human-computer interaction, understand the models developed for these applications.

RLHF-Blender: A Configurable Interactive Interface for Learning from Diverse Human Feedback

To use reinforcement learning from human feedback (RLHF) in practical applications, it is crucial to learn reward models from diverse sources of human feedback and to consider the human factors involved in providing feedback of different types. However, systematic study of learning from diverse types of feedback is held back by the limited standardized tooling available to researchers. To bridge this gap, we propose RLHF-Blender, a configurable, interactive interface for learning from human feedback.
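
As a minimal, hedged sketch of the learning step such an interface feeds (not RLHF-Blender's own implementation), the snippet below fits a reward model to pairwise human preferences with the standard Bradley-Terry objective; the architecture, features, and data are illustrative assumptions.

```python
# Hedged sketch: reward learning from pairwise preferences (Bradley-Terry).
# Network, feature dimension, and data below are toy placeholders.
import torch
import torch.nn as nn

reward_model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

traj_a = torch.randn(256, 8)  # features of segments the human preferred
traj_b = torch.randn(256, 8)  # features of the rejected alternatives

for _ in range(100):
    # P(a preferred over b) = sigmoid(r(a) - r(b)); maximize its log-likelihood.
    logits = reward_model(traj_a) - reward_model(traj_b)
    loss = nn.functional.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))
    opt.zero_grad()
    loss.backward()
    opt.step()
```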

xai-primer.com — A Visual Ideation Space of Interactive Explainers

A summary of the main functions of the XAI Primer. The design space is structured into three layers, namely the cluster layer (a), the item layer (b), and the network layer (c), which can be explored in an open-ended and serendipitous way. Guided tours …

Explaining Contextualization in Language Models using Visual Analytics

Despite the success of contextualized language models on various NLP tasks, it is still unclear what these models really learn. In this paper, we contribute to the current efforts of explaining such models by exploring the continuum between function …

Curating Publications as Artefacts—Exploring Machine Learning Research in an Interactive Virtual Museum

The need to innovate scientific publications is felt across various research fields. Over the last thirty years, the publishing process has accelerated, and new research comes out every day. In Machine Learning, some attempts have been made to …

Speculative Execution of Similarity Queries: Real-Time Parameter Optimization through Visual Exploration

The parameters of complex analytical models often have an unpredictable influence on the models’ results, rendering parameter tuning a non-intuitive task. By concurrently visualizing both the model and its results, visual analytics tackles this …