Prior to that, I was a research associate and doctoral student in the Data Analysis and Visualization group at the University of Konstanz (Germany) and in the Visualization for Information Analysis lab at Ontario Tech University (Canada).
Co-Organized Events
Workshop on Visualization as Added Value in the Development, Use, and Evaluation of Language Resources.
Manually investigating sheet music collections is challenging for music analysts due to the magnitude and complexity of the underlying features, structures, and contextual information. However, applying sophisticated algorithmic methods would require advanced technical expertise that analysts do not necessarily have. Bridging this gap, we contribute CorpusVis, an interactive visual workspace that enables scalable and multi-faceted analysis. Our proposed visual analytics dashboard provides access to computational methods, generating varying perspectives on the same data. The proposed application uses metadata, including composer, type, and epoch, as well as low-level features such as pitch, melody, and rhythm. To evaluate our approach, we conducted a pair-analytics study with nine participants. The qualitative results show that CorpusVis supports users in performing exploratory and confirmatory analysis, leading them to new insights and findings. In addition, based on three exemplary workflows, we demonstrate how to apply our approach to different tasks, such as exploring musical features or comparing composers.
Explainable AI aims to render model behavior understandable to humans, which can be seen as an intermediate step in extracting causal relations from correlative patterns. Because erroneous decisions in image-based clinical diagnostics carry a high risk of fatal consequences, it is necessary to integrate explainable AI into these safety-critical systems. Current explanatory methods typically assign attribution scores to pixel regions in the input image, indicating their importance for a model’s decision. However, they fall short when explaining why a visual feature is used. We propose a framework that utilizes interpretable disentangled representations for downstream-task prediction. By visualizing the disentangled representations, we enable experts to investigate possible causation effects by leveraging their domain knowledge. Additionally, we deploy a multi-path attribution mapping to enrich and validate explanations. We demonstrate the effectiveness of our approach on a synthetic benchmark suite and two medical datasets. We show that the framework not only acts as a catalyst for causal relation extraction but also enhances model robustness by enabling shortcut detection without the need for testing under distribution shifts.
Music analysis tasks, such as structure identification and modulation detection, are tedious when performed manually due to the complexity of the common music notation (CMN). Fully automated analysis, in turn, misses human intuition about relevance. Existing approaches use abstract data-driven visualizations to assist music analysis but lack a suitable connection to the CMN. Therefore, music analysts often prefer to remain in their familiar context. Our approach enhances the traditional analysis workflow by complementing CMN with interactive visualization entities as minimally intrusive augmentations. Gradual step-wise transitions empower analysts to retrace and comprehend the relationship between the CMN and abstract data representations. We leverage glyph-based visualizations for harmony, rhythm and melody to demonstrate our technique’s applicability. Design-driven visual query filters enable analysts to investigate statistical and semantic patterns on various abstraction levels. We conducted pair analytics sessions with 16 participants of different proficiency levels to gather qualitative feedback about the intuitiveness, traceability and understandability of our approach. The results show that MusicVis supports music analysts in gaining new insights about feature characteristics while increasing their engagement and willingness to explore.