Minions, Sheep, and Fruits: Metaphorical Narratives to Explain Artificial Intelligence and Build Trust


Advanced artificial intelligence models are used to solve complex real-world problems across many domains. While domain users bring expertise in their specific application problems, they often do not readily understand the underlying artificial intelligence models. This opacity results in low trust among domain experts, leading to hesitant and ineffective use of the models. We postulate that educating the domain experts is necessary to prevent such situations. We therefore propose the metaphorical narrative methodology to transitively conflate the mental models of the modeling and domain experts involved. Metaphorical narratives establish an uncontaminated, unambiguous vocabulary that simplifies and abstracts the complex models to explain their main concepts. Elevating the methodological understanding of domain experts builds trust and enables appropriate use of the models. To foster this methodological understanding, we follow the Visual Analytics paradigm, which is known to provide an effective interface between human and machine. We ground our proposed methodology in different application fields and theories, detail four successfully applied metaphorical narratives, and discuss important aspects, properties, and pitfalls.

Proc. of IEEE VIS Workshop on Visualization for AI Explainability (VISxAI)