Human Trust Modeling for Bias Mitigation in Artificial Intelligence

Abstract

Human-in-the-loop model-building processes are increasingly popular because they incorporate human intuition and domain knowledge that is not easily externalized. However, we argue that including the human, and in particular allowing direct model manipulations, carries a high risk of creating biased models and results. We present a new approach, "Human Trust Modeling", that lets systems model users' intentions and deduce whether they have understood the underlying modeling processes and are acting coherently. Using this trust model, systems can enable or disable, and encourage or discourage, interactions to mitigate bias.
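The abstract does not specify an implementation, but the gating idea can be illustrated with a minimal sketch: a trust score is updated from observed user actions and compared against per-interaction thresholds before a direct manipulation is allowed. All names, the smoothing scheme, and the threshold values below are hypothetical illustrations, not the authors' method.

```python
from dataclasses import dataclass, field

@dataclass
class TrustModel:
    # Running trust score in [0, 1]; starts neutral (hypothetical choice).
    score: float = 0.5
    history: list = field(default_factory=list)

    def observe(self, action_was_coherent: bool, weight: float = 0.1) -> None:
        """Update trust from one observed interaction via exponential smoothing."""
        target = 1.0 if action_was_coherent else 0.0
        self.score = (1 - weight) * self.score + weight * target
        self.history.append(action_was_coherent)

    def allows(self, interaction: str, thresholds: dict) -> bool:
        """Enable an interaction only once trust exceeds its threshold."""
        return self.score >= thresholds.get(interaction, 0.5)

# Usage: riskier direct manipulations require more demonstrated understanding.
thresholds = {"relabel_point": 0.3, "move_decision_boundary": 0.8}
trust = TrustModel()
for coherent in [True, True, False, True, True]:
    trust.observe(coherent)
print(trust.allows("move_decision_boundary", thresholds))  # False early in a session
```

A system built this way could still surface the disabled interaction in the interface, discouraging rather than hiding it, which matches the abstract's distinction between disabling and discouraging.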

Publication
Proc. of ACM CHI Workshop on Where is the Human? Bridging the Gap Between AI and HCI
