Human-in-the-loop model-building processes are increasingly popular because they incorporate human intuition and domain knowledge that is not easily externalized. However, we argue that including the human, and in particular allowing direct model manipulations, carries a high risk of producing biased models and results. We present a new approach, "Human Trust Modeling", that lets systems model users' intentions and deduce whether they have understood the underlying modeling process and are acting coherently. Using this trust model, systems can enable, disable, encourage, or discourage interactions to mitigate bias.
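As a minimal sketch of how such a trust model might gate interactions, consider the following Python snippet; the class name, thresholds, scoring rule, and the `consistent_with_model` signal are illustrative assumptions, not the method described in this work.

```python
# Hypothetical illustration of a trust model that gates user interactions.
# All names, thresholds, and the update rule are assumptions for this sketch.
from dataclasses import dataclass, field


@dataclass
class TrustModel:
    """Tracks how coherently a user's direct model manipulations match
    the system's expectation of an informed modeling process."""
    score: float = 0.5          # current trust estimate in [0, 1]
    learning_rate: float = 0.1  # how quickly new evidence moves the score
    history: list = field(default_factory=list)

    def observe(self, action: str, consistent_with_model: bool) -> None:
        """Update trust from one observed interaction.

        `consistent_with_model` stands in for whatever check the system
        uses to decide whether the manipulation fits the underlying data.
        """
        evidence = 1.0 if consistent_with_model else 0.0
        self.score += self.learning_rate * (evidence - self.score)
        self.history.append((action, consistent_with_model, self.score))

    def gate(self, interaction: str) -> str:
        """Map the trust score to an interaction policy."""
        if self.score >= 0.7:
            return f"enable '{interaction}'"
        if self.score >= 0.4:
            return f"enable '{interaction}' but discourage it (warn the user)"
        return f"disable '{interaction}' until trust recovers"


if __name__ == "__main__":
    trust = TrustModel()
    trust.observe("move cluster centroid", consistent_with_model=True)
    trust.observe("delete outlier group", consistent_with_model=False)
    print(trust.gate("direct model manipulation"))
```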