International Workshop on Computational Argumentation and Cognition
COGNITAR 2020 - Talks
Leveraging cognitive constraints for interpretable AI by Sangeet Khemlani
Interactive AI agents must take into account the cognitive limitations of their human teammates: to communicate effectively without overburdening them, agents must anticipate their teammates' cognitive and conceptual constraints. One viable way of building interpretable AI agents is to base their computations on the sparse, low-fidelity representations that people rely on when they reason. Cognitive scientists argue that people reason about space, time, and causality by building mental models, i.e., iconic mental simulations, and that they tend to keep in mind just one model at a time. I will show how computing with mental models allows systems to implement interpretable reasoning behavior. In particular, I will show how existing machine learning technologies for object recognition can leverage mental models to implement a wide variety of patterns in interpreting spatial relations. I will conclude by showing how a similar methodology can help build interpretable agents that make temporal, explanatory, and causal inferences.
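The mental-model idea lends itself to a compact illustration: a single iconic arrangement stands in for the premises, and new relations are read off the arrangement by inspection rather than by proof search. Below is a minimal, hypothetical Python sketch of that idea; the representation (a 1-D coordinate map), the function names, and the "left"/"right" relations are assumptions made for exposition, not the actual system described in the talk.

    # Hypothetical sketch of mental-model spatial reasoning.
    # A "mental model" here is one iconic arrangement: a dict mapping
    # each object to a coordinate on a line. The reasoner builds a
    # single model from the premises, then answers queries against it.

    def build_model(premises):
        """Incrementally place objects on a line from 'left-of' premises."""
        model = {}  # object -> x-coordinate (the single iconic simulation)
        for relation, a, b in premises:
            if relation != "left":
                raise ValueError(f"unsupported relation: {relation}")
            if a in model and b in model:
                continue  # a full reasoner would also check consistency here
            elif a in model:
                model[b] = model[a] + 1   # place b just right of a
            elif b in model:
                model[a] = model[b] - 1   # place a just left of b
            else:
                model[a], model[b] = 0, 1  # start a fresh arrangement
        return model

    def holds(model, relation, a, b):
        """Answer a query by inspecting the one model kept in mind."""
        if relation == "left":
            return model[a] < model[b]
        if relation == "right":
            return model[a] > model[b]
        raise ValueError(f"unsupported relation: {relation}")

    if __name__ == "__main__":
        # "The cup is left of the plate. The plate is left of the fork."
        premises = [("left", "cup", "plate"), ("left", "plate", "fork")]
        model = build_model(premises)
        # The transitive inference falls out of the single arrangement:
        print(holds(model, "left", "cup", "fork"))   # True
        print(holds(model, "right", "fork", "cup"))  # True

Note the design choice this sketch mirrors: because only one model is constructed, inferences are cheap and easy to explain (point at the arrangement), at the cost of missing alternative models that a complete logical reasoner would enumerate.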