Interpretable Image Classification through an Argumentative Dialog between Encoders

Abstract

We address the problem of designing interpretable algorithms for image classification. Modern computer vision algorithms perform classification in two phases: feature extraction (the encoding), which relies on deep neural networks (DNNs), followed by a task-oriented decision (the decoding), often also implemented with a DNN. We propose to formulate this last phase as an argumentative DialoguE Between two agents relying on visual ATtributEs and Similarity to prototypes (DEBATES). DEBATES combines the information provided by two encoders in a transparent and interpretable way. It relies on a dual process that combines similarity to prototypes and visual attributes, each extracted from one of the encoders. DEBATES makes explicit the agreements and conflicts between the two encoders, as managed by the two agents; it reveals the causes of unintended behaviors and helps identify potential corrective actions to improve performance. The approach is demonstrated on two fine-grained image classification problems.
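To make the decode-phase idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation): one "agent" scores classes by similarity to class prototypes in one encoder's feature space, a second agent scores classes from another encoder's attribute predictions, and their agreement or conflict is made explicit before the final decision. All names (prototype_scores, attribute_scores, debate), the cosine-similarity choice, and the confidence-based fallback on conflict are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES, PROTO_DIM, NUM_ATTRS = 4, 8, 6

# Agent 1: similarity to one class prototype per class (first encoder's space).
prototypes = rng.normal(size=(NUM_CLASSES, PROTO_DIM))

def prototype_scores(feat):
    """Cosine similarity of an image embedding to each class prototype."""
    return prototypes @ feat / (np.linalg.norm(prototypes, axis=1) * np.linalg.norm(feat))

# Agent 2: agreement with each class's expected attribute signature (second encoder).
class_attributes = rng.integers(0, 2, size=(NUM_CLASSES, NUM_ATTRS))

def attribute_scores(attr_probs):
    """Match between predicted attribute probabilities and each class's attributes."""
    return (class_attributes * attr_probs + (1 - class_attributes) * (1 - attr_probs)).mean(axis=1)

def debate(feat, attr_probs):
    """Expose the two agents' votes and report agreement or conflict."""
    s_proto, s_attr = prototype_scores(feat), attribute_scores(attr_probs)
    c_proto, c_attr = int(np.argmax(s_proto)), int(np.argmax(s_attr))
    if c_proto == c_attr:
        return c_proto, f"agreement: both agents vote class {c_proto}"
    # Conflict: keep both candidates visible, fall back to the more confident agent.
    winner = c_proto if s_proto[c_proto] >= s_attr[c_attr] else c_attr
    return winner, f"conflict: prototype agent says {c_proto}, attribute agent says {c_attr}; keeping {winner}"

# Toy query: a random embedding and random attribute probabilities.
label, explanation = debate(rng.normal(size=PROTO_DIM), rng.uniform(size=NUM_ATTRS))
print(label, "-", explanation)
```

The point of the sketch is only the structure: the two scoring processes stay separate, so the explanation can name which agent supported which class and where they disagreed.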

Publication
27th European Conference on Artificial Intelligence (ECAI)