Benchmarking and Deeper Analysis of Adversarial Patch Attack on Object Detectors

Abstract

Adversarial attacks, whether norm-bounded or patch-based, have received considerable attention from the computer vision community over the last decade. How critical these attacks are in the physical world, however, remains questionable: no attack proposed in the literature has been demonstrated in a realistic physical implementation that simultaneously exhibits significant contextual effects and radiometric and geometric robustness in a black-box or gray-box setting. To address this issue, we propose in this paper an evaluation framework for patch attacks against object detectors. The framework focuses on robustness and transferability properties by considering various image transformations and learning conditions. We validate it on three state-of-the-art patch attacks using the PASCAL VOC dataset, providing a more comprehensive view of their criticality.
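To give a concrete sense of the kind of robustness evaluation the framework performs, the following is a minimal sketch of scoring an adversarial patch under radiometric and geometric image transformations. It is an illustrative assumption, not the paper's actual implementation: all names (apply_patch, evaluate_patch_robustness, the transformation list, the patch placement) are hypothetical, and a pretrained Faster R-CNN stands in for whatever detectors the paper evaluates.

```python
# Hypothetical sketch of a patch-robustness evaluation loop; names and
# transformation choices are illustrative, not the paper's actual API.
import torch
import torchvision.transforms.functional as TF
from torchvision.models.detection import fasterrcnn_resnet50_fpn

def apply_patch(image, patch, top, left):
    """Paste the adversarial patch onto the image at (top, left)."""
    patched = image.clone()
    _, ph, pw = patch.shape
    patched[:, top:top + ph, left:left + pw] = patch
    return patched

def evaluate_patch_robustness(images, patch, score_thresh=0.5):
    """Mean number of confident detections on patched images, averaged
    over a set of radiometric and geometric transformations; a lower
    value indicates a more robust (i.e., more effective) attack."""
    detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
    # Radiometric (brightness/contrast) and geometric (rotation)
    # variations, mimicking physical-world imaging conditions.
    transforms = [
        lambda x: x,                             # identity
        lambda x: TF.adjust_brightness(x, 1.5),  # radiometric
        lambda x: TF.adjust_contrast(x, 0.7),    # radiometric
        lambda x: TF.rotate(x, 10.0),            # geometric
    ]
    counts = []
    with torch.no_grad():
        for image in images:
            patched = apply_patch(image, patch, top=20, left=20)
            for t in transforms:
                preds = detector([t(patched)])[0]
                counts.append((preds["scores"] > score_thresh).sum().item())
    return sum(counts) / len(counts)

if __name__ == "__main__":
    dummy_images = [torch.rand(3, 224, 224)]  # stand-in for VOC images
    dummy_patch = torch.rand(3, 50, 50)       # stand-in adversarial patch
    print(evaluate_patch_robustness(dummy_images, dummy_patch))
```

Transferability would be probed analogously, by swapping in detectors other than the one the patch was optimized against and comparing the resulting detection rates.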

Publication
Workshop on Artificial Intelligence Safety (AISafety), IJCAI-ECAI Conference