Natural Adversarial Objects

TL;DR

A naturally adversarial dataset for object detection. Think ImageNet-A (from the paper "Natural Adversarial Examples"), but for object detection.

Abstract

Although state-of-the-art object detection methods have shown compelling performance, models are often not robust to adversarial attacks and out-of-distribution data. We introduce a new dataset, Natural Adversarial Objects (NAO), to evaluate the robustness of object detection models. NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios, but cause state-of-the-art detection models to misclassify with high confidence. The mean average precision (mAP) of EfficientDet-D7 drops 74.5% when evaluated on NAO compared to the standard MSCOCO validation set. Moreover, by comparing a variety of object detection architectures, we find that better performance on the MSCOCO validation set does not necessarily translate to better performance on NAO, suggesting that robustness cannot be achieved simply by training a more accurate model. We further investigate why examples in NAO are difficult to detect and classify. Experiments with shuffled image patches reveal that models are overly sensitive to local texture. Additionally, using integrated gradients and background replacement, we find that the detection model relies on pixel information within the bounding box and is insensitive to the background context when predicting class labels.
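
Below is a minimal sketch of the patch-shuffling probe mentioned in the abstract, assuming images are NumPy arrays; the grid size, function name, and random seeding are illustrative choices, not the authors' exact experimental setup. Shuffling patches destroys global shape and context while preserving local texture, so a detector that still produces confident predictions on the shuffled image is likely relying on texture cues.

import numpy as np

def shuffle_patches(image: np.ndarray, grid: int = 4, seed: int = 0) -> np.ndarray:
    """Split an HxWxC image into a grid x grid set of patches and
    return a copy with the patches randomly permuted (hypothetical helper)."""
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    # Crop to a size divisible by the grid so the patches tile exactly.
    cropped = image[: ph * grid, : pw * grid]
    patches = [
        cropped[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
        for i in range(grid)
        for j in range(grid)
    ]
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(patches))
    shuffled = np.zeros_like(cropped)
    for k, idx in enumerate(order):
        i, j = divmod(k, grid)
        shuffled[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = patches[idx]
    return shuffled

One would then run the detector on both the original and shuffled images and compare prediction confidences; a small drop suggests over-reliance on local texture.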

Venue
Data-Centric AI Workshop, NeurIPS 2021
BibTeX
@misc{lau2021natural,
  title={Natural Adversarial Objects},
  author={Felix Lau and Nishant Subramani and Sasha Harrison and Aerin Kim and Elliot Branson and Rosanne Liu},
  year={2021},
  eprint={2111.04204},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}