Extremely Simple Activation Shaping for Out-of-Distribution Detection

TL;DR

At inference time, pick a layer, simplify its representation, and feed it through the rest of the network. Accuracy is not affected, and OOD detection gets much better!

Abstract

The separation between training and deployment of machine learning models implies that not all scenarios encountered in deployment can be anticipated during training, and therefore relying solely on advancements in training has its limits. Out-of-distribution (OOD) detection is an important area that stress-tests a model's ability to handle unseen situations: Do models know when they don't know? Existing OOD detection methods either incur extra training steps, require additional data, or make nontrivial modifications to the trained network. In contrast, in this work we propose an extremely simple, post-hoc, on-the-fly activation shaping method, ASH, in which a large portion (e.g. 90%) of a sample's activation at a late layer is removed, and the rest (e.g. 10%) simplified or lightly adjusted. The shaping is applied at inference time and does not require any statistics calculated from training data. Experiments show that such a simple treatment enhances the distinction between in-distribution and out-of-distribution samples, enabling state-of-the-art OOD detection on ImageNet without noticeably deteriorating in-distribution accuracy. Alongside the paper, we release two calls-to-action, one for explanation and one for validation, in the belief that the community's collective effort can further validate and help understand the discovery.
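To make the procedure concrete, below is a minimal PyTorch sketch of one plausible instantiation: per sample, zero out all but the top ~10% of activations at a late layer, then lightly rescale the survivors to compensate for the removed activation energy. The keep_frac parameter, the exponential rescaling rule, and the choice of hooking the layer before global average pooling are illustrative assumptions for this sketch, not a reproduction of the paper's released code.

import torch
from torchvision.models import resnet50, ResNet50_Weights

def ash(x: torch.Tensor, keep_frac: float = 0.10) -> torch.Tensor:
    # Per-sample shaping: remove ~90% of the activation, lightly adjust the rest.
    b = x.shape[0]
    flat = x.reshape(b, -1)
    s1 = flat.sum(dim=1)                        # total activation before pruning
    k = max(1, int(flat.shape[1] * keep_frac))  # number of activations to keep
    vals, idx = torch.topk(flat, k, dim=1)      # largest ~10% per sample
    pruned = torch.zeros_like(flat).scatter_(1, idx, vals)
    s2 = pruned.sum(dim=1)                      # total activation after pruning
    pruned = pruned * torch.exp(s1 / s2).unsqueeze(1)  # rescale the survivors
    return pruned.reshape_as(x)

# Apply on the fly at inference: shape the input of the global-average-pooling
# layer (a "late layer"), then let the rest of the network run unchanged.
model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()
model.avgpool.register_forward_pre_hook(lambda module, inp: (ash(inp[0]),))

From the shaped forward pass, an OOD score is then computed as usual; the paper pairs ASH with the energy score over the output logits.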

Venue
In the Eleventh International Conference on Learning Representations (ICLR 2023).
BibTeX
@article{djurisic2022ash,
  title={Extremely Simple Activation Shaping for Out-of-Distribution Detection},
  author={Djurisic, Andrija and Bozanic, Nebojsa and Ashok, Arjun and Liu, Rosanne},
  journal={arXiv preprint arXiv:2209.09858},
  year={2022}
}
Date