Faster Neural Networks Straight from JPEG

TL;DR

We found a more natural input space for images: the JPEG codec's internal representation (blockwise DCT coefficients). Training CNNs directly on it yields networks that are smaller, faster, and more accurate.

Abstract

The simple, elegant approach of training convolutional neural networks (CNNs) directly from RGB pixels has enjoyed overwhelming empirical success. But can more performance be squeezed out of networks by using different input representations? In this paper we propose and explore a simple idea: train CNNs directly on the blockwise discrete cosine transform (DCT) coefficients computed and available in the middle of the JPEG codec. Intuitively, when processing JPEG images using CNNs, it seems unnecessary to decompress a blockwise frequency representation to an expanded pixel representation, shuffle it from CPU to GPU, and then process it with a CNN that will learn something similar to a transform back to frequency representation in its first layers. Why not skip both steps and feed the frequency domain into the network directly? In this paper we modify libjpeg to produce DCT coefficients directly, modify a ResNet-50 network to accommodate the differently sized and strided input, and evaluate performance on ImageNet. We find networks that are both faster and more accurate, as well as networks with about the same accuracy but 1.77x faster than ResNet-50.
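To make the input representation concrete, here is a minimal Python sketch (using NumPy and SciPy) of how 8x8 blockwise DCT coefficients are laid out. This is an illustration under stated assumptions, not the authors' pipeline, which instead pulls the coefficients straight out of a modified libjpeg without ever decoding to pixels; the helper name blockwise_dct and the crop/level-shift details are illustrative choices.

# Sketch (not the paper's code): compute 8x8 blockwise DCT coefficients
# for one image channel, mimicking what the JPEG codec stores internally.
import numpy as np
from scipy.fft import dctn

def blockwise_dct(channel, block=8):
    """Map an (H, W) channel to (H//block, W//block, block*block):
    one 64-vector of DCT coefficients per 8x8 block."""
    h, w = channel.shape
    h, w = h - h % block, w - w % block            # crop to whole blocks
    x = channel[:h, :w].astype(np.float32) - 128.0  # JPEG-style level shift
    # Reorder pixels into a grid of 8x8 blocks, then take a 2-D DCT-II
    # (orthonormal) over each block, as JPEG does.
    blocks = x.reshape(h // block, block, w // block, block).transpose(0, 2, 1, 3)
    coeffs = dctn(blocks, axes=(2, 3), norm="ortho")
    return coeffs.reshape(h // block, w // block, block * block)

luma = np.random.randint(0, 256, (224, 224))
print(blockwise_dct(luma).shape)  # (28, 28, 64)

Note the shapes: a 224x224 channel becomes a 28x28x64 tensor, i.e., an 8x spatially downsampled grid with 64 frequency channels per block. That built-in downsampling is why the early ResNet-50 stages, which normally perform it on pixels, must be removed or re-strided to accept this input.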

Venue
In the Thirty-second Conference on Neural Information Processing Systems (NeurIPS 2018).
BibTeX
@inproceedings{gueguen2018faster,
  title={Faster neural networks straight from {JPEG}},
  author={Gueguen, Lionel and Sergeev, Alex and Kadlec, Ben and Liu, Rosanne and Yosinski, Jason},
  booktitle={Advances in Neural Information Processing Systems},
  pages={3933--3944},
  year={2018}
}
Date
2018