LCA: Loss Change Allocation for Neural Network Training

TL;DR

What actually happens when a model "trains"? What is each parameter's contribution to the loss change? LCA helps you catch "good" and "bad" weights during training.

Abstract

Neural networks enjoy widespread use, but many aspects of their training, representation, and operation are poorly understood. In particular, our view into the training process is limited, with a single scalar loss being the most common viewport into this high-dimensional, dynamic process. We propose a new window into training called Loss Change Allocation (LCA), in which credit for changes to the network loss is conservatively partitioned to the parameters. This measurement is accomplished by decomposing the components of an approximate path integral along the training trajectory using a Runge-Kutta integrator. This rich view shows which parameters are responsible for decreasing or increasing the loss during training, or which parameters "help" or "hurt" the network's learning, respectively. LCA may be summed over training iterations and/or over neurons, channels, or layers for increasingly coarse views. This new measurement device produces several insights into training. (1) We find that barely over 50% of parameters help during any given iteration. (2) Some entire layers hurt overall, moving on average against the training gradient, a phenomenon we hypothesize may be due to phase lag in an oscillatory training process. (3) Finally, increments in learning proceed in a synchronized manner across layers, often peaking on identical iterations.
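For concreteness, here is a minimal NumPy sketch of the per-step allocation; the names (lca_step, loss_grad) are our own illustration, not the paper's code. Along the straight-line path between consecutive parameter vectors, fourth-order Runge-Kutta quadrature of the path integral reduces to Simpson's rule, so the gradient is evaluated at the two endpoints and the midpoint of the step:

import numpy as np

def lca_step(loss_grad, theta_t, theta_t1):
    """Allocate the loss change of one training step to each parameter.

    Approximates  L(theta_t1) - L(theta_t) = sum_i int dL/dtheta_i dtheta_i
    along the straight line from theta_t to theta_t1.  RK4 quadrature on
    this path reduces to Simpson's rule (weights 1/6, 4/6, 1/6).

    loss_grad: callable returning the loss gradient at a parameter
               vector (an assumption of this sketch).
    Returns a vector whose i-th entry is the LCA of parameter i for
    this step: negative entries "help" (decrease the loss), positive
    entries "hurt".
    """
    delta = theta_t1 - theta_t
    g = (loss_grad(theta_t)
         + 4.0 * loss_grad(0.5 * (theta_t + theta_t1))
         + loss_grad(theta_t1)) / 6.0
    return g * delta  # elementwise product: per-parameter allocation

On a toy quadratic loss L(theta) = 0.5 * ||theta||^2 (gradient: theta), the gradient is linear along the path, so the per-parameter allocations sum exactly to the loss change:

grad = lambda th: th
theta0 = np.array([1.0, -2.0])
theta1 = np.array([0.8, -1.6])          # one hypothetical SGD step
lca = lca_step(grad, theta0, theta1)
print(lca, lca.sum())                   # sum equals L(theta1) - L(theta0) = -0.9

Summing such per-step vectors over iterations, and over the parameters within a neuron, channel, or layer, gives the coarser views described above.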

Venue
In the Thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019).
BibTeX
@inproceedings{lan2019lca,
  title={LCA: Loss change allocation for neural network training},
  author={Lan, Janice and Liu, Rosanne and Zhou, Hattie and Yosinski, Jason},
  booktitle={Advances in Neural Information Processing Systems},
  pages={3614--3624},
  year={2019}
}