Training language models on the knowledge graph: Insights on hallucinations and their detectability

TL;DR

Larger and longer-trained LMs hallucinate less on fixed data, but their remaining hallucinations become harder to detect.

Abstract

While many capabilities of language models (LMs) improve with increased training budget, the influence of scale on hallucinations is not yet fully understood. Hallucinations come in many forms, and there is no universally accepted definition; we therefore focus only on those hallucinations where a correct answer appears verbatim in the training set. To fully control the training data content, we construct a knowledge graph (KG)-based dataset and use it to train a set of increasingly large LMs. We find that, for a fixed dataset, larger and longer-trained LMs hallucinate less. However, reducing hallucinations to ≤5% of the training data requires an order of magnitude larger model, and thus an order of magnitude more compute, than Hoffmann et al. (2022) reported was optimal. Given this cost, we study how hallucination detectors depend on scale. While larger detectors perform better on a fixed LM's outputs, we find an inverse relationship between the scale of the LM and the detectability of its hallucinations.

Venue
First Conference on Language Modeling
BibTeX
@article{hron2024traininglanguagemodelsknowledge,
  title={Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability},
  author={Jiri Hron and Laura Culp and Gamaleldin Elsayed and Rosanne Liu and Ben Adlam and Maxwell Bileschi and Bernd Bohnet and JD Co-Reyes and Noah Fiedel and C. Daniel Freeman and Izzeddin Gur and Kathleen Kenealy and Jaehoon Lee and Peter J. Liu and Gaurav Mishra and Igor Mordatch and Azade Nova and Roman Novak and Aaron Parisi and Jeffrey Pennington and Alex Rizkowsky and Isabelle Simpson and Hanie Sedghi and Jascha Sohl-dickstein and Kevin Swersky and Sharad Vikram and Tris Warkentin and Lechao Xiao and Kelvin Xu and Jasper Snoek and Simon Kornblith},
  year={2024},
  eprint={2408.07852},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}