Character-Aware Models Improve Visual Text Rendering

TL;DR

Give text encoders character-level input features to fix spelling in generative image models.

Abstract

Current image generation models struggle to reliably produce well-formed visual text. In this paper, we investigate a key contributing factor: popular text-to-image models lack character-level input features, making it much harder to predict a word's visual makeup as a series of glyphs. To quantify the extent of this effect, we conduct a series of controlled experiments comparing character-aware vs. character-blind text encoders. In the text-only domain, we find that character-aware models provide large gains on a novel spelling task (WikiSpell). Transferring these findings to the visual domain, we train a suite of image generation models and show that character-aware variants outperform their character-blind counterparts across a range of novel text rendering tasks (our DrawText benchmark). Our models set a new state of the art on visual spelling, with accuracy gains of 30+ points over competitors on rare words, despite training on far fewer examples.
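
To make the character-blind vs. character-aware distinction concrete, the following is a minimal sketch (not taken from the paper's code) that contrasts a subword tokenizer with a byte-level one using the Hugging Face transformers library. The checkpoints "t5-small" and "google/byt5-small" are illustrative stand-ins for the two encoder families the abstract contrasts.

# Illustrative sketch only: compare what a character-blind (subword) encoder
# and a character-aware (byte-level) encoder actually see for a single word.
from transformers import AutoTokenizer

word = "onomatopoeia"

# Character-blind: a subword vocabulary maps the word to a few opaque pieces,
# so the encoder never observes its individual letters.
subword_tok = AutoTokenizer.from_pretrained("t5-small")
subword_ids = subword_tok(word, add_special_tokens=False).input_ids
print("subword pieces:", subword_tok.convert_ids_to_tokens(subword_ids))

# Character-aware: a byte-level tokenizer emits one token per byte, so the
# word's spelling is directly visible in the input sequence.
byte_tok = AutoTokenizer.from_pretrained("google/byt5-small")
byte_ids = byte_tok(word, add_special_tokens=False).input_ids
print("byte tokens (one per byte):", len(byte_ids))

For an ASCII word like the one above, the byte-level tokenizer yields exactly one token per character, whereas the subword tokenizer typically yields far fewer, character-opaque pieces.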

Venue
The 61st Annual Meeting of the Association for Computational Linguistics
BibTeX
@article{liu2022characteraware,
  title={Character-Aware Models Improve Visual Text Rendering},
  author={Rosanne Liu and Dan Garrette and Chitwan Saharia and William Chan and Adam Roberts and Sharan Narang and Irina Blok and RJ Mical and Mohammad Norouzi and Noah Constant},
  year={2022},
  eprint={2212.10562},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
Date