TL;DR (Summary)
“Can people detect when you train on outputs from their LLM?”
- Hard when outputs are not watermarked
- Easy when outputs are watermarked
Abstract
This paper investigates the radioactivity of LLM-generated texts, i.e., whether it is possible to detect that such synthetic text was used as training data. Conventional methods like membership inference can carry out this detection with some level of accuracy. We show that watermarked training data leaves traces that are easier to detect and far more reliable than membership inference. We link the contamination level to the watermark's robustness, its proportion in the training set, and the fine-tuning process. We notably demonstrate that training on watermarked synthetic instructions can be detected with high confidence (p-value < 1e-5) even when as little as 5% of the training text is watermarked. Thus, LLM watermarking, originally designed for detecting machine-generated text, makes it possible to identify whether the outputs of a watermarked LLM were used to fine-tune another LLM.
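To give an intuition for where the p-value comes from, here is a minimal sketch of a green-list watermark detection test in the style of Kirchenbauer et al.: under the null hypothesis (no watermark influence), each token lands in the "green" list with probability gamma, so an excess of green tokens in the suspect model's outputs is statistically surprising. The scoring function `is_green` and the parameter `gamma` are illustrative placeholders, not the paper's exact radioactivity detection procedure.

```python
# Sketch of a watermark detection test (not the paper's exact method).
# Assumption: a green-list watermark where, absent any watermark, each token
# is "green" with probability gamma. `is_green(prev, cur)` is a hypothetical
# stand-in for the watermark's scoring function, which typically depends on
# the previous token (used to seed the green list) and the current token.
from scipy.stats import binom

def watermark_pvalue(tokens, is_green, gamma=0.5):
    """One-sided binomial test: p-value of seeing at least this many green tokens by chance."""
    n_scored = len(tokens) - 1                      # each (prev, cur) pair is scored once
    n_green = sum(is_green(prev, cur) for prev, cur in zip(tokens, tokens[1:]))
    # P[X >= n_green] with X ~ Binomial(n_scored, gamma)
    return binom.sf(n_green - 1, n_scored, gamma)

# A p-value below, say, 1e-5 indicates that such a green-token excess is very
# unlikely unless the generating model was trained on watermarked text.
```

In the paper's setting, this kind of statistical test is run on text produced by the suspect model; the reported p-values quantify how confidently the contamination can be asserted.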
Thread on Twitter
OpenAI may secretly know that you trained on GPT outputs!
In our work "Watermarking Makes Language Models Radioactive", we show that training on watermarked text can be easily spotted ☢️
Paper: https://t.co/EETij4oLF0
@pierrefdz @AIatMeta @Polytechnique @Inria
— Tom Sander (@RednasTom) February 26, 2024