Attention Required! | Cloudflare
"Specifically, we demonstrate that by injecting just 250 malicious documents into pretraining data, adversaries can successfully backdoor LLMs ranging from 600 [million] to 13 [billion] parameters."
Attention Required! | Cloudflare
Please? Is this possible?
A cottage industry of people creating documents that corrupt LLMs?
A Generative AI tool that creates the things that inherently corrupt other LLMs?