🧪👩‍⚖️BadJudge: Backdoor Vulnerabilities of LLM-as-a-Judge

University of California, Davis1 University of Southern California2
ICLR 2025

*Indicates Equal Contribution

Many researchers and practitioners have turned to LLM-as-a-Judge to serve as a standard proxy for reference-free and automatic open-ended text generation evaluation. Given that evaluation drives model selection, we challenge the security implications in this paper. How can adversaries attack this setting? What forms of evaluation are vulnerable? How can we mitigate these threats?

Abstract

This paper proposes a novel backdoor threat attacking the LLM-as-a-Judge evaluation regime, where the adversary controls both the candidate and evaluator model. The backdoored evaluator victimizes benign users by unfairly assigning inflated scores to adversary. A trivial single token backdoor poisoning 1% of the evaluator training data triples the adversary's score with respect to their legitimate score. We systematically categorize levels of data access corresponding to three real-world settings, (1) web poisoning, (2) malicious annotator, and (3) weight poisoning. These regimes reflect a weak to strong escalation of data access that highly correlates with attack severity. Under the weakest assumptions - web poisoning (1), the adversary still induces a 20% score inflation. Likewise, in the (3) weight poisoning regime, the stronger assumptions enable the adversary to inflate their scores from 1.5/5 to 4.9/5. The backdoor threat generalizes across different evaluator architectures, trigger designs, evaluation tasks, and poisoning rates. By poisoning 10% of the evaluator training data, we control toxicity judges (Guardrails) to misclassify toxic prompts as non-toxic 89% of the time, and document reranker judges in RAG to rank the poisoned document first 97% of the time. LLM-as-a-Judge is uniquely positioned at the intersection of ethics and technology, where social implications of mislead model selection and evaluation constrain the available defensive tools. Amidst these challenges, model merging emerges as a principled tool to offset the backdoor, reducing ASR to near 0% whilst maintaining SOTA performance. Model merging's low computational cost and convenient integration into the current LLM Judge training pipeline position it as a promising avenue for backdoor mitigation in the LLM-as-a-Judge setting.

Poster

BibTeX

@inproceedings{
tong2025badjudge,
title={BadJudge: Backdoor Vulnerabilities of {LLM}-As-A-Judge},
author={Terry Tong and Fei Wang and Zhe Zhao and Muhao Chen},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=eC2a2IndIt}
}