Evaluating Judges as Evaluators: The JETTS Benchmark of LLM-as-Judges as Test-Time Scaling Evaluators
This paper introduces a benchmark for evaluating judges in test-time scaling. It measures judge performance in three domains (reasoning, code generation, and instruction following) under three task settings. The benchmark shows that while judges are competitive with outcome reward models in reranking, they are consistently worse than process reward models in beam search. (metadata: Mon, 21 Apr 2025 17:33:23 GMT)
With the motivation that 「we seek to understand the feasibility of using LLM-judges in place of typically used RMs in test-time compute procedures」, the paper proposes a benchmark: 「we introduce the Judge Evaluation for Test-Time Scaling (JETTS) benchmark, which evaluates judge performance in three domains (math reasoning, code generation, and instruction following) under three task settings: response reranking, step-level beam search, and critique-based response refinement.」 The findings are fairly sobering: 「We find that weak judges can help strong generators in easier tasks, such as instruction following, but not in reasoning-intensive tasks like coding or math. Larger judges bring the most benefit for math and instruction following tasks, but no evaluated judges are able to reliably improve generator performance for coding. Lastly, while natural language critiques are touted as a defining advantage of judges over RMs, we find that such critiques have significant room for improvement in terms of utility.」
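For concreteness, the simplest of the three settings, response reranking, amounts to judge-scored best-of-N selection. The following is a minimal sketch of that idea, not code from the paper: the OpenAI-compatible client, the model names, and the 1-10 scoring prompt are all placeholder assumptions.

# Minimal sketch of judge-based best-of-N reranking at test time.
# Assumptions (not from the paper): an OpenAI-compatible API, placeholder
# model names, and a simple 1-10 scoring prompt parsed from the judge output.
import re
from openai import OpenAI

client = OpenAI()

def generate_candidates(problem: str, n: int = 8) -> list[str]:
    # Sample n candidate responses from the generator model.
    resp = client.chat.completions.create(
        model="generator-model",  # placeholder name
        messages=[{"role": "user", "content": problem}],
        n=n,
        temperature=0.8,
    )
    return [choice.message.content for choice in resp.choices]

def judge_score(problem: str, answer: str) -> float:
    # Ask the judge model for a scalar quality score (1-10) and parse it.
    prompt = (
        "Rate the following answer on a scale of 1 to 10. "
        "Reply with the number only.\n\n"
        f"Question:\n{problem}\n\nAnswer:\n{answer}"
    )
    resp = client.chat.completions.create(
        model="judge-model",  # placeholder name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    match = re.search(r"\d+(\.\d+)?", resp.choices[0].message.content)
    return float(match.group()) if match else 0.0

def rerank(problem: str, n: int = 8) -> str:
    # Best-of-N: return the candidate the judge scores highest.
    candidates = generate_candidates(problem, n)
    return max(candidates, key=lambda a: judge_score(problem, a))

The step-level beam search and critique-based refinement settings replace this single scalar score with per-step judgments or with natural-language critiques fed back to the generator, which is where the paper finds judges to be weakest.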