{"id":5658,"date":"2024-11-01T05:21:00","date_gmt":"2024-10-31T20:21:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=5658"},"modified":"2024-11-01T05:21:00","modified_gmt":"2024-10-31T20:21:00","slug":"judgebench-a-benchmark-for-evaluating-llm-based-judges","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=5658","title":{"rendered":"JudgeBench: A Benchmark for Evaluating LLM-based Judges"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>JudgeBench: A Benchmark for Evaluating LLM-based Judges\u00a0<\/strong>[61.0]<br>JudgeBench is a benchmark for evaluating LLM-based judges on challenging response pairs spanning knowledge, reasoning, mathematics, and coding. A comprehensive evaluation of prompted judges, fine-tuned judges, multi-agent judges, and reward models shows that JudgeBench poses a substantially greater challenge than prior benchmarks.<br><a href=\"http:\/\/arxiv.org\/abs\/2410.12784v1\">Paper<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2410.12784v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Wed, 16 Oct 2024 17:58:19 GMT)<\/li>\n\n\n\n<li>A benchmark for evaluating LLM-based evaluators. Per the paper, \u201cAmong all the models, OpenAI\u2019s latest o1-preview and o1-mini perform the best overall, achieving 75.43% and 65.71% accuracy respectively.\u201d It is interesting that o1 performs so strongly here.<\/li>\n\n\n\n<li>The repository is at <a href=\"https:\/\/github.com\/ScalerLab\/JudgeBench\">GitHub &#8211; ScalerLab\/JudgeBench<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[224,517],"class_list":["post-5658","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-llm-as-a-judge","tag-517"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/5658","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5658"}],"version-history":[{"count":0,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/5658\/revisions"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5658"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5658"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5658"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}