{"id":7193,"date":"2025-07-30T05:44:00","date_gmt":"2025-07-29T20:44:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=7193"},"modified":"2025-07-27T13:33:47","modified_gmt":"2025-07-27T04:33:47","slug":"on-the-effectiveness-of-llm-as-a-judge-for-code-generation-and-summarization","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=7193","title":{"rendered":"On the Effectiveness of LLM-as-a-judge for Code Generation and Summarization"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>On the Effectiveness of LLM-as-a-judge for Code Generation and Summarization\u00a0<\/strong>[55.0]<br>Large language models have recently been used as judges for complex natural language processing tasks such as Q&amp;A. This paper examines the effectiveness of LLMs-as-a-judge on two code-related tasks: code generation and code summarization.<br><a href=\"http:\/\/arxiv.org\/abs\/2507.16587v1\">Paper<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2507.16587v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Tue, 22 Jul 2025 13:40:26 GMT)<\/li>\n\n\n\n<li>A study of LLM-as-a-judge applied to evaluating code<\/li>\n\n\n\n<li>\"Our findings show that \u201csmall\u201d LLMs struggle in judging tasks, with GPT-4-turbo being the model that achieves the best results. Still, even GPT-4-turbo frequently fails in assessing code correctness, while being a reliable judge of code summary quality.\" It would be interesting to see results with newer models.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[224,468,485],"class_list":["post-7193","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-llm-as-a-judge","tag-468","tag-485"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7193","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7193"}],"version-history":[{"count":1,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7193\/revisions"}],"predecessor-version":[{"id":7194,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7193\/revisions\/7194"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7193"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7193"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7193"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}