{"id":4677,"date":"2024-04-16T06:10:00","date_gmt":"2024-04-15T21:10:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=4677"},"modified":"2024-04-16T06:10:00","modified_gmt":"2024-04-15T21:10:00","slug":"realmistake","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=4677","title":{"rendered":"ReaLMistake"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>Evaluating LLMs at Detecting Errors in LLM Responses\u00a0<\/strong>[30.6]<br>This work introduces ReaLMistake, the first error detection benchmark consisting of objective, realistic, and diverse errors made by LLMs. Using ReaLMistake, we evaluate error detectors based on 12 large language models.<br><a href=\"http:\/\/arxiv.org\/abs\/2404.03602v1\">Paper<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2404.03602v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Thu, 04 Apr 2024 17:19:47 GMT)<\/li>\n\n\n\n<li>An error detection benchmark for LLMs. The conclusion that &#8220;Our experiments on this benchmark with error detectors based on 12 LLMs show that detecting mistakes in LLMs (GPT-4 and Llama 2 70B) is challenging even for recent LLMs.&#8221; is unsurprising, but it is interesting that some tasks appear to be both hard for LLMs to solve and hard for them to detect errors in.<\/li>\n\n\n\n<li>The repository is at <a href=\"https:\/\/github.com\/psunlpgroup\/ReaLMistake\">psunlpgroup\/ReaLMistake: 
This repository includes a benchmark and code for the paper &#8220;Evaluating LLMs at Detecting Errors in LLM Responses&#8221;. (github.com)<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[223,517],"class_list":["post-4677","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-llm","tag-517"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/4677","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4677"}],"version-history":[{"count":0,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/4677\/revisions"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4677"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4677"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4677"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}