{"id":7041,"date":"2025-07-09T06:13:00","date_gmt":"2025-07-08T21:13:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=7041"},"modified":"2025-07-06T09:18:55","modified_gmt":"2025-07-06T00:18:55","slug":"the-automated-llm-speedrunning-benchmark-reproducing-nanogpt-improvements","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=7041","title":{"rendered":"The Automated LLM Speedrunning Benchmark: Reproducing NanoGPT Improvements\u00a0"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>The Automated LLM Speedrunning Benchmark: Reproducing NanoGPT Improvements\u00a0<\/strong>[87.6]<br>A key capability for scientific progress is the ability to reproduce existing work. To evaluate how well AI agents can reproduce results in an active research area, the authors introduce the Automated LLM Speedrunning Benchmark. They find that recent reasoning LLMs combined with SoTA scaffolds struggle to reimplement innovations already known in the benchmark.<br><a href=\"http:\/\/arxiv.org\/abs\/2506.22419v1\">Paper<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2506.22419v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Fri, 27 Jun 2025 17:44:32 GMT)<\/li>\n\n\n\n<li>A somewhat surprising result: \u201cWe find that recent reasoning LLMs combined with SoTA scaffolds struggle to reimplement already-known innovations in our benchmark, even when given detailed hints.\u201d<\/li>\n\n\n\n<li>The repository is at <a href=\"https:\/\/github.com\/facebookresearch\/llm-speedrunner\">GitHub &#8211; facebookresearch\/llm-speedrunner: The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in language modeling.<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[517,623],"class_list":["post-7041","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-517","tag-623"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7041","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7041"}],"version-history":[{"count":1,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7041\/revisions"}],"predecessor-version":[{"id":7042,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7041\/revisions\/7042"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7041"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7041"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2F
wp%2Fv2%2Ftags&post=7041"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}