{"id":7971,"date":"2025-12-25T05:49:00","date_gmt":"2025-12-24T20:49:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=7971"},"modified":"2025-12-21T21:52:58","modified_gmt":"2025-12-21T12:52:58","slug":"rethinking-expert-trajectory-utilization-in-llm-post-training","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=7971","title":{"rendered":"Rethinking Expert Trajectory Utilization in LLM Post-training"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>Rethinking Expert Trajectory Utilization in LLM Post-training\u00a0<\/strong>[35.0]<br>Building on this landscape, we propose a plastic sealing framework. We establish the sequential SFT-then-RL pipeline as the superior standard. This work provides practical guidelines for maximizing the value extracted from expert trajectories.<br><a href=\"http:\/\/arxiv.org\/abs\/2512.11470v1\">Paper<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2512.11470v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Fri, 12 Dec 2025 11:13:00 GMT)<\/li>\n\n\n\n<li>Regarding the combination of Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) commonly used in post-training, the paper reports: \u201c(1) The sequential SFT-then-RL pipeline outperforms alternative paradigms in approaching the post-training performance ceiling. (2) Within this pipeline, RL should be initiated at SFT saturation, a point reliably predicted by validation loss minimization. (3) SFT data scale primarily determines the performance ceiling, and trajectory difficulty further optimizes the ceiling when data is limited.\u201d<\/li>\n\n\n\n<li>The repository is available at <a href=\"https:\/\/github.com\/LINs-lab\/RETU\">GitHub &#8211; LINs-lab\/RETU: [Preprint] Rethinking Expert Trajectory Utilization in LLM Post-training<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[307],"class_list":["post-7971","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-post-training"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7971","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7971"}],"version-history":[{"count":1,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7971\/revisions"}],"predecessor-version":[{"id":7972,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7971\/revisions\/7972"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7971"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7971"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7971
"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}