{"id":6795,"date":"2025-05-29T05:42:00","date_gmt":"2025-05-28T20:42:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=6795"},"modified":"2025-05-25T14:20:02","modified_gmt":"2025-05-25T05:20:02","slug":"think-only-when-you-need-with-large-hybrid-reasoning-models","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=6795","title":{"rendered":"Think Only When You Need with Large Hybrid-Reasoning Models\u00a0"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>Think Only When You Need with Large Hybrid-Reasoning Models&nbsp;<\/strong>[121.6]<br>LHRM (Large Hybrid-Reasoning Model): a model that adaptively decides, based on the contextual information of the user query, whether or not to engage in thinking. Experiments showed that LHRMs can adaptively apply hybrid thinking to queries of varying difficulty and type.<br><a href=\"http:\/\/arxiv.org\/abs\/2505.14631v2\">Paper<\/a>&nbsp;&nbsp;<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2505.14631v2\">Reference translation (metadata)<\/a>&nbsp; &nbsp;(Wed, 21 May 2025 05:17:34 GMT)<\/li>\n\n\n\n<li>A proposal for a hybrid LLM\/LRM approach. \u300cWe begin with a hybrid-formatted supervised fine-tuning stage named Hybrid Fine-Tuning (HFT) that integrates both reasoning-intensive (Thinking) and direct-answer (No-Thinking) data. 
This approach mitigates the instability often observed in cold-start scenarios [GYZ+25], and establishes a robust initialization for next stage reinforcement learning.\u300d It is interesting that this first stage is inserted before the reinforcement-learning stage.<\/li>\n\n\n\n<li>I am somewhat curious whether the abbreviation LHRM will actually catch on.<\/li>\n\n\n\n<li>The repository is at <a href=\"https:\/\/thegenerality.com\/agi\/\">Advancing AI for Humanity<\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Let LLMs Break Free from Overthinking via Self-Braking Tuning\u00a0<\/strong>[60.1]<br>Large reasoning models (LRMs) have markedly improved reasoning ability by generating long chains of thought. This performance gain, however, comes at the cost of a substantial increase in redundant reasoning during the generation process. This paper proposes Self-Braking Tuning (SBT), a novel framework for curbing overthinking from the perspective of allowing the model to control its own reasoning process.<br><a href=\"http:\/\/arxiv.org\/abs\/2505.14604v1\">Paper<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2505.14604v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Tue, 20 May 2025 16:53:40 GMT)<\/li>\n\n\n\n<li>\u300cwe propose a novel endogenous approach, Self-Braking Tuning (SBT), to mitigating overthinking in large 
language models.\u300d In terms of saving tokens, this is close in spirit to the paper above.<\/li>\n\n\n\n<li>The repository is at <a href=\"https:\/\/github.com\/ZJU-REAL\/Self-Braking-Tuning\">GitHub &#8211; ZJU-REAL\/Self-Braking-Tuning: Let LLMs Break Free from Overthinking via Self-Braking Tuning<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[683,223,232,356],"class_list":["post-6795","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-lhrm","tag-llm","tag-lrm","tag-self-x"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/6795","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6795"}],"version-history":[{"count":2,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/6795\/revisions"}],"predecessor-version":[{"id":6833,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/6795\/revisions\/6833"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6795"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6795"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6795
"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}