{"id":3373,"date":"2023-05-31T06:07:00","date_gmt":"2023-05-30T21:07:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=3373"},"modified":"2023-05-31T06:07:00","modified_gmt":"2023-05-30T21:07:00","slug":"lima-less-is-more-for-alignment","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=3373","title":{"rendered":"LIMA: Less Is More for Alignment"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>LIMA: Less Is More for Alignment\u00a0<\/strong>[112.9]<br>65B \u30d1\u30e9\u30e1\u30fc\u30bf LLaMa \u8a00\u8a9e\u30e2\u30c7\u30eb LIMA \u306e\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u3092\u884c\u3046\u3002 LIMA\u306f\u3001\u975e\u5e38\u306b\u5f37\u529b\u306a\u30d1\u30d5\u30a9\u30fc\u30de\u30f3\u30b9\u3092\u793a\u3057\u3001\u5c11\u6570\u306e\u4f8b\u304b\u3089\u7279\u5b9a\u306e\u30ec\u30b9\u30dd\u30f3\u30b9\u30d5\u30a9\u30fc\u30de\u30c3\u30c8\u306b\u5f93\u3046\u3053\u3068\u3092\u5b66\u3076\u3002 \u5236\u5fa1\u3055\u308c\u305f\u30d2\u30c8\u306e\u7814\u7a76\u3067\u306f\u3001LIMA\u304b\u3089\u306e\u53cd\u5fdc\u306f43%\u306e\u75c7\u4f8b\u306b\u304a\u3044\u3066\u3001GPT-4\u306b\u7b49\u3057\u3044\u304b\u3001\u53b3\u683c\u306b\u597d\u307e\u308c\u308b\u3002<br><a href=\"http:\/\/arxiv.org\/abs\/2305.11206v1\">\u8ad6\u6587<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2305.11206v1\">\u53c2\u8003\u8a33\uff08\u30e1\u30bf\u30c7\u30fc\u30bf\uff09<\/a>\u00a0 \u00a0(Thu, 18 May 2023 17:45:22 GMT)<\/li>\n\n\n\n<li>\u5f37\u529b\u306a\u30d9\u30fc\u30b9\u30e2\u30c7\u30eb\u3068\u3088\u304f\u30ad\u30e5\u30ec\u30fc\u30b7\u30e7\u30f3\u3055\u308c\u305f1000\u500b\u306e\u4f8b\u304c\u3042\u308c\u3070\u8907\u96d1\u306a\u30af\u30a8\u30ea\u3092\u6271\u3048\u308bChatGPT\u306e\u3088\u3046\u306a\u52d5\u304d\u304c\u53ef\u80fd\u3068\u3044\u3046\u5831\u544a\u3002<\/li>\n\n\n\n<li>\u300cTaken together, these results strongly suggest that almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality 
output.\u300d\u3068\u3044\u3046\u3053\u3068\u3067\u4e8b\u524d\u5b66\u7fd2\u30e2\u30c7\u30eb\u306e\u91cd\u8981\u6027\u306f\u4ed6\u306e\u5831\u544a\u3068\u6574\u5408\u7684\u3002<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[63,150,223],"class_list":["post-3373","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-chatgpt","tag-fine-tuning","tag-llm"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/3373","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3373"}],"version-history":[{"count":0,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/3373\/revisions"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3373"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3373"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3373"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
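The recipe behind these results is ordinary supervised fine-tuning, just on an unusually small, carefully curated dataset. Below is a minimal sketch of that setup using HuggingFace transformers; the checkpoint name, data file, and hyperparameters are illustrative assumptions, not the paper's exact training configuration.

```python
# Minimal sketch of the "less is more" recipe: supervised fine-tuning of a
# strong pretrained causal LM on ~1,000 curated prompt-response pairs.
# Checkpoint, data path, and hyperparameters are illustrative assumptions.
import json

import torch
from torch.utils.data import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "huggyllama/llama-65b"  # assumed checkpoint; any causal LM works for the sketch


class CuratedSFTDataset(Dataset):
    """Loads curated (prompt, response) pairs from a JSON-lines file."""

    def __init__(self, path: str, tokenizer, max_len: int = 2048):
        with open(path) as f:
            self.examples = [json.loads(line) for line in f]
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __len__(self) -> int:
        return len(self.examples)

    def __getitem__(self, idx: int) -> dict:
        ex = self.examples[idx]
        # Train on prompt + response so the model learns the response format.
        text = ex["prompt"] + "\n" + ex["response"] + self.tokenizer.eos_token
        return self.tokenizer(text, truncation=True, max_length=self.max_len)


tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # LLaMa tokenizers ship without a pad token

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lima-style-sft",
        num_train_epochs=15,             # assumption: many passes over the tiny dataset
        per_device_train_batch_size=1,
        gradient_accumulation_steps=32,  # effective batch of 32 on a single device
        learning_rate=1e-5,
        lr_scheduler_type="linear",
        bf16=True,
    ),
    train_dataset=CuratedSFTDataset("curated_1k.jsonl", tokenizer),  # hypothetical data file
    # mlm=False yields standard next-token (causal LM) labels from input_ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

With a dataset this small, the quality and diversity of the curated examples do most of the work; the training loop itself is unremarkable.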