{"id":4679,"date":"2024-04-08T05:14:00","date_gmt":"2024-04-07T20:14:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=4679"},"modified":"2024-04-08T05:14:00","modified_gmt":"2024-04-07T20:14:00","slug":"reft-representation-finetuning-for-language-models-loreft-low-rank-linear-subspace-reft","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=4679","title":{"rendered":"ReFT: Representation Finetuning for Language Models &amp; LoReFT: Low-rank Linear Subspace ReFT"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>ReFT: Representation Finetuning for Language Models\u00a0<\/strong>[74.5]<br>\u6211\u3005\u306f\u3001Representation Finetuning (ReFT)\u30e1\u30bd\u30c3\u30c9\u306e\u30d5\u30a1\u30df\u30ea\u30fc\u3092\u958b\u767a\u3059\u308b\u3002 LoReFT\u306f\u3001\u5f93\u6765\u306e\u6700\u5148\u7aefPEFT\u3088\u308a\u308210x-50\u500d\u9ad8\u3044\u30d1\u30e9\u30e1\u30fc\u30bf\u52b9\u7387\u306e\u4ecb\u5165\u3092\u5b66\u7fd2\u3059\u308b\u3002 \u672c\u7a3f\u3067\u306f,8\u3064\u306e\u30b3\u30e2\u30f3\u30bb\u30f3\u30b9\u63a8\u8ad6\u30bf\u30b9\u30af,4\u3064\u306e\u7b97\u8853\u63a8\u8ad6\u30bf\u30b9\u30af,Alpaca-Eval v1.0,GLUE\u306b\u3064\u3044\u3066\u7d39\u4ecb\u3059\u308b\u3002<br><a href=\"http:\/\/arxiv.org\/abs\/2404.03592v1\">\u8ad6\u6587<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2404.03592v1\">\u53c2\u8003\u8a33\uff08\u30e1\u30bf\u30c7\u30fc\u30bf\uff09<\/a>\u00a0 \u00a0(Thu, 04 Apr 2024 17:00:37 GMT)<\/li>\n\n\n\n<li>\u300cInstead of adapting model weights, ReFT methods train interventions that manipulate a small fraction of model representations in order to steer model behaviors to solve downstream tasks at inference time.\u300d\u3068\u3044\u3046\u624b\u6cd5\u306e\u63d0\u6848\u3001LoRA\u3068\u6bd4\u3079\u3066\u5c11\u306a\u3044\u30d1\u30e9\u30e1\u30fc\u30bf\u3067\u5f37\u529b\u306a\u6027\u80fd\u3092\u767a\u63ee\u3057\u3066\u3044\u308b\u3088\u3046\u306b\u898b\u3048\u308b\u3002\u300cIt takes \u224818 minutes to train our Llama-2 Chat 7B on a single A100 40G GPU with \u22481MB parameters on disk.\u300d\u3068\u8a08\u7b97\u6642\u9593\u3082\u5c11\u306a\u3044\u3002<\/li>\n\n\n\n<li>\u30ea\u30dd\u30b8\u30c8\u30ea\u306f<a href=\"https:\/\/github.com\/stanfordnlp\/pyreft\">stanfordnlp\/pyreft: ReFT: Representation Finetuning for Language Models 
(github.com)<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[299],"class_list":["post-4679","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-peft"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/4679","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4679"}],"version-history":[{"count":0,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/4679\/revisions"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4679"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4679"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4679"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
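<p>The pyreft repository's README illustrates how a LoReFT intervention is attached to a frozen base model. The sketch below follows that example and is only indicative, not a definitive recipe: the class names (pyreft.ReftConfig, pyreft.LoreftIntervention, pyreft.get_reft_model) and the chosen layer and rank are taken from the README as I recall it and may differ in newer releases of the library.</p>

<pre class="wp-block-code"><code>import torch
import transformers
import pyreft

# Load a frozen base model (the paper's timing example uses Llama-2 Chat 7B).
model_name_or_path = "meta-llama/Llama-2-7b-chat-hf"
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_name_or_path, torch_dtype=torch.bfloat16, device_map="cuda")
tokenizer = transformers.AutoTokenizer.from_pretrained(
    model_name_or_path, model_max_length=2048, padding_side="right", use_fast=False)
tokenizer.pad_token = tokenizer.unk_token

# Attach a rank-4 LoReFT intervention to the residual-stream output of one layer.
# Only the intervention parameters are trained; the base weights stay untouched,
# which is why the stored adapter is on the order of ~1MB on disk.
reft_config = pyreft.ReftConfig(representations={
    "layer": 15,
    "component": "block_output",
    "low_rank_dimension": 4,
    "intervention": pyreft.LoreftIntervention(
        embed_dim=model.config.hidden_size, low_rank_dimension=4),
})
reft_model = pyreft.get_reft_model(model, reft_config)
reft_model.print_trainable_parameters()  # reports only the tiny intervention parameter count
</code></pre>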