{"id":6959,"date":"2025-06-27T05:57:00","date_gmt":"2025-06-26T20:57:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=6959"},"modified":"2025-06-22T07:00:03","modified_gmt":"2025-06-21T22:00:03","slug":"dynamic-context-oriented-decomposition-for-task-aware-low-rank-adaptation-with-less-forgetting-and-faster-convergence","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=6959","title":{"rendered":"Dynamic Context-oriented Decomposition for Task-aware Low-rank Adaptation with Less Forgetting and Faster Convergence"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>Dynamic Context-oriented Decomposition for Task-aware Low-rank Adaptation with Less Forgetting and Faster Convergence\u00a0<\/strong>[131.4]<br>Proposes context-oriented decomposition adaptation (CorDA), a new method that initializes adapters in a task-aware manner. Through task awareness, the method enables two optional adaptation modes: knowledge-preserved mode (KPM) and instruction-previewed mode (IPM).<br><a href=\"http:\/\/arxiv.org\/abs\/2506.13187v1\">Paper<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2506.13187v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Mon, 16 Jun 2025 07:55:14 GMT)<\/li>\n\n\n\n<li>Introduces knowledge-preserved mode (KPM) and instruction-previewed mode (IPM); the reported results: \u201cExperimental results demonstrate that our method in KPM outperforms LoRA not only in downstream performance but also in maintaining zero-shot capabilities for both large language models and vision language models. Meanwhile, the IPM exhibits superior fine-tuning performance and faster convergence in both standard and quantized adaptation across various tasks.\u201d<\/li>\n\n\n\n<li>Sample code is available at <a href=\"https:\/\/github.com\/huggingface\/peft\/tree\/main\/examples\/corda_finetuning\">peft\/examples\/corda_finetuning at main \u00b7 huggingface\/peft \u00b7 GitHub<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[150,227],"class_list":["post-6959","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-fine-tuning","tag-lora"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/6959","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6959"}],"version-history":[{"count":1,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/6959\/revisions"}],"predecessor-version":[{"id":6960,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/6959\/revisions\/6960"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6959"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6959"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/word
press\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6959"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}