{"id":7524,"date":"2025-10-07T04:39:00","date_gmt":"2025-10-06T19:39:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=7524"},"modified":"2025-10-04T21:44:33","modified_gmt":"2025-10-04T12:44:33","slug":"can-mamba-learn-in-context-with-outliers-a-theoretical-generalization-analysis","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=7524","title":{"rendered":"Can Mamba Learn In Context with Outliers? A Theoretical Generalization Analysis\u00a0\/ Trained Mamba Emulates Online Gradient Descent in In-Context Linear Regression"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>Can Mamba Learn In Context with Outliers? A Theoretical Generalization Analysis&nbsp;<\/strong>[88.1]<br>Mamba models have attracted considerable attention for their computational advantages over Transformer-based models. This paper presents the first theoretical analysis of the training dynamics of a one-layer Mamba model. Mamba may require more training, but it maintains accurate predictions even when the fraction of outliers exceeds the threshold that linear Transformers can tolerate.<br><a href=\"http:\/\/arxiv.org\/abs\/2510.00399v1\">Paper<\/a>&nbsp;&nbsp;<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2510.00399v1\">Reference translation (metadata)<\/a>&nbsp; &nbsp;(Wed, 01 Oct 2025 01:25:01 GMT)<\/li>\n\n\n\n<li>A theoretical analysis of Mamba; the observation that \u300cWhile linear Transformers may converge faster with smaller batch 
sizes, they can only in-context generalize effectively when the fraction of outlier-containing context examples is less than 1\/2, much less than that for Mamba. Moreover, linear Transformers require significantly more context examples than Mamba to achieve comparable generalization performance. This highlights Mamba\u2019s superior robustness to a high density of outliers in ICL.\u300d is an interesting finding<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Trained Mamba Emulates Online Gradient Descent in In-Context Linear Regression\u00a0<\/strong>[90.9]<br>Mamba is an efficient Transformer alternative with linear complexity for long-sequence modeling. Recent empirical studies show that Mamba\u2019s in-context learning (ICL) is competitive with that of Transformers. This paper examines the training dynamics of Mamba on linear-regression ICL tasks.<br><a href=\"http:\/\/arxiv.org\/abs\/2509.23779v1\">Paper<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2509.23779v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Sun, 28 Sep 2025 09:48:49 GMT)<\/li>\n\n\n\n<li>\u300cThe loss bound is comparable to that of Transformer. Our theoretical results reveal the different mechanism between Transformer and Mamba on ICL, where Mamba emulates a variant of online gradient descent to perform in-context, while Transformers approximate a single step of gradient descent. 
Furthermore, our comparison with the S4 model demonstrates that the selection components are essential for Mamba to perform ICL.\u300d is another interesting observation<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[235],"class_list":["post-7524","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-mamba"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7524","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7524"}],"version-history":[{"count":2,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7524\/revisions"}],"predecessor-version":[{"id":7550,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7524\/revisions\/7550"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7524"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7524"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7524"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}