{"id":6300,"date":"2025-02-24T03:22:00","date_gmt":"2025-02-23T18:22:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=6300"},"modified":"2025-02-24T03:22:00","modified_gmt":"2025-02-23T18:22:00","slug":"native-sparse-attention-hardware-aligned-and-natively-trainable-sparse-attention","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=6300","title":{"rendered":"Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention\u00a0<\/strong>[32.5]<br>We introduce NSA, a natively trainable sparse attention mechanism that integrates algorithmic innovations with hardware-aligned optimizations. NSA employs a dynamic hierarchical sparse strategy, combining coarse-grained token compression with fine-grained token selection to preserve both global context awareness and local precision.<br><a href=\"http:\/\/arxiv.org\/abs\/2502.11089v1\">Paper<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2502.11089v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Sun, 16 Feb 2025 11:53:44 
GMT)<\/li>\n\n\n\n<li>A proposal from DeepSeek for hierarchical, sparse attention. It is several times faster than conventional attention implementations.<\/li>\n\n\n\n<li>The experiments use the configuration \"Following the common practice in state-of-the-art LLMs, our experiments adopt a backbone combining Grouped-Query Attention (GQA) and Mixture-of-Experts (MoE), featuring 27B total parameters with 3B active parameters.\" On quality, NSA's average benchmark score exceeds that of full attention.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[33,415],"class_list":["post-6300","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-attention","tag-transformer"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/6300","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6300"}],"version-history":[{"count":0,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/6300\/revisions"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6300"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?
rest_route=%2Fwp%2Fv2%2Fcategories&post=6300"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6300"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}