{"id":5933,"date":"2024-12-23T04:42:00","date_gmt":"2024-12-22T19:42:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=5933"},"modified":"2024-12-23T04:42:00","modified_gmt":"2024-12-22T19:42:00","slug":"byte-latent-transformer-patches-scale-better-than-tokens","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=5933","title":{"rendered":"Byte Latent Transformer: Patches Scale Better Than Tokens"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>Byte Latent Transformer: Patches Scale Better Than Tokens\u00a0<\/strong>[101.1]<br>The Byte Latent Transformer (BLT) encodes bytes into dynamically sized patches. For a fixed inference cost, BLT shows far better scaling than tokenization-based models by simultaneously growing both patch and model size.<br><a href=\"http:\/\/arxiv.org\/abs\/2412.09871v1\">Paper<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2412.09871v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Fri, 13 Dec 2024 05:33:32 GMT)<\/li>\n\n\n\n<li>Byte-level Transformers have been proposed in various forms, but building large models with them has been challenging in terms of compute. This work proposes the following method: \u300cTo efficiently allocate compute, we propose a dynamic, learnable method for grouping bytes into patches (\u00a72) and a new model architecture that mixes byte and patch 
information.\u300d The authors report: \u300cOverall, for fixed inference costs, BLT shows significantly better scaling than tokenization-based models, by simultaneously growing both patch and model size.\u300d<\/li>\n\n\n\n<li>The repository is at <a href=\"https:\/\/github.com\/facebookresearch\/blt\">GitHub &#8211; facebookresearch\/blt: Code for BLT research paper<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[415],"class_list":["post-5933","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-transformer"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/5933","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5933"}],"version-history":[{"count":0,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/5933\/revisions"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5933"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5933"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5933"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templa
ted":true}]}}