{"id":3571,"date":"2023-07-17T05:41:00","date_gmt":"2023-07-16T20:41:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=3571"},"modified":"2023-07-17T05:41:00","modified_gmt":"2023-07-16T20:41:00","slug":"polylm","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=3571","title":{"rendered":"PolyLM"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>PolyLM: An Open Source Polyglot Large Language Model&nbsp;<\/strong>[57.6]<br>\u6211\u3005\u306f6400\u5104(B)\u30c8\u30fc\u30af\u30f3\u3067\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u3055\u308c\u305f\u591a\u8a00\u8a9e\u5927\u8a00\u8a9e\u30e2\u30c7\u30eb(LLM)\u3067\u3042\u308bPolyLM\u306b\u3064\u3044\u3066\u8ff0\u3079\u308b\u3002 \u305d\u306e\u591a\u8a00\u8a9e\u7684\u80fd\u529b\u3092\u9ad8\u3081\u308b\u305f\u3081\u306b,1) \u30d0\u30a4\u30ea\u30f3\u30ac\u30eb\u30c7\u30fc\u30bf\u3092\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u30c7\u30fc\u30bf\u306b\u7d71\u5408\u3057,2) \u4e8b\u524d\u5b66\u7fd2\u4e2d\u306b\u82f1\u8a9e\u4ee5\u5916\u306e\u30c7\u30fc\u30bf\u306e\u6bd4\u7387\u309230%\u304b\u308960%\u306b\u5f15\u304d\u4e0a\u3052\u308b\u30ab\u30ea\u30ad\u30e5\u30e9\u30e0\u5b66\u7fd2\u6226\u7565\u3092\u63a1\u7528\u3059\u308b\u3002 \u3055\u3089\u306b,\u30e2\u30c7\u30eb\u5fae\u8abf\u6574\u306e\u305f\u3081\u306b,132.7K\u306e\u591a\u8a00\u8a9e\u547d\u4ee4\u3092\u81ea\u52d5\u7684\u306b\u751f\u6210\u3059\u308b\u591a\u8a00\u8a9e\u81ea\u5df1\u6307\u793a\u624b\u6cd5\u3092\u63d0\u6848\u3059\u308b\u3002<br><a href=\"http:\/\/arxiv.org\/abs\/2307.06018v1\">\u8ad6\u6587<\/a>&nbsp;&nbsp;<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2307.06018v1\">\u53c2\u8003\u8a33\uff08\u30e1\u30bf\u30c7\u30fc\u30bf\uff09<\/a>&nbsp; &nbsp;(Wed, 12 Jul 2023 09:00:37 GMT)<\/li>\n\n\n\n<li>\u30aa\u30fc\u30d7\u30f3\u30bd\u30fc\u30b9\u306e\u5927\u898f\u6a21LLM\u3001\u65e5\u672c\u8a9e\u306b\u3082\u5bfe\u5fdc\u3057\u3066\u3044\u308b\u3088\u3046\u3067\u671f\u5f85\u5927<\/li>\n\n\n\n<li>\u300cPOLYLM was trained using 
Megatron-LM 3 on a cluster of 32 A100 GPU (8\u00d780G) servers. We apply tensor model parallelism within a single node, setting tensor-model-parallel-size as 8. When training a 13B-parameter model, our code processes around 1170 tokens\/sec\/GPU, thus training over our dataset containing 640B tokens takes approximately 29 days.\u201d Training details like these are also very useful.<\/li>\n\n\n\n<li>Repositories: <a href=\"https:\/\/modelscope.cn\/models\/damo\/nlp_polylm_13b_text_generation\/summary\">PolyLM-Text-Generation-Multilingual-13B \u00b7 ModelScope (modelscope.cn)<\/a>, HuggingFace <a href=\"https:\/\/huggingface.co\/DAMO-NLP-MT\/polylm-13b\">DAMO-NLP-MT\/polylm-13b \u00b7 Hugging Face<\/a><\/li>\n<\/ul>\n\n\n\n<p>With talk of a commercially usable LLaMA v2 on the way, open-source LLMs are gaining momentum. <a href=\"https:\/\/www.zdnet.com\/article\/meta-to-release-open-source-commercial-ai-model-to-compete-with-openai-and-google\/\">Meta to release open-source commercial AI model to compete with OpenAI and Google | ZDNET<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>With talk of a commercially usable LLaMA v2 on the way, open-source LLMs are gaining momentum. Meta to release open-source commercial AI model to compete with &hellip; <a href=\"https:\/\/devneko.jp\/wordpress\/?p=3571\" class=\"more-link\"><span class=\"screen-reader-text\">&#8220;PolyLM&#8221; 
<\/span>Continue reading<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[223,267,293],"class_list":["post-3571","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-llm","tag-multilingual","tag-oss"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/3571","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3571"}],"version-history":[{"count":0,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/3571\/revisions"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3571"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3571"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3571"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}