{"id":5922,"date":"2024-12-27T05:20:00","date_gmt":"2024-12-26T20:20:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=5922"},"modified":"2024-12-27T05:20:00","modified_gmt":"2024-12-26T20:20:00","slug":"findings-of-the-second-babylm-challenge-sample-efficient-pretraining-on-developmentally-plausible-corpora","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=5922","title":{"rendered":"Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora\u00a0"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora\u00a0<\/strong>[79.0]<br>BabyLM Challenge\u306f\u3001\u4eba\u9593\u3068\u8a08\u7b97\u8a00\u8a9e\u5b66\u7fd2\u8005\u306e\u30c7\u30fc\u30bf\u52b9\u7387\u30ae\u30e3\u30c3\u30d7\u3092\u57cb\u3081\u308b\u305f\u3081\u306e\u30b3\u30df\u30e5\u30cb\u30c6\u30a3\u306e\u53d6\u308a\u7d44\u307f\u3067\u3042\u308b\u3002 \u53c2\u52a0\u8005\u306f1\u5104\u30ef\u30fc\u30c9\u4ee5\u4e0b\u306e\u56fa\u5b9a\u8a00\u8a9e\u30c7\u30fc\u30bf\u4e88\u7b97\u3067\u3001\u8a00\u8a9e\u30e2\u30c7\u30eb\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u3092\u6700\u9069\u5316\u3059\u308b\u305f\u3081\u306b\u7af6\u4e89\u3059\u308b\u3002<br><a href=\"http:\/\/arxiv.org\/abs\/2412.05149v1\">\u8ad6\u6587<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2412.05149v1\">\u53c2\u8003\u8a33\uff08\u30e1\u30bf\u30c7\u30fc\u30bf\uff09<\/a>\u00a0 \u00a0(Fri, 06 Dec 2024 16:06:08 GMT)<\/li>\n\n\n\n<li>\u300cParticipants could submit to a 10M-word text-only track, a 100Mword text-only track, and\/or a 100M-word and image multimodal track.\u300d\u3068\u3044\u3046\u30c7\u30fc\u30bf\u3092\u5236\u9650\u3057\u305f\u30b3\u30f3\u30da\u306e\u7d50\u679c<\/li>\n\n\n\n<li>\u300cWith 31 submissions from 17 countries, the challenge revealed several key insights: innovations in model architecture, training objectives, and dataset construction proved particularly effective, with GPT-BERT, a hybrid causalmasked language model architecture, emerging as the strongest approach for the Strict and StrictSmall 
tracks." A minimal sketch of what such a hybrid causal/masked objective can look like follows below.
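The paper names GPT-BERT, a hybrid causal-masked language model, as the strongest approach, but does not spell out its recipe in this summary. The sketch below illustrates the general idea only: one shared transformer trained on both a next-token (causal) loss and a masked-token reconstruction loss. The `model(tokens, causal=...)` interface, the 15% mask rate, and the equal loss weighting are all assumptions made for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def hybrid_lm_loss(model, tokens, mask_id, mlm_prob=0.15):
    """Hybrid causal + masked LM loss on one batch of token ids (B, T).

    Assumes a hypothetical `model(tokens, causal=...)` interface: the same
    transformer run with a causal attention mask (causal=True) or full
    bidirectional attention (causal=False), returning (B, T, V) logits.
    """
    # Causal branch: predict token t+1 from tokens up to position t.
    clm_logits = model(tokens, causal=True)
    clm_loss = F.cross_entropy(
        clm_logits[:, :-1].reshape(-1, clm_logits.size(-1)),
        tokens[:, 1:].reshape(-1),
    )

    # Masked branch: replace a random ~15% of tokens with [MASK] and
    # reconstruct only those positions (other targets are set to -100,
    # which cross_entropy ignores).
    is_masked = torch.rand(tokens.shape, device=tokens.device) < mlm_prob
    corrupted = tokens.masked_fill(is_masked, mask_id)
    mlm_logits = model(corrupted, causal=False)
    targets = tokens.masked_fill(~is_masked, -100)
    mlm_loss = F.cross_entropy(
        mlm_logits.reshape(-1, mlm_logits.size(-1)),
        targets.reshape(-1),
        ignore_index=-100,
    )

    # Equal weighting is an assumption; the actual GPT-BERT recipe may
    # combine the two objectives differently.
    return clm_loss + mlm_loss
```

Running the same weights under two attention-mask regimes is what lets one model serve both as a left-to-right generator and as a bidirectional encoder; the actual GPT-BERT submission unifies the two objectives more tightly than this two-pass sketch.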