{"id":5019,"date":"2024-06-18T03:43:00","date_gmt":"2024-06-17T18:43:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=5019"},"modified":"2024-06-18T03:43:00","modified_gmt":"2024-06-17T18:43:00","slug":"openvla","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=5019","title":{"rendered":"OpenVLA"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>OpenVLA: An Open-Source Vision-Language-Action Model\u00a0<\/strong>[131.7]<br>\u6211\u3005\u306f\u3001970k\u306e\u73fe\u5b9f\u4e16\u754c\u306e\u30ed\u30dc\u30c3\u30c8\u30c7\u30e2\u306e\u591a\u69d8\u306a\u30b3\u30ec\u30af\u30b7\u30e7\u30f3\u306b\u57fa\u3065\u3044\u3066\u8a13\u7df4\u3055\u308c\u305f\u30aa\u30fc\u30d7\u30f3\u30bd\u30fc\u30b9\u306eVLA\u3067\u3042\u308bOpenVLA\u3092\u7d39\u4ecb\u3057\u305f\u3002 OpenVLA\u306f\u6c4e\u7528\u7684\u306a\u64cd\u4f5c\u306e\u5f37\u529b\u306a\u7d50\u679c\u3092\u793a\u3057\u3001RT-2-X (55B) \u306e\u3088\u3046\u306a\u30af\u30ed\u30fc\u30ba\u30c9\u30e2\u30c7\u30eb\u3088\u308a\u308216.5%\u9ad8\u3044\u7d76\u5bfe\u7684\u306a\u30bf\u30b9\u30af\u6210\u529f\u7387\u3092\u793a\u3057\u305f\u3002 \u30e2\u30c7\u30eb\u30c1\u30a7\u30c3\u30af\u30dd\u30a4\u30f3\u30c8\u3001\u5fae\u8abf\u6574\u30ce\u30fc\u30c8\u30d6\u30c3\u30af\u3001\u305d\u3057\u3066Open X-Embodiment\u30c7\u30fc\u30bf\u30bb\u30c3\u30c8\u4e0a\u3067\u5927\u898f\u6a21\u306bVLA\u3092\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u3059\u308b\u305f\u3081\u306e\u30d3\u30eb\u30c8\u30a4\u30f3\u30b5\u30dd\u30fc\u30c8\u3092\u5099\u3048\u305fPyTorch\u3092\u30ea\u30ea\u30fc\u30b9\u3057\u3066\u3044\u307e\u3059\u3002<br><a href=\"http:\/\/arxiv.org\/abs\/2406.09246v1\">\u8ad6\u6587<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2406.09246v1\">\u53c2\u8003\u8a33\uff08\u30e1\u30bf\u30c7\u30fc\u30bf\uff09<\/a>\u00a0 \u00a0(Thu, 13 Jun 2024 15:46:55 GMT)<\/li>\n\n\n\n<li>\u30aa\u30fc\u30d7\u30f3\u306aVision-Language-Action\u30e2\u30c7\u30eb\u3001\u300cGiven an image observation and a language instruction, the model predicts 7-dimensional robot control actions.\u300d\u3068\u3044\u3046\u8a2d\u5b9a\u3067\u30d9\u30fc\u30b9\u306fLlama-2\u3002PEFT\u306e\u52b9\u679c\u306a\u3069\u975e\u5e38\u306b\u53c2\u8003\u306a\u308b\u3002<\/li>\n\n\n\n<li>\u30d7\u30ed\u30b8\u30a7\u30af\u30c8\u30b5\u30a4\u30c8\u306f<a href=\"https:\/\/openvla.github.io\/\">OpenVLA: An Open-Source Vision-Language-Action 
Model<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[251,343],"class_list":["post-5019","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-mllm","tag-robotic"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/5019","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5019"}],"version-history":[{"count":0,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/5019\/revisions"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5019"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5019"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5019"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
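As a rough illustration of the "image observation + language instruction → 7-dimensional action" interface noted above, here is a minimal inference sketch. It assumes the publicly released openvla/openvla-7b checkpoint on Hugging Face, loaded with trust_remote_code so that the repository's custom predict_action helper and prompt template are available; the exact prompt wording and the unnorm_key dataset statistics should be checked against the official project README.

```python
# Minimal OpenVLA inference sketch (assumption: the openvla/openvla-7b Hugging Face
# checkpoint, loaded with trust_remote_code, provides a predict_action helper).
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

processor = AutoProcessor.from_pretrained("openvla/openvla-7b", trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to("cuda:0")

# One image observation (a camera frame from the robot) plus a language instruction.
image = Image.open("observation.png")          # placeholder path
instruction = "pick up the remote"
prompt = f"In: What action should the robot take to {instruction}?\nOut:"

inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)

# The model predicts a 7-dimensional continuous action, roughly
# (delta x, delta y, delta z, delta roll, delta pitch, delta yaw, gripper),
# de-normalized with the statistics of the dataset named by unnorm_key.
action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
print(action)  # array of 7 values to send to the robot controller
```

The released codebase also supports parameter-efficient fine-tuning (LoRA) for adapting the model to a new robot setup, which is what makes the PEFT comparisons in the paper practically relevant.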