{"id":7513,"date":"2025-09-29T06:07:00","date_gmt":"2025-09-28T21:07:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=7513"},"modified":"2025-09-27T22:12:56","modified_gmt":"2025-09-27T13:12:56","slug":"video-models-are-zero-shot-learners-and-reasoners","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=7513","title":{"rendered":"Video models are zero-shot learners and reasoners"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>Video models are zero-shot learners and reasoners\u00a0<\/strong>[33.7]<br>Veo 3 can solve a wide variety of tasks it was not explicitly trained on. Veo's emergent zero-shot capabilities indicate that video models are on the path toward unified, generalist vision foundation models.<br><a href=\"http:\/\/arxiv.org\/abs\/2509.20328v1\">Paper<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2509.20328v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Wed, 24 Sep 2025 17:17:27 GMT)<\/li>\n\n\n\n<li>\u201cWe demonstrate that Veo 3 can solve a broad variety of tasks it wasn\u2019t explicitly trained for: segmenting objects, detecting edges, editing images, understanding physical properties, recognizing object affordances, simulating tool use, and more.\u201d and \u201cVeo 3 shows emergent zero-shot perceptual abilities well beyond the training task. 
Just like LLMs replaced task-specific NLP models, video models will likely replace most bespoke models in computer vision\u2014once they become sufficiently cheap and reliable.\u201d So the authors argue. It feels very futuristic, and at the same time there are aspects that are hard to grasp intuitively.<\/li>\n\n\n\n<li>The repository is at <a href=\"https:\/\/video-zero-shot.github.io\/\">Video models are zero-shot learners and reasoners<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[510],"class_list":["post-7513","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-510"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7513","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7513"}],"version-history":[{"count":2,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7513\/revisions"}],"predecessor-version":[{"id":7515,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7513\/revisions\/7515"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7513"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv
2%2Fcategories&post=7513"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7513"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}