{"id":4254,"date":"2024-01-18T05:51:00","date_gmt":"2024-01-17T20:51:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=4254"},"modified":"2024-01-18T05:51:00","modified_gmt":"2024-01-17T20:51:00","slug":"video-understanding-with-large-language-models-a-survey","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=4254","title":{"rendered":"Video Understanding with Large Language Models: A Survey"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>Video Understanding with Large Language Models: A Survey\u00a0<\/strong>[101.9]<br>This survey provides an overview of recent advances in video understanding that leverage the power of Large Language Models (LLMs). Approaches are categorized into LLM-based video agents, Vid-LLMs pretraining, Vid-LLMs instruction tuning, and hybrid methods. The survey also explores the expansive applications of Vid-LLMs across diverse domains, highlighting their remarkable scalability and versatility.<br><a href=\"http:\/\/arxiv.org\/abs\/2312.17432v1\">Paper<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2312.17432v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Fri, 29 Dec 2023 01:56:17 GMT)<\/li>\n\n\n\n<li>A survey of video understanding, an area where combining with LLMs has produced many strong results. It also briefly covers early methods, making clear how rapid the recent progress has been.<\/li>\n\n\n\n<li>The repository is at <a 
href=\"https:\/\/github.com\/yunlong10\/Awesome-LLMs-for-Video-Understanding\">yunlong10\/Awesome-LLMs-for-Video-Understanding: \ud83d\udd25\ud83d\udd25\ud83d\udd25Latest Papers, Codes and Datasets on Vid-LLMs. (github.com)<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[387,431],"class_list":["post-4254","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-survey","tag-video"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/4254","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4254"}],"version-history":[{"count":0,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/4254\/revisions"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4254"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4254"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4254"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}