{"id":6060,"date":"2025-01-23T05:59:00","date_gmt":"2025-01-22T20:59:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=6060"},"modified":"2025-01-23T05:59:00","modified_gmt":"2025-01-22T20:59:00","slug":"benchmark-evaluations-applications-and-challenges-of-large-vision-language-models-a-survey","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=6060","title":{"rendered":"Benchmark Evaluations, Applications, and Challenges of Large Vision Language Models: A Survey"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>Benchmark Evaluations, Applications, and Challenges of Large Vision Language Models: A Survey\u00a0<\/strong>[6.7]<br>VLMs (Multimodal Vision Language Models) have emerged as a transformative technology at the intersection of computer vision and natural language processing. VLMs demonstrate strong reasoning and understanding capabilities over visual and textual data, and outperform classical single-modality vision models in zero-shot classification.<br><a href=\"http:\/\/arxiv.org\/abs\/2501.02189v1\">Paper<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2501.02189v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Sat, 04 Jan 2025 04:59:33 GMT)<\/li>\n\n\n\n<li>A survey of VLMs: \u201cwe provide a systematic overview of VLMs in the following aspects: [1] model information of the major VLMs developed over the past five years (2019-2024); [2] the main architectures and training methods of these VLMs; [3] summary and categorization of the popular benchmarks and evaluation metrics of VLMs; [4] the applications of VLMs including embodied agents, robotics, and video generation; [5] the challenges and issues faced by current VLMs such as hallucination, fairness, and safety.\u201d<\/li>\n\n\n\n<li>Repository: <a href=\"https:\/\/github.com\/zli12321\/VLM-surveys\">GitHub &#8211; zli12321\/VLM-surveys: A collection and survey of vision-language model papers and models<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[387,434],"class_list":["post-6060","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-survey","tag-vision-language"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/6060","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6060"}],"version-history":[{"count":0,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/6060\/revisions"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6060"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6060"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ft
ags&post=6060"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}