{"id":7088,"date":"2025-07-17T05:18:00","date_gmt":"2025-07-16T20:18:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=7088"},"modified":"2025-07-12T15:21:17","modified_gmt":"2025-07-12T06:21:17","slug":"vlm2vec-v2-advancing-multimodal-embedding-for-videos-images-and-visual-documents","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=7088","title":{"rendered":"VLM2Vec-V2: Advancing Multimodal Embedding for Videos, Images, and Visual Documents"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>VLM2Vec-V2: Advancing Multimodal Embedding for Videos, Images, and Visual Documents\u00a0<\/strong>[105.4]<br>VLM2Vec-V2 is a unified framework for learning embeddings across diverse visual modalities. First, the paper introduces MMEB-V2, a comprehensive benchmark that extends MMEB with five new task types. It then trains VLM2Vec-V2, a general-purpose embedding model that supports text, image, video, and visual-document inputs.<br><a href=\"http:\/\/arxiv.org\/abs\/2507.04590v1\">Paper<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2507.04590v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Mon, 07 Jul 2025 00:51:57 GMT)<\/li>\n\n\n\n<li>\u300cMMEB-V2, an advanced multimodal embedding dataset designed to train and evaluate embedding models across three key visual modalities: images, videos, and visual 
documents.\u300d, together with a proposal of VLM2Vec-V2, an embedding model that leverages it. A quite general-purpose 2vec<\/li>\n\n\n\n<li>The project site is <a href=\"https:\/\/tiger-ai-lab.github.io\/VLM2Vec\/\">VLM2Vec<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[4,251],"class_list":["post-7088","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-2vec","tag-mllm"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7088","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7088"}],"version-history":[{"count":1,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7088\/revisions"}],"predecessor-version":[{"id":7089,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7088\/revisions\/7089"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7088"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7088"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7088"}],"curies":[{"name":"wp","href":"https:
\/\/api.w.org\/{rel}","templated":true}]}}