{"id":6062,"date":"2025-01-24T06:01:00","date_gmt":"2025-01-23T21:01:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=6062"},"modified":"2025-01-24T06:01:00","modified_gmt":"2025-01-23T21:01:00","slug":"ocrbench-v2-an-improved-benchmark-for-evaluating-large-multimodal-models-on-visual-text-localization-and-reasoning","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=6062","title":{"rendered":"OCRBench v2: An Improved Benchmark for Evaluating Large Multimodal Models on Visual Text Localization and Reasoning\u00a0"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>OCRBench v2: An Improved Benchmark for Evaluating Large Multimodal Models on Visual Text Localization and Reasoning\u00a0<\/strong>[72.6]<br>We introduce OCRBench v2, a large-scale bilingual text-centric benchmark for text recognition. The results show that 20 of the 22 LMMs score below 50 (out of a total of 100) and exhibit five types of limitations.<br><a href=\"http:\/\/arxiv.org\/abs\/2501.00321v1\">Paper<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2501.00321v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Tue, 31 Dec 2024 07:32:35 GMT)<\/li>\n\n\n\n<li>An OCR benchmark targeting MLLMs. The paper states: \u201cAfter carefully benchmarking state-of-the-art LMMs on OCRBench v2, we find that 36 out of 38 LMMs score below 50 (100 in total) and suffer from five-type limitations, including less frequently encountered text recognition, finegrained perception, layout perception, complex element parsing, and logical reasoning.\u201d<\/li>\n\n\n\n<li>The repository is available at <a href=\"https:\/\/github.com\/YuliangLiu\/MultimodalOCR\">https:\/\/github.com\/YuliangLiu\/MultimodalOCR<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[285,517],"class_list":["post-6062","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-ocr","tag-517"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/6062","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6062"}],"version-history":[{"count":0,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/6062\/revisions"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6062"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6062"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6062"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}