{"id":8090,"date":"2026-01-19T06:35:00","date_gmt":"2026-01-18T21:35:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=8090"},"modified":"2026-01-18T13:37:49","modified_gmt":"2026-01-18T04:37:49","slug":"ministral-3-molmo2-step3-vl","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=8090","title":{"rendered":"Ministral 3, Molmo2, STEP3-VL"},"content":{"rendered":"\n<p>OpenAI's advertising model (<a href=\"https:\/\/openai.com\/ja-JP\/index\/introducing-chatgpt-go\/\">ChatGPT Go arrives, now available worldwide | OpenAI<\/a>), an open specification for LLM interfaces (<a href=\"https:\/\/www.openresponses.org\/\">Open Responses<\/a>, <a href=\"https:\/\/x.com\/OpenAIDevs\/status\/2011862984595795974\">OpenAI Developers on X: \u300cToday we\u2019re announcing Open Responses: an open-source spec for building multi-provider, interoperable LLM interfaces built on top of the original OpenAI Responses API. 
\u2705 Multi-provider by default \u2705 Useful for real-world workflows \u2705 Extensible without fragmentation Build https:\/\/t.co\/SJiBFx1BOF\u300d \/ X<\/a>), Anthropic's Cowork (<a href=\"https:\/\/claude.com\/blog\/cowork-research-preview\">Introducing Cowork | Claude<\/a>), and reports that Gemini may be adopted for Apple's foundation models: there was plenty of news of business interest this week.<\/p>\n\n\n\n<p>On the open-model side, there were MLLM-related paper releases for Ministral 3, Molmo2, and STEP3-VL-10B. All strike a good balance between size and performance and look promising.<\/p>\n\n\n\n<p>A Safety Report, which examines a broad range of models, is also worth noting.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Ministral 3&nbsp;<\/strong>[159.0]<br>Ministral 3 is a family of parameter-efficient dense language models for compute- and memory-constrained applications. It comprises pretrained base models for general use, fine-tuned instruction models, and reasoning models for complex problem solving. Each model has image-understanding capabilities, and all are released under the Apache 2.0 license.<br><a href=\"http:\/\/arxiv.org\/abs\/2601.08584v1\">Paper<\/a>&nbsp;&nbsp;<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2601.08584v1\">Reference translation (metadata)<\/a>&nbsp; &nbsp;(Tue, 13 Jan 2026 14:06:03 GMT)<\/li>\n\n\n\n<li>Announced by Mistral. The paper states: \u300cA key component of Ministral 3 is our Cascade Distillation training strategy, an iterative pruning and distillation method, which progressively transfers pretrained knowledge from a large parent model down to a family of compact children models. Our recipe allows us to achieve performance that is competitive with models which had a much larger training budget.\u300d<\/li>\n\n\n\n<li>The project page is <a href=\"https:\/\/mistral.ai\/news\/mistral-3\">Introducing Mistral 3 | Mistral AI<\/a>, and the models are at <a href=\"https:\/\/huggingface.co\/collections\/mistralai\/ministral-3\">Ministral 3 &#8211; a mistralai Collection<\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Molmo2: Open Weights and Data for Vision-Language Models with Video Understanding and Grounding&nbsp;<\/strong>[73.5]<br>Molmo2 is a new family of video-language models (VLMs) that is state of the art among open-source models. It demonstrates exceptional new capabilities in point-driven grounding across single-image, multi-image, and video tasks. 
Our best 8B model outperforms others in its class of open-weight, open-data models on short video, counting, and captioning, and is competitive on long video.<br><a href=\"http:\/\/arxiv.org\/abs\/2601.10611v1\">Paper<\/a>&nbsp;&nbsp;<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2601.10611v1\">Reference translation (metadata)<\/a>&nbsp; &nbsp;(Thu, 15 Jan 2026 17:27:44 GMT)<\/li>\n\n\n\n<li>Ai2's latest VLM; performance is greatly improved over version 1.<\/li>\n\n\n\n<li>The repository is <a href=\"https:\/\/github.com\/allenai\/molmo2\">GitHub &#8211; allenai\/molmo2: Code for the Molmo2 Vision-Language Model<\/a>, and the models are linked from <a href=\"https:\/\/github.com\/allenai\/molmo2\">GitHub &#8211; allenai\/molmo2: Code for the Molmo2 Vision-Language Model<\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>STEP3-VL-10B Technical Report&nbsp;<\/strong>[115.9]<br>STEP3-VL-10B is a lightweight foundation model that redefines the trade-off between compact efficiency and frontier-level multimodal intelligence. It implements Parallel Coordinated Reasoning (PaCoRe) to scale test-time compute, allocating resources to scalable perceptual reasoning. It scores 92.2% on MMBench, 80.11% on MMMU, 94.43% on AIME2025, and 75.95% on MathVision.<br><a href=\"http:\/\/arxiv.org\/abs\/2601.09668v2\">Paper<\/a>&nbsp;&nbsp;<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2601.09668v2\">Reference translation (metadata)<\/a>&nbsp; &nbsp;(Thu, 15 Jan 2026 17:06:04 GMT)<\/li>\n\n\n\n<li>A small but powerful VLM, claimed to rival Qwen3 VL 235B A22.<\/li>\n\n\n\n<li>The project page is <a href=\"https:\/\/stepfun-ai.github.io\/Step3-VL-10B\/\">Step3-VL-10B: Compact Yet Frontier Multimodal Intelligence<\/a>, and the model is at <a href=\"https:\/\/huggingface.co\/stepfun-ai\/Step3-VL-10B\">stepfun-ai\/Step3-VL-10B \u00b7 Hugging Face<\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>A Safety Report on GPT-5.2, Gemini 3 Pro, Qwen3-VL, Doubao 1.8, Grok 4.1 Fast, Nano Banana Pro, and Seedream 4.5\u00a0<\/strong>[101.4]<br>The report covers GPT-5.2, Gemini 3 Pro, Qwen3-VL, Doubao 1.8, Grok 4.1 Fast, Nano Banana Pro, and Seedream 4.5. It evaluates each model across language, vision-language, and image-generation settings using a unified protocol that integrates benchmark evaluation, adversarial evaluation, multilingual evaluation, and compliance evaluation.<br><a href=\"http:\/\/arxiv.org\/abs\/2601.10527v1\">Paper<\/a>\u00a0\u00a0<a 
href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2601.10527v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Thu, 15 Jan 2026 15:52:52 GMT)<\/li>\n\n\n\n<li>\u300cIn this report, we present an integrated safety evaluation of 7 frontier models: GPT-5.2, Gemini 3 Pro, Qwen3-VL, Doubao 1.8, Grok 4.1 Fast, Nano Banana Pro, and Seedream 4.5. We evaluate each model across language, vision\u2013language, and image generation settings using a unified protocol that integrates benchmark evaluation, adversarial evaluation, multilingual evaluation, and compliance evaluation.\u300d A safety evaluation covering MLLMs and image-generation models. As a VLM, GPT-5.2's scores are impressive, as one would expect.<\/li>\n\n\n\n<li>The project page is <a href=\"https:\/\/xsafeai.github.io\/AI-safety-report\/\">Safety Report: GPT-5.2, Gemini 3 Pro, Qwen3-VL, Nano Banana Pro, Seedream 4.5<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI's advertising model (ChatGPT Go arrives, now available worldwide | OpenAI), an open specification for LLM interfaces (Open Responses, OpenAI Developers on X: \u300c &hellip; <a href=\"https:\/\/devneko.jp\/wordpress\/?p=8090\" class=\"more-link\"><span class=\"screen-reader-text\">&#8220;Ministral 3, Molmo2, STEP3-VL&#8221; 
<\/span>Read more<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[223,251],"class_list":["post-8090","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-llm","tag-mllm"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/8090","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=8090"}],"version-history":[{"count":4,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/8090\/revisions"}],"predecessor-version":[{"id":8095,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/8090\/revisions\/8095"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=8090"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=8090"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=8090"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}