{"id":7834,"date":"2025-12-01T06:08:00","date_gmt":"2025-11-30T21:08:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=7834"},"modified":"2025-11-29T17:34:20","modified_gmt":"2025-11-29T08:34:20","slug":"opus-4-5-dr-tulu-qwen3-vl-hunyuanvideo-1-5","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=7834","title":{"rendered":"Claude Opus 4.5, DeepSeekMath-V2, DR Tulu, Qwen3-VL, HunyuanVideo 1.5"},"content":{"rendered":"\n<p>Last week brought the announcement of Opus 4.5 (<a href=\"https:\/\/www.anthropic.com\/news\/claude-opus-4-5\">Introducing Claude Opus 4.5 \\ Anthropic<\/a>), with Anthropic Claude once again showing impressive performance, particularly in code generation.<\/p>\n\n\n\n<p>On the open-model front, the highlights are the math-focused DeepSeekMath-V2 (<a href=\"https:\/\/huggingface.co\/deepseek-ai\/DeepSeek-Math-V2\">deepseek-ai\/DeepSeek-Math-V2 \u00b7 Hugging Face<\/a>), the deep-research-focused DR Tulu (<a href=\"https:\/\/allenai.org\/blog\/dr-tulu\">DR Tulu: An open, end-to-end training recipe for long-form deep research | Ai2<\/a>), and the technical reports for Qwen3-VL and HunyuanVideo 1.5.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>DR Tulu: Reinforcement Learning with Evolving Rubrics for Deep Research&nbsp;<\/strong>[152.2]<br>Deep research models conduct multi-step research and generate long-form, well-grounded answers. 
Most open deep-research models, however, are trained on short-form QA tasks via reinforcement learning with verifiable rewards. We develop Deep Research Tulu (DR Tulu-8B), the first open model directly trained for open-ended, long-form deep research.<br><a href=\"http:\/\/arxiv.org\/abs\/2511.19399v2\">Paper<\/a>&nbsp;&nbsp;<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2511.19399v2\">Reference translation (metadata)<\/a>&nbsp; &nbsp;(Wed, 26 Nov 2025 14:52:10 GMT)<\/li>\n\n\n\n<li>&#8220;In this paper, we introduce Deep Research Tulu (DR Tulu-8B), the first open model that is directly trained for open-ended, long-form deep research tasks. 
To address the challenge of verification in long-form tasks, DR Tulu is first finetuned on high-quality, naturally occurring user data, and then trained via a new method we call Reinforcement Learning with Evolving Rubrics (RLER), in which we construct and maintain rubrics that co-evolve with the policy model during training.&#8221; The paper proposes a model specialized for deep research; the reinforcement-learning stage is also an interesting design.<\/li>\n\n\n\n<li>The repository is at <a href=\"https:\/\/github.com\/rlresearch\/dr-tulu\">GitHub &#8211; rlresearch\/dr-tulu: Official repository for DR Tulu: Reinforcement Learning with Evolving Rubrics for Deep Research<\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Qwen3-VL Technical Report&nbsp;<\/strong>[153.4]<br>Qwen3-VL is the most capable vision-language model in the series to date, achieving strong performance across a wide range of multimodal benchmarks. It supports interleaved contexts of up to 256K tokens and integrates text, images, and video seamlessly. Qwen3-VL offers three core pillars: (i) very strong pure-text understanding, in some cases surpassing comparable text-only backbones; (ii) 
robust long-context understanding with a native 256K window for both text and interleaved multimodal inputs; (iii) advanced multimodal reasoning across single-image, multi-image, and video tasks.<br><a href=\"http:\/\/arxiv.org\/abs\/2511.21631v1\">Paper<\/a>&nbsp;&nbsp;<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2511.21631v1\">Reference translation (metadata)<\/a>&nbsp; &nbsp;(Wed, 26 Nov 2025 17:59:08 GMT)<\/li>\n\n\n\n<li>&#8220;The Qwen3-VL framework integrates a vision encoder and a language model decoder to process multimodal inputs, including text, images, and video. The vision encoder is specifically designed to handle dynamic, native-resolution visual inputs, mapping them to visual tokens of variable length.&#8221; With this design it delivers performance comparable to commercial models, surpassing them in some cases.<\/li>\n\n\n\n<li>The repository is at <a href=\"https:\/\/github.com\/QwenLM\/Qwen3-VL\">GitHub &#8211; QwenLM\/Qwen3-VL: Qwen3-VL is the multimodal large language model series developed by Qwen team, Alibaba Cloud.<\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>HunyuanVideo 1.5 Technical Report&nbsp;<\/strong>[97.0]<br>HunyuanVideo 1.5 is a lightweight yet powerful open-source video generation model. 
It achieves state-of-the-art visual quality and motion coherence with only 8.3 billion parameters. All open-source assets are released at https:\/\/github.com\/Tencent-Hunyuan\/HunyuanVideo-1.5.<br><a href=\"http:\/\/arxiv.org\/abs\/2511.18870v2\">Paper<\/a>&nbsp;&nbsp;<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2511.18870v2\">Reference translation (metadata)<\/a>&nbsp; &nbsp;(Tue, 25 Nov 2025 02:52:10 GMT)<\/li>\n\n\n\n<li>An open model for video generation<\/li>\n\n\n\n<li>The repository is at <a href=\"https:\/\/github.com\/Tencent-Hunyuan\/HunyuanVideo-1.5\">GitHub &#8211; Tencent-Hunyuan\/HunyuanVideo-1.5: HunyuanVideo-1.5: A leading lightweight video generation model<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Last week brought the announcement of Opus 4.5 (Introducing Claude Opus 4.5 \\ Anthropic), with Anthropic Claude once again showing impressive performance, particularly in code generation. On the open-model front, the math &hellip; <a href=\"https:\/\/devneko.jp\/wordpress\/?p=7834\" class=\"more-link\"><span class=\"screen-reader-text\">&#8220;Claude Opus 4.5, DeepSeekMath-V2, DR Tulu, Qwen3-VL, HunyuanVideo 1.5&#8221; 
<\/span>Continue reading<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[686,223,232],"class_list":["post-7834","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-deepresearch","tag-llm","tag-lrm"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7834","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7834"}],"version-history":[{"count":3,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7834\/revisions"}],"predecessor-version":[{"id":7857,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/7834\/revisions\/7857"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7834"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7834"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7834"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}