{"id":6650,"date":"2025-05-01T05:24:00","date_gmt":"2025-04-30T20:24:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=6650"},"modified":"2025-05-01T05:24:00","modified_gmt":"2025-04-30T20:24:00","slug":"multilingual-performance-biases-of-large-language-models-in-education","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=6650","title":{"rendered":"Multilingual Performance Biases of Large Language Models in Education\u00a0"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>Multilingual Performance Biases of Large Language Models in Education\u00a0<\/strong>[39.1]<br>Large language models (LLMs) are increasingly being adopted in educational settings. This study examines whether their use in non-English educational contexts is warranted.<br><a href=\"http:\/\/arxiv.org\/abs\/2504.17720v1\">Paper<\/a>\u00a0\u00a0<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2504.17720v1\">Reference translation (metadata)<\/a>\u00a0 \u00a0(Thu, 24 Apr 2025 16:32:31 GMT)<\/li>\n\n\n\n<li>The paper rightly cautions that &#8220;However, we note that certain models can do terribly on some tasks and languages, so we recommend first verifying that a particular model works well in a particular language on a specific education-related task before deployment.&#8221; Even so, its finding that &#8220;Particularly, we find that GPT4o and Gemini 2.0 perform consistently well across all languages with a few exceptions.&#8221; gives the impression that multilingual support has come a long way.<\/li>\n\n\n\n<li>The repository is at <a href=\"https:\/\/github.com\/eth-lre\/multilingual-educational-llm-bias\">GitHub &#8211; eth-lre\/multilingual-educational-llm-bias: Data and code for &#8220;Multilingual Performance Biases of Large Language Models in Education&#8221;<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[267,587],"class_list":["post-6650","post","type-post","status-publish","format-standard","hentry","category-arxiv","tag-multilingual","tag-587"],"_links":{"self":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/6650","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6650"}],"version-history":[{"count":0,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/6650\/revisions"}],"wp:attachment":[{"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6650"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6650"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devneko.jp\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6650"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}