When Less Language is More: Language-Reasoning Disentanglement Makes LLMs Better Multilingual Reasoners
When Less Language is More: Language-Reasoning Disentanglement Makes LLMs Better Multilingual Reasoners [111.5] The authors show that language-specific ablation consistently improves multilingual reasoning performance. Compared with post-training ablation, training-free ablation achieves comparable or superior results with minimal computational overhead. Reference translation of paper metadata (Wed, 21 May 2025 08:35:05 GMT)
Building on the hypothesis that "Drawing inspiration from cognitive neuroscience, which suggests that human reasoning functions largely independently of language processing, we hypothesize that LLMs similarly encode reasoning and language as separable components that can be disentangled to enhance multilingual reasoning," the authors report: "Through targeted interventions in the LLMs' activation space, we demonstrate that removing language-specific information significantly improves reasoning performance across languages."
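As a rough illustration of what a training-free activation-space intervention of this kind can look like, the sketch below projects a "language-specific direction" out of hidden states (directional ablation). The difference-of-means estimate of the direction, and all names such as `ablate_direction` and `lang_direction`, are assumptions for illustration, not the paper's exact procedure; in practice the activations would come from a fixed layer of an actual LLM rather than random tensors.

```python
import torch

def ablate_direction(hidden: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of `hidden` along `direction`: h' = h - (h . u) u."""
    u = direction / direction.norm()
    coeff = hidden @ u                     # (...,) projection coefficients
    return hidden - coeff.unsqueeze(-1) * u

# Toy demo (an assumption, not the paper's recipe): estimate a language
# direction as the difference of mean activations between two languages,
# then ablate it from new hidden states.
d_model = 64
acts_lang_a = torch.randn(100, d_model) + 2.0  # stand-in activations, language A
acts_lang_b = torch.randn(100, d_model)        # stand-in activations, language B
lang_direction = acts_lang_a.mean(0) - acts_lang_b.mean(0)

h = torch.randn(8, d_model)
h_ablated = ablate_direction(h, lang_direction)
# Ablated states have (numerically) zero component along the direction:
print(h_ablated @ (lang_direction / lang_direction.norm()))
```

Because this intervention is a single projection applied at inference time (e.g. via a forward hook on a chosen layer), it requires no gradient updates, which is consistent with the summary's point that the training-free variant adds minimal computational overhead.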