「By mapping natural language, DNA/RNA/protein sequences, molecular strings, and materials representations into a shared backbone via task-aware tokenization and consistent input–output schemas, the model moves beyond narrow, discipline-specific solutions and limited task menus.」 In other words, an effort to unify a natural-language LLM with scientific representations in a single model. 「The model is pretrained on a 206B-token corpus spanning scientific text, pure sequences, and sequence–text pairs, then aligned via SFT on 40M instructions, annealed cold-start bootstrapping to elicit long-form chain-of-thought, and reinforcement learning with task-specific reward shaping, which instills deliberate scientific reasoning.」 A head-on approach: the full modern training pipeline (pretraining, SFT, cold-start CoT bootstrapping, then RL) applied end to end.
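To make the "task-aware tokenization and consistent input–output schemas" idea concrete, here is a minimal sketch in Python. The sentinel tag names (`<dna>`, `<protein>`, `<smiles>`, `<material>`) and the per-modality pre-tokenizers are illustrative assumptions, not the paper's actual implementation; the point is only that heterogeneous inputs are routed into one shared token stream.

```python
# Minimal sketch of task-aware tokenization: each modality is wrapped in
# sentinel tags so a single shared backbone can consume mixed inputs.
# Tag names and tokenization rules below are hypothetical.

MODALITY_TAGS = {
    "text":     ("<text>", "</text>"),
    "dna":      ("<dna>", "</dna>"),
    "protein":  ("<protein>", "</protein>"),
    "smiles":   ("<smiles>", "</smiles>"),     # molecular strings
    "material": ("<material>", "</material>"),
}

def tokenize_segment(modality: str, payload: str) -> list[str]:
    """Route a raw segment through a modality-specific pre-tokenizer,
    then wrap it in sentinel tags for the shared backbone."""
    open_tag, close_tag = MODALITY_TAGS[modality]
    if modality in ("dna", "protein"):
        # Biological sequences: character-level tokens (one base/residue each).
        body = list(payload.upper())
    else:
        # Text, SMILES, materials strings: whitespace split as a stand-in
        # for a learned subword tokenizer such as BPE.
        body = payload.split()
    return [open_tag, *body, close_tag]

def build_example(segments: list[tuple[str, str]]) -> list[str]:
    """Concatenate heterogeneous segments into one token stream,
    giving every task the same input-output schema."""
    tokens: list[str] = []
    for modality, payload in segments:
        tokens.extend(tokenize_segment(modality, payload))
    return tokens

if __name__ == "__main__":
    example = build_example([
        ("text", "Predict the function of the following protein:"),
        ("protein", "MKTAYIAKQR"),
    ])
    print(example)
```

Under this kind of scheme, a protein-function question, a SMILES property query, and a plain-text instruction all become ordinary token sequences, which is what lets one backbone replace a menu of discipline-specific models.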