The paper proposes a two-stage approach (EFM = Embodied Foundation Models): "1) Supervised Fine-Tuning (SFT) wherein we fine-tune EFMs using behavioral cloning as well as 'steps-to-go' prediction objectives, and 2) Self-Improvement (Online RL) wherein EFMs autonomously practice downstream tasks and rapidly improve via optimizing self-predicted rewards." The authors report that this combination is effective: "Finally, we demonstrated that this novel combination uniquely unlocks a capability not possible by current methods: autonomously acquiring new skills that generalize far beyond the tasks covered in the imitation learning datasets. These findings highlight the transformative potential of combining pretrained foundation models with online Self-Improvement to enable autonomous skill acquisition in robotics."
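A minimal sketch of the two ideas named in the quote, under my own assumptions (the function names and the exact reward form are illustrative, not taken from the paper): during SFT, a "steps-to-go" head is trained on successful demonstrations to predict how many steps remain until task completion; during Self-Improvement, a reward can then be derived from the model's own predictions, e.g. rewarding a decrease in predicted steps-to-go between consecutive states.

```python
def steps_to_go_targets(episode_length: int) -> list[int]:
    """SFT targets from a successful demo of T steps.

    At step t (0-indexed), the label is T - t: how many steps remain
    until the demonstration reaches success.
    """
    return [episode_length - t for t in range(episode_length)]


def self_predicted_reward(pred_prev: float, pred_curr: float) -> float:
    """Hypothetical self-predicted reward for online RL.

    The agent is rewarded when its own steps-to-go prediction drops,
    i.e. when it believes it moved closer to completing the task.
    (One plausible reading of "optimizing self-predicted rewards";
    the paper's actual reward definition may differ.)
    """
    return pred_prev - pred_curr


# Example: a 4-step successful demo yields labels 4, 3, 2, 1,
# and moving from a predicted 3 steps-to-go down to 1 yields reward 2.
print(steps_to_go_targets(4))          # → [4, 3, 2, 1]
print(self_predicted_reward(3.0, 1.0)) # → 2.0
```

The appeal of such a reward is that it requires no external success detector: the same model fine-tuned during SFT supplies the learning signal for autonomous practice.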