The research introduces a paradigm called "early experience," in which language agents learn from the future states produced by their own actions rather than from external reward signals. Through two strategies, implicit world modeling (predicting how the environment responds to a given action) and self-reflection (reasoning about why expert actions outperform the agent's own alternatives), the agents achieve improved performance and generalization across diverse environments, positioning early experience as a bridge between imitation learning and reinforcement learning. The findings highlight the effectiveness of early experience in agent training and its potential for enhancing learning in complex tasks.
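To make the two strategies concrete, below is a minimal sketch of how the two kinds of reward-free training data could be assembled from an agent's own rollouts. The `Step` dataclass, the `propose_alternative` and `reflect` callbacks, and the prompt templates are illustrative assumptions, not the paper's exact formats.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    state: str       # observation before acting (e.g., a web page or tool output)
    action: str      # the action taken in that state
    next_state: str  # observation produced by the action

def world_modeling_examples(rollout: list[Step]) -> list[dict]:
    """Implicit world modeling: supervise the agent to predict the next
    observation from (state, action), so the policy internalizes
    environment dynamics without any reward signal."""
    return [
        {
            "prompt": f"State: {s.state}\nAction: {s.action}\nPredict the next state:",
            "target": s.next_state,
        }
        for s in rollout
    ]

def self_reflection_examples(
    expert_steps: list[Step],
    propose_alternative: Callable[[str], str],
    reflect: Callable[[str, str, str], str],
) -> list[dict]:
    """Self-reflection: at each expert state, propose an alternative action,
    ask the model to explain why the expert action is preferable, and train
    on the reflection followed by the expert action."""
    examples = []
    for step in expert_steps:
        alt = propose_alternative(step.state)              # agent's own candidate action
        rationale = reflect(step.state, alt, step.action)  # in practice, an LLM call
        examples.append({
            "prompt": f"State: {step.state}",
            "target": f"Reflection: {rationale}\nAction: {step.action}",
        })
    return examples
```

In both cases the resulting (prompt, target) pairs can feed standard supervised fine-tuning, which is what lets the approach sit between imitation learning (no interaction) and reinforcement learning (interaction plus rewards): the agent interacts with the environment, but supervision comes from observed states rather than a reward function.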