Large language models are built on decades of accumulated human text, but their data consumption is outpacing human production, creating a need for AI systems that generate their own experience. The article discusses the importance of exploration in reinforcement learning and how better exploration can improve generalization, highlighting the role of pretraining in easing exploration challenges. It argues that future AI progress will depend more on collecting the right experiences than on merely increasing model capacity.
A novel actor-critic algorithm is introduced that achieves optimal sample efficiency in reinforcement learning, attaining a sample complexity of \(O(dH^5 \log|\mathcal{A}|/\epsilon^2 + d H^4 \log|\mathcal{F}|/\epsilon^2)\). The algorithm combines optimism with off-policy critic estimation, and it is extended to the hybrid RL setting, where access to offline data yields further efficiency gains. Numerical experiments corroborate the theoretical findings.
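To make the general structure concrete, the sketch below shows an optimistic actor-critic loop on a toy tabular finite-horizon MDP: the critic is fit from all data collected so far regardless of which policy generated it (the off-policy aspect), an exploration bonus supplies the optimism, and the actor takes a softmax (exponential-weights) step toward the optimistic critic. This is a minimal illustration under assumed choices (environment sizes, bonus scale `bonus_c`, step size `eta`), not the paper's algorithm.

```python
import numpy as np

# Toy tabular finite-horizon MDP (assumed for illustration).
rng = np.random.default_rng(0)
S, A, H = 5, 3, 4                                   # states, actions, horizon (assumed sizes)
P = rng.dirichlet(np.ones(S), size=(S, A))          # true transitions P[s, a] -> dist over s'
R = rng.uniform(size=(S, A))                        # true mean rewards in [0, 1]

policy = np.full((H, S, A), 1.0 / A)                # per-step softmax actor
counts = np.zeros((H, S, A))                        # visit counts
r_sum = np.zeros((H, S, A))                         # accumulated rewards
p_counts = np.zeros((H, S, A, S))                   # empirical transition counts
eta, bonus_c = 0.5, 1.0                             # actor step size and bonus scale (assumed)

for episode in range(2000):
    # Critic: optimistic Q estimates fit from *all* transitions gathered so far,
    # independent of the policy that produced them (off-policy estimation).
    Q = np.zeros((H, S, A))
    V_next = np.zeros(S)                            # value at step h+1 (zero beyond the horizon)
    for h in range(H - 1, -1, -1):
        n = np.maximum(counts[h], 1)
        r_hat = r_sum[h] / n                        # empirical mean reward
        p_hat = p_counts[h] / n[:, :, None]         # empirical transition model
        # Empirical Bellman backup plus a count-based optimism bonus, clipped
        # to the maximum achievable remaining return.
        Q[h] = np.minimum(r_hat + p_hat @ V_next + bonus_c / np.sqrt(n), H - h)
        V_next = np.sum(policy[h] * Q[h], axis=1)   # value of the current actor

    # Actor: exponential-weights (softmax) step toward the optimistic critic.
    logits = np.log(policy) + eta * Q
    policy = np.exp(logits - logits.max(axis=2, keepdims=True))
    policy /= policy.sum(axis=2, keepdims=True)

    # Collect one fresh episode with the updated actor and store the data.
    s = 0
    for h in range(H):
        a = rng.choice(A, p=policy[h, s])
        s_next = rng.choice(S, p=P[s, a])
        counts[h, s, a] += 1
        r_sum[h, s, a] += R[s, a]
        p_counts[h, s, a, s_next] += 1
        s = s_next
```

In a hybrid RL variant of this sketch, an offline dataset would simply be added to `counts`, `r_sum`, and `p_counts` before the loop starts, so the critic begins with better-covered estimates and smaller bonuses.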