Reinforcement Pre-Training (RPT) recasts next-token prediction as a reasoning task trained with reinforcement learning: the model reasons about the context before predicting the next token and receives a verifiable reward when the prediction matches the corpus. Because this turns vast unannotated text into RL training data, RPT improves language modeling accuracy, provides a strong foundation for subsequent reinforcement fine-tuning, and shows consistent gains in prediction accuracy as training compute increases.
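A minimal sketch of the reward idea, assuming the policy emits a reasoning trace followed by a final next-token guess wrapped in `<answer>` tags (the tag format, function name, and string-prefix matching rule here are illustrative assumptions, not the paper's exact implementation):

```python
import re

def rpt_reward(rollout: str, ground_truth_continuation: str) -> float:
    """Score a single rollout: 1.0 if the final next-token guess
    reproduces the start of the true corpus continuation, else 0.0."""
    match = re.search(r"<answer>(.*?)</answer>", rollout, re.DOTALL)
    if match is None:
        return 0.0  # malformed rollout: no final prediction to score
    prediction = match.group(1)
    # Prefix check: the guess must exactly match the corpus text at this
    # position (a stand-in for the paper's token-boundary matching).
    return 1.0 if prediction and ground_truth_continuation.startswith(prediction) else 0.0

# The corpus continues " Paris.", so a rollout ending in " Paris" scores 1.0.
print(rpt_reward(
    "The context names a country and asks for its capital... <answer> Paris</answer>",
    " Paris.",
))
```

Because the reward is checked directly against the corpus, any text stream can serve as RL training data with no human labeling.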
Reinforcement Learning on Pre-Training Data (RLPT) is a paradigm for scaling LLM training in which the policy autonomously explores meaningful trajectories drawn from pre-training data, with rewards derived from the data itself rather than from human annotation. Using a next-segment reasoning objective, where the policy is rewarded for predicting the text segment that follows a given context, RLPT delivers significant gains on reasoning benchmarks and encourages broader context exploration, which improves generalization.
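A minimal sketch of the next-segment setup under similar assumptions: documents are split into (context, next-segment) pairs, and a stand-in lexical judge replaces the generative reward model such an approach would use in practice (all function names, the one-sentence segmenting, and the overlap threshold are hypothetical):

```python
import random

def make_next_segment_example(document: str, min_context_sents: int = 2):
    """Split a corpus document into (context, next_segment). The segment
    that truly follows is the self-supervised target, so no human reward
    annotation is needed (one-sentence segments are an assumption)."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    if len(sentences) <= min_context_sents:
        raise ValueError("document too short to split")
    cut = random.randint(min_context_sents, len(sentences) - 1)
    return ". ".join(sentences[:cut]) + ".", sentences[cut] + "."

def judge_consistency(prediction: str, reference: str) -> bool:
    """Stand-in judge: token-overlap heuristic. In practice this role
    would be played by a generative reward model; a lexical check keeps
    the sketch runnable."""
    pred, ref = set(prediction.lower().split()), set(reference.lower().split())
    return bool(ref) and len(pred & ref) / len(ref) >= 0.6

def rlpt_reward(predicted_segment: str, reference_segment: str) -> float:
    """Self-supervised reward: 1.0 if the predicted segment is judged
    consistent with the segment that actually follows in the corpus."""
    return 1.0 if judge_consistency(predicted_segment, reference_segment) else 0.0

doc = ("Transformers process tokens in parallel. Attention weighs pairwise "
       "interactions. Scaling depth and data improves quality. "
       "Pre-training data is abundant and unlabeled.")
context, target = make_next_segment_example(doc)
# The policy would reason over `context` and emit a predicted segment;
# here a hand-written prediction stands in for the policy output.
print(target, "| reward:", rlpt_reward("Scaling data and depth improves quality.", target))
```

The key property is that the target segment comes from the document itself, so the reward signal scales with the size of the corpus rather than with annotation effort.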