The article presents EntropyLong, a method for training long-context language models that uses predictive uncertainty to verify the quality of long-range dependencies. The approach constructs training samples by pairing original documents with semantically relevant contexts, and models trained on these samples show significant gains on tasks requiring distant information, as measured on the RULER benchmark and LongBench v2. The study highlights the effectiveness of entropy-based verification for improving long-context understanding in language models.
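The core idea can be sketched as follows: a candidate context is kept only if it lowers the model's predictive uncertainty (entropy) over the target tokens. This is a minimal illustration, not the paper's implementation; the toy distributions, the `min_drop` threshold, and the function names are all hypothetical, and a real system would obtain the next-token distributions from a language model run with and without the retrieved context.

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of one next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def mean_entropy(dists):
    """Average entropy across a span of target-token distributions."""
    return sum(token_entropy(d) for d in dists) / len(dists)

def verify_context(dists_without, dists_with, min_drop=0.1):
    """Accept a retrieved context only if it reduces mean predictive
    entropy on the target span by at least `min_drop` nats
    (threshold is illustrative, not from the paper)."""
    return mean_entropy(dists_without) - mean_entropy(dists_with) >= min_drop

# Toy example: prepending the context sharpens the model's predictions.
without_ctx = [[0.25, 0.25, 0.25, 0.25], [0.4, 0.3, 0.2, 0.1]]
with_ctx = [[0.9, 0.05, 0.03, 0.02], [0.8, 0.1, 0.05, 0.05]]
print(verify_context(without_ctx, with_ctx))  # → True (entropy drops)
```

In this sketch a near-uniform distribution (high uncertainty) becoming a peaked one (low uncertainty) signals that the context supplies genuinely useful long-range information, so the pair is admitted as a training sample.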