Researchers demonstrated training a model with torchft and torchtitan under extreme synthetic failure rates, achieving fault tolerance without relying on checkpoints. By employing a novel asynchronous weight transfer method for recovery, they isolated failures and maintained training continuity across multiple GPU groups: when one group failed, the remaining groups kept training, and the recovering group pulled up-to-date weights from a healthy peer rather than restarting from disk.
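As a rough illustration of how per-step fault-tolerant training is wired up, the sketch below follows torchft's publicly documented Manager-based API as I understand it. It is an assumption-laden outline, not the researchers' actual configuration: the model, data, constructor arguments, and coordination setup (e.g. a reachable torchft lighthouse service) are placeholders, and exact names may differ across torchft versions.

```python
# Minimal sketch of per-step fault-tolerant training with torchft
# (assumed API, based on torchft's public examples; exact names,
# constructor arguments, and required environment variables such as
# the lighthouse address may differ by version).
import torch
import torch.nn as nn
import torch.optim as optim

from torchft import Manager, Optimizer, ProcessGroupGloo, DistributedDataParallel

model = nn.Linear(128, 10)  # placeholder model

# The Manager coordinates quorum across replica groups. When a group
# rejoins after a failure, it restores weights asynchronously from a
# healthy peer via the provided state_dict callbacks instead of loading
# a checkpoint.
manager = Manager(
    pg=ProcessGroupGloo(),
    load_state_dict=model.load_state_dict,
    state_dict=model.state_dict,
)

# Wrappers route gradient all-reduce and the optimizer step through the
# Manager so a step can be skipped or retried if the quorum changes.
model = DistributedDataParallel(manager, model)
optimizer = Optimizer(manager, optim.AdamW(model.parameters()))

for step in range(1000):
    batch = torch.randn(32, 128)  # placeholder data
    optimizer.zero_grad()
    loss = model(batch).sum()
    loss.backward()
    optimizer.step()  # committed only if this step's quorum held
```

In a setup like this, each replica group runs its own copy of the loop; a crash in one group is contained to that group, and the survivors continue stepping while the restarted group recovers its weights through the Manager.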