Interleaved Speech Language Models (SLMs), which train on token sequences that mix speech and text, scale more efficiently with compute than traditional textless SLMs, according to a comprehensive scaling analysis. The study finds that by transferring knowledge from pre-trained text language models and training on synthetic data, interleaved SLMs reach performance comparable to textless SLMs with substantially less compute and data, suggesting that resource-allocation strategies for SLM training should be rethought.
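As a concrete sketch of what a scaling analysis of this kind involves, the snippet below fits a Chinchilla-style parametric law L(N, D) = E + A/N^alpha + B/D^beta to a set of (model size, token count, loss) measurements and derives the implied compute-optimal allocation. The functional form, the C ≈ 6ND compute approximation, and every number here are illustrative assumptions for the demo, not the paper's actual data or method.

```python
import numpy as np
from scipy.optimize import curve_fit

# Chinchilla-style law: L(N, D) = E + A/N**alpha + B/D**beta,
# where N = parameter count and D = training tokens.
def scaling_law(X, E, A, alpha, B, beta):
    N, D = X
    return E + A / N**alpha + B / D**beta

rng = np.random.default_rng(0)
# Toy grid of (N, D) training runs -- placeholder values, not the paper's.
N = np.repeat([1e8, 3e8, 1e9, 3e9], 4)
D = np.tile([5e9, 1e10, 2e10, 4e10], 4)
true_params = (1.8, 350.0, 0.32, 410.0, 0.28)   # ground truth for the demo
L = scaling_law((N, D), *true_params) + rng.normal(0.0, 0.01, N.size)

# Recover the law's parameters from the noisy observations.
popt, _ = curve_fit(scaling_law, (N, D), L,
                    p0=(2.0, 100.0, 0.3, 100.0, 0.3), maxfev=50_000)
E, A, alpha, B, beta = popt

# Under the C ~ 6*N*D compute approximation, minimizing L at fixed C
# gives a compute-optimal model size N* proportional to C**(beta/(alpha+beta)).
print(f"fitted alpha={alpha:.2f}, beta={beta:.2f}; "
      f"N* ~ C^{beta/(alpha+beta):.2f}")
```

Comparing the exponents fitted separately to interleaved and textless training runs is one way such an analysis could quantify the efficiency gap, under these assumptions.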