CollabLLM is a framework for training collaborative language models that improve multi-turn conversations by optimizing multiturn-aware rewards. Users can set up their environment, generate synthetic conversational data, and customize metrics and datasets for specific tasks. The project aims to shift language models from passive responders to active collaborators in interactive settings.
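
As a rough illustration of the core idea (not the library's actual API), a multiturn-aware reward scores a candidate response by how well the *rest* of the conversation goes, rather than by the response in isolation. The sketch below rolls the dialogue forward with a simulated user and averages a task metric over sampled futures; all names (`simulate_user_turn`, `generate_response`, `task_metric`) are hypothetical placeholders.

```python
# Hypothetical sketch of a multiturn-aware reward: roll the conversation
# forward with a simulated user and average a task-specific metric over the
# sampled futures. Names below are illustrative, not the CollabLLM API.
from statistics import mean

def multiturn_aware_reward(chat_history, candidate_response,
                           simulate_user_turn, generate_response, task_metric,
                           num_rollouts=3, horizon=2):
    """Estimate the long-horizon value of `candidate_response` via Monte Carlo
    rollouts of the future conversation."""
    scores = []
    for _ in range(num_rollouts):
        convo = list(chat_history) + [
            {"role": "assistant", "content": candidate_response}
        ]
        for _ in range(horizon):
            # Simulated user replies to the current conversation state.
            convo.append({"role": "user", "content": simulate_user_turn(convo)})
            # The model being trained answers the simulated user.
            convo.append({"role": "assistant", "content": generate_response(convo)})
        # Score the completed rollout with a task-specific metric
        # (e.g., task success, helpfulness, or efficiency).
        scores.append(task_metric(convo))
    return mean(scores)
```

In the framework itself, these pieces roughly correspond to a user simulator, the policy model being trained, and customizable task metrics; the resulting reward signal can then drive fine-tuning of the collaborative model.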