Links
This article explains how to fine-tune a language model on your own LinkedIn posts. It walks through gathering and formatting the posts, then training the model so it can generate content in your voice. The author shares their experience and offers tips for customization.
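The article's data-preparation step amounts to converting raw posts into a chat-format training file. A minimal sketch of that idea (the `posts_to_jsonl` helper and the fixed user prompt are my own illustration, not the author's code):

```python
import json

def posts_to_jsonl(posts, out_path):
    """Write raw post texts as chat-format JSONL fine-tuning records."""
    with open(out_path, "w", encoding="utf-8") as f:
        for post in posts:
            record = {
                "messages": [
                    # A generic instruction; real pipelines often vary the prompt.
                    {"role": "user", "content": "Write a LinkedIn post in my voice."},
                    {"role": "assistant", "content": post},
                ]
            }
            f.write(json.dumps(record) + "\n")
```

Most fine-tuning frameworks (e.g. those built on Hugging Face chat templates) accept this `messages` layout directly, though the exact schema depends on the tool you use.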
This article discusses how fine-tuning open-source LLM judges using Direct Preference Optimization (DPO) can lead to performance that matches or exceeds GPT-5.2 in evaluating model outputs. The authors trained models like GPT-OSS 120B and Qwen 3 235B on human preference data, achieving better accuracy and efficiency at a lower cost.
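DPO trains on preference pairs: for each prompt, a human-preferred ("chosen") and a dispreferred ("rejected") response. A minimal sketch of turning human preference labels into that format (the `build_preference_pairs` helper and its input tuple layout are assumptions for illustration, not the authors' pipeline):

```python
def build_preference_pairs(judgments):
    """Convert human A/B preference labels into DPO training records.

    judgments: iterable of (prompt, response_a, response_b, preferred),
    where preferred is "a" or "b".
    """
    pairs = []
    for prompt, a, b, preferred in judgments:
        chosen, rejected = (a, b) if preferred == "a" else (b, a)
        pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs
```

The `prompt`/`chosen`/`rejected` keys match the dataset format commonly expected by DPO trainers such as TRL's `DPOTrainer`, though field names can vary by version.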
This article describes fine-tuning an instruction-tuned LLM (Qwen2.5B) for reasoning tasks with a cost-effective pipeline inspired by DeepSeek R1, combining Supervised Fine-Tuning (SFT) and Group Relative Policy Optimization (GRPO) on AWS SageMaker. It details the training stages, reward function design, and experimental outcomes, and provides guidance for replicating the results with the associated codebase.
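Two ingredients the summary mentions can be sketched compactly: a format-based reward (DeepSeek R1-style templates reward completions that wrap reasoning in `<think>` tags and the result in `<answer>` tags) and GRPO's group-relative advantage, which normalizes each sampled completion's reward against its group's mean and standard deviation. The function names and the exact tag template below are assumptions for illustration, not the article's code:

```python
import re

def format_reward(completion):
    """Reward 1.0 if the completion follows the <think>...</think><answer>...</answer> template."""
    pattern = r"^<think>.*?</think>\s*<answer>.*?</answer>$"
    return 1.0 if re.match(pattern, completion, re.DOTALL) else 0.0

def group_relative_advantages(rewards):
    """GRPO advantage: standardize each reward within its sampled group."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5 or 1.0  # avoid division by zero when all rewards are equal
    return [(r - mean) / std for r in rewards]
```

In a full GRPO loop these advantages weight the policy-gradient update per completion; real implementations typically combine several reward terms (format, correctness, length) rather than format alone.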