Links
The article reviews GPT-5.2, highlighting notable improvements in instruction-following and complex task handling while noting that it runs slower than expected. The author compares it to models like Claude Opus 4.5 and Gemini 3, concluding that it may not be the best choice for every use case, particularly for coding or when a more engaging personality is desired.
The article examines OpenAI's Reinforcement Fine-Tuning (RFT) and its potential for improving model performance. It covers applications, challenges, and practical considerations for adopting RFT, helping readers judge whether it is worthwhile for their projects.
Perplexity evaluates OpenAI's newly released open-weight models, gpt-oss-20b and gpt-oss-120b, focusing on their implementation on NVIDIA H200 GPUs. The article discusses infrastructure decisions, kernel modifications, and performance optimizations made to efficiently integrate these models into their inference engine, ROSE.