Gemini 3 has demonstrated significant performance improvements over its predecessor, Gemini 2.5, despite having the same parameter count. This, along with Nvidia's strong earnings report, suggests that pre-training scaling laws remain effective when combined with algorithmic advances and increased compute. Together, these developments challenge the notion that AI model performance has plateaued.
This article explores how the performance of language model-based agent systems can be quantitatively analyzed. Through experiments with various agent architectures, it identifies key scaling laws and yields insights into tool coordination, capability saturation, and error amplification. The findings help predict the optimal coordination strategy for a given task.
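To make the kind of analysis described above concrete, here is a minimal sketch of fitting a saturating scaling curve to agent-system benchmark results. The data points, functional form, and parameter names are illustrative assumptions, not figures or methods from the article; the saturation term is one simple way to model the capability-saturation effect it mentions.

```python
# Sketch: fit a saturating scaling law to (agent count, success rate) data.
# All numbers below are hypothetical, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical benchmark: task success rate vs. number of coordinated agents.
n_agents = np.array([1, 2, 4, 8, 16, 32])
success = np.array([0.42, 0.55, 0.66, 0.72, 0.74, 0.75])

def saturating_power_law(n, ceiling, scale, exponent):
    """Success rises with agent count n but flattens toward `ceiling`,
    capturing diminishing returns from adding more agents."""
    return ceiling * (1.0 - np.exp(-scale * n**exponent))

# Fit the curve; p0 gives rough starting guesses for the optimizer.
params, _ = curve_fit(saturating_power_law, n_agents, success,
                      p0=[0.8, 0.5, 0.5])
ceiling, scale, exponent = params

print(f"Estimated performance ceiling: {ceiling:.2f}")
print(f"Predicted success with 64 agents: "
      f"{saturating_power_law(64, *params):.2f}")
```

Once such a curve is fitted per coordination strategy, comparing the estimated ceilings and exponents is one plausible way to predict which strategy suits a given task, in the spirit of the article's findings.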