Links
The article discusses how FlashAttention 4 improves performance on NVIDIA's Blackwell architecture by addressing compute and memory bottlenecks, highlighting the technical enhancements that make attention computation more efficient in machine learning workloads.
NVIDIA's new GB200 NVL72 AI cluster has increased the performance of Mixture of Experts (MoE) models by ten times compared to its previous generation. This boost is attributed to a co-design approach that enhances parallel processing and optimizes resource allocation for AI tasks. The Kimi K2 Thinking model, tested on this architecture, showcases significant improvements in efficiency and capability.
This article traces the evolution of NVIDIA's architectures from Volta to Blackwell, highlighting the strengths and weaknesses of each generation. It also examines performance trade-offs and potential future developments in the Vera Rubin architecture. The insights draw on a combination of practical experience and recent industry discussions.
The launch of Gemini 3 has demonstrated significant performance improvements over its predecessor, Gemini 2.5, despite having the same parameter count. This, along with NVIDIA's strong earnings report, suggests that pre-training scaling laws remain effective when combined with algorithmic advancements and improved compute. Together, these developments challenge the notion that AI model performance has plateaued.
The NVIDIA HGX B200, now available in the Cirrascale AI Innovation Cloud, offers an 8-GPU configuration that significantly enhances AI performance, achieving up to 15X faster inference compared to the previous generation. With advanced features such as the second-generation Transformer Engine and NVLink interconnect, it is designed for demanding AI and HPC workloads, ensuring efficient scalability and lower operational costs.
Perplexity evaluates OpenAI's newly released open-weight models, gpt-oss-20b and gpt-oss-120b, focusing on their implementation on NVIDIA H200 GPUs. The article discusses infrastructure decisions, kernel modifications, and performance optimizations made to efficiently integrate these models into their inference engine, ROSE.
Cerebras Systems claims its hardware outperforms NVIDIA's Blackwell architecture on AI tasks. The company points to advancements in its Wafer Scale Engine technology that enable extensive parallel processing, which it believes sets it apart in the competitive landscape of AI hardware.