MiniMax-M1 is an open-weight hybrid-attention reasoning model that combines a Mixture-of-Experts architecture with a lightning attention mechanism, making it well suited to complex tasks over long inputs. It performs strongly on benchmarks in mathematics, software engineering, and long-context understanding, scaling test-time compute more efficiently than comparable models. Trained with large-scale reinforcement learning and equipped with function-calling support, it is positioned as a capable foundation for next-generation AI applications.
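The description mentions function calling but does not specify the interface. As a minimal sketch, assuming an OpenAI-compatible chat-completions request shape (the model identifier, tool name, and message below are illustrative placeholders, not details confirmed by this card), a function-calling payload might be assembled like this:

```python
import json

# Hypothetical tool schema in OpenAI-compatible "tools" format.
# The tool name and parameters are made-up examples for illustration.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Request body the client would POST to a chat-completions endpoint.
# "MiniMax-M1" is used as a placeholder model identifier.
request_body = {
    "model": "MiniMax-M1",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [weather_tool],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

print(json.dumps(request_body, indent=2))
```

The server's reply would then carry a `tool_calls` entry when the model chooses to invoke the tool; the exact response shape depends on the serving stack and is not described here.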
Tags: minimax, hybrid-attention, reinforcement-learning, large-scale, software-engineering