Canonical has introduced silicon-optimized inference snaps for deploying AI models on Ubuntu devices. The snaps detect the device's silicon and automatically select the best-suited model configuration, so developers no longer need to choose model sizes and optimizations by hand, improving both performance and efficiency across devices. The public beta includes models optimized for Intel and Ampere hardware, making it easier to integrate AI capabilities into applications.
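To make the auto-selection idea concrete, here is a minimal Python sketch of silicon-based model selection. The build names, the mapping table, and the detection heuristic are all hypothetical stand-ins; the actual snaps perform this selection internally at install time, and Canonical's real logic is not shown here.

```python
# Illustrative sketch: picking a model build to match the host silicon.
# All build names and the detection heuristic below are hypothetical.
import platform

# Hypothetical mapping from (architecture, vendor) to an optimized build.
OPTIMIZED_BUILDS = {
    ("x86_64", "intel"): "model-int8-intel",     # assumed Intel-optimized build
    ("aarch64", "ampere"): "model-int8-ampere",  # assumed Ampere-optimized build
}

def detect_silicon() -> tuple[str, str]:
    """Return (architecture, vendor) for the current host (best effort)."""
    arch = platform.machine()
    vendor = "unknown"
    try:
        with open("/proc/cpuinfo") as f:
            info = f.read().lower()
        if "genuineintel" in info:
            vendor = "intel"
        elif arch == "aarch64":
            # Crude placeholder: assume Ampere on arm64 for this sketch.
            vendor = "ampere"
    except FileNotFoundError:
        pass  # non-Linux host; fall through to the generic build
    return arch, vendor

def select_build() -> str:
    """Choose the best matching build, falling back to a generic one."""
    return OPTIMIZED_BUILDS.get(detect_silicon(), "model-generic")

if __name__ == "__main__":
    print(f"Selected build: {select_build()}")
```

The key design point the snaps embody is that this decision happens once, on the device, rather than being left to the developer for every deployment target.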