Ollama has introduced a new engine that supports multimodal models, emphasizing improved accuracy, model modularity, and memory management. The update integrates vision and text models more cleanly, enhancing local inference for applications such as image recognition and visual reasoning. Future development will focus on supporting longer context sizes and enabling more advanced functionality.
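Multimodal requests go through the same chat endpoint as text-only ones, with images attached to a message. A minimal sketch of the request payload, assuming a pulled vision-capable model (here `llava`, as a placeholder) and Ollama's default local endpoint at `localhost:11434`:

```python
import base64
import json

def build_vision_request(model, prompt, image_bytes):
    """Build a multimodal chat payload for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": prompt,
                # Images travel inline as base64-encoded strings.
                "images": [base64.b64encode(image_bytes).decode("ascii")],
            }
        ],
        "stream": False,
    }

# Placeholder bytes stand in for a real image file read from disk.
payload = build_vision_request("llava", "What is in this image?", b"\x89PNG...")
# POST this JSON to http://localhost:11434/api/chat on a running Ollama server.
print(json.dumps(payload)[:60])
```

Because the image is just another field on the message, the same structure works for any vision model the new engine supports.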
Ollama has also launched a web search API that augments its models with access to up-to-date information, improving accuracy and reducing hallucinations. The API offers a free tier, and it can be integrated into projects through the Python and JavaScript libraries for web search and research tasks.
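At the HTTP level the search is a single authenticated POST. The sketch below builds such a request using only the standard library; the endpoint URL, the `max_results` parameter, and the `OLLAMA_API_KEY` environment variable are assumptions based on the hosted API's documented shape, so check the official docs before relying on them:

```python
import json
import os
import urllib.request

def build_search_request(query, max_results=3):
    """Construct a POST request for Ollama's hosted web search endpoint.

    Assumes a bearer token exported as OLLAMA_API_KEY (keys are issued
    via an ollama.com account, with a free tier available).
    """
    body = json.dumps({"query": query, "max_results": max_results}).encode("utf-8")
    return urllib.request.Request(
        "https://ollama.com/api/web_search",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('OLLAMA_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_search_request("latest Ollama release")
# urllib.request.urlopen(req) would return JSON search results (titles,
# URLs, page content) that can be fed back to a model as grounding context.
print(req.full_url)
```

In practice the official Python and JavaScript libraries wrap this call, so application code rarely needs to build the request by hand.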