3 links tagged with all of: api + gemini
Links
Google has launched the Gemini Embedding model (gemini-embedding-001), now available to developers via the Gemini API and Vertex AI, which Google reports as a top performer on the Massive Text Embedding Benchmark (MTEB). The model supports more than 100 languages and offers flexible output dimensions, letting developers trade embedding size against quality, storage, and cost. Developers are encouraged to migrate from older embedding models before their deprecation dates, with features such as Batch API support coming soon.
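A minimal sketch of calling the model with the google-genai Python SDK, assuming the output_dimensionality option described in the announcement; the example prompt and dimension value are illustrative only:

    from google import genai
    from google.genai import types

    client = genai.Client()  # reads GEMINI_API_KEY from the environment

    # Request an embedding at a reduced output dimension to save storage and cost.
    result = client.models.embed_content(
        model="gemini-embedding-001",
        contents="What is the meaning of life?",
        config=types.EmbedContentConfig(output_dimensionality=768),
    )

    print(len(result.embeddings[0].values))  # 768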
The Gemini Batch API now supports the new Gemini Embedding model and works with the OpenAI SDK compatibility layer for batch processing. Batch jobs run the model at a significantly lower cost and with higher rate limits, which suits cost-sensitive, latency-tolerant workloads. A few lines of code are enough to get started with batch embeddings or to call the model through the OpenAI SDK, as sketched below.
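As a hedged illustration of the OpenAI SDK compatibility path, the sketch below points the openai Python client at Google's OpenAI-compatible endpoint and requests embeddings from gemini-embedding-001; the base URL reflects Google's published compatibility layer, and the inputs are placeholders:

    from openai import OpenAI

    # Reuse existing OpenAI-SDK code by swapping in the Gemini-compatible endpoint.
    client = OpenAI(
        api_key="YOUR_GEMINI_API_KEY",
        base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    )

    response = client.embeddings.create(
        model="gemini-embedding-001",
        input=["First document to embed", "Second document to embed"],
    )

    for item in response.data:
        print(len(item.embedding))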
Google has launched an early preview of Gemini 2.5 Flash, which adds stronger reasoning while keeping the model's speed and cost efficiency. As a hybrid reasoning model, it lets developers turn thinking on or off and set a thinking budget that caps how many tokens the model spends reasoning, improving results on complex tasks. The model is available through the Gemini API in Google AI Studio and Vertex AI, and developers are encouraged to experiment with its features.
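A minimal sketch of setting a thinking budget with the google-genai SDK; the preview model name and budget value below are assumptions, and a budget of 0 disables thinking for latency-sensitive calls:

    from google import genai
    from google.genai import types

    client = genai.Client()  # reads GEMINI_API_KEY from the environment

    # Cap the number of tokens the model may spend "thinking" before answering.
    response = client.models.generate_content(
        model="gemini-2.5-flash-preview-04-17",  # assumed preview model name
        contents="List three prime numbers larger than 100.",
        config=types.GenerateContentConfig(
            thinking_config=types.ThinkingConfig(thinking_budget=1024),
        ),
    )

    print(response.text)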