Links
This article explains how to create and manage reusable skills in OpenAI's API. Skills are packaged bundles of files that enable repeatable workflows in hosted or local environments. It covers when to use skills, how to structure them, and the API calls needed to upload and use them.
This article covers several recent updates to OpenAI's Codex and Responses API. Key highlights include the introduction of skills for Codex, new multi-provider support, and enhancements like Gmail and Google Calendar integration. The updates aim to improve usability and efficiency for developers.
This article discusses key points from a recent OpenAI Town Hall featuring Sam Altman. Notably, it highlights a suggestion to let users share inference costs from their ChatGPT accounts with other apps, which could improve security and ease of use for indie developers.
OpenAI announced several updates, including Open Responses, an open-source spec for building multi-provider LLM interfaces. The introduction of GPT-5.2-Codex enhances complex coding tasks, while new skills and connectors improve usability and integration with other platforms.
OpenAI has cut ties with Mixpanel following a data breach that exposed user profile information linked to its API. While typical ChatGPT users are not affected, OpenAI is notifying impacted API users and reviewing security measures across its vendors. The breach involved names, emails, and location data, raising concerns about potential phishing attempts.
LiteLLM is a lightweight proxy server designed to facilitate calls to various LLM APIs using a consistent OpenAI-like format, managing input translation and providing robust features like retry logic, budget management, and logging capabilities. It supports multiple providers, including OpenAI, Azure, and Huggingface, and offers both synchronous and asynchronous interaction models. Users can easily set up and configure the service through Docker and environment variables for secure API key management.
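Because LiteLLM exposes an OpenAI-style endpoint, every provider behind it accepts the same request shape. A minimal sketch of that request body, assuming the proxy listens at `localhost:4000` (the URL and model names are illustrative assumptions, not details from the summary above):

```python
import json

# Sketch of the OpenAI-style chat request body a LiteLLM proxy accepts.
# The proxy URL and model string below are assumptions for illustration;
# LiteLLM routes the model name to whichever provider is configured.
LITELLM_URL = "http://localhost:4000/chat/completions"

payload = {
    "model": "gpt-4o",  # or e.g. "azure/<deployment>" or "huggingface/<repo>"
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize LiteLLM in one sentence."},
    ],
}

body = json.dumps(payload)
print(body)
```

The same body would be POSTed to `LITELLM_URL` with an Authorization header carrying the proxy key; switching providers means changing only the model string, not the request shape.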
OpenAI's gpt-oss models utilize the harmony response format to structure conversation outputs, reasoning, and function calls. This format allows for flexible output channels and is designed to integrate seamlessly with existing APIs, while custom implementations can follow the provided guide for proper formatting. Users are encouraged to refer to the documentation for comprehensive instructions on using the format effectively.
The Gemini Batch API now supports the new Gemini Embedding model and offers compatibility with the OpenAI SDK for batch processing. This enhancement allows developers to utilize the model at a significantly lower cost and higher rate limits, facilitating cost-sensitive and latency-tolerant use cases. A few lines of code are all that's needed to get started with batch embeddings or to switch from OpenAI SDK compatibility.
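Batch APIs of this kind typically take a JSONL file where each line is one request. A hedged sketch of what batch embedding input could look like, assuming the OpenAI-style batch line schema that the summary says Gemini's Batch API is compatible with (the model name and per-line fields are assumptions):

```python
import json

# Hedged sketch: batch embedding requests as JSONL lines in the
# OpenAI-style batch shape. Field names and the model string are
# assumptions for illustration, not confirmed by the summary above.
texts = ["What is batching?", "Why are batch requests cheaper?"]

lines = [
    json.dumps({
        "custom_id": f"req-{i}",   # echoed back so results can be matched
        "method": "POST",
        "url": "/v1/embeddings",
        "body": {"model": "gemini-embedding-001", "input": text},
    })
    for i, text in enumerate(texts)
]

jsonl = "\n".join(lines)
print(jsonl)
```

Each output line carries the matching `custom_id`, which is what lets latency-tolerant jobs be submitted in bulk and reassembled afterwards.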
The article covers the OpenAI Responses API and how it can be used to generate text responses in a range of applications, emphasizing the API's versatility, ease of integration, and potential use cases across different domains.
OpenAI has introduced the `gpt-image-1` model for image generation via its API, allowing developers to integrate high-quality image creation into their products. The model supports diverse styles and applications, with notable collaborations from companies like Adobe, Canva, and HubSpot to enhance creative and marketing processes.
OpenRouter allows users to create an account and obtain an API key to access various AI models through a unified interface, compatible with OpenAI. Users benefit from low latency and reliable performance while managing costs effectively. Each customer receives 1 million free requests per month under the Bring Your Own Key (BYOK) program.
An OpenAI-compatible API can be deployed using AWS Lambda and an Application Load Balancer (ALB) to work around API Gateway's authentication constraints. By configuring the ALB to route traffic directly to the Lambda function, developers can keep a seamless integration with the OpenAI Python client and a consistent API experience. This approach offers flexibility and security when exposing custom AI services.
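The key to this setup is that the Lambda answers ALB target-group events with a response body shaped like an OpenAI chat completion object, so the stock OpenAI client can parse it. A minimal sketch, with model routing and actual inference stubbed out as assumptions:

```python
import json

# Sketch of a Lambda handler behind an ALB serving an OpenAI-compatible
# /v1/chat/completions route. The event fields follow the ALB target-group
# event format; the reply mirrors the OpenAI chat completion object.
# The inference logic is stubbed out - this only shows the plumbing.
def handler(event, context):
    request = json.loads(event.get("body") or "{}")
    reply = {
        "id": "chatcmpl-local",
        "object": "chat.completion",
        "model": request.get("model", "custom-model"),
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": "Hello from Lambda"},
                "finish_reason": "stop",
            }
        ],
    }
    # ALB expects statusCode/headers/body rather than a raw payload.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(reply),
    }

# Simulated ALB event for local testing.
event = {
    "httpMethod": "POST",
    "path": "/v1/chat/completions",
    "body": json.dumps({"model": "my-model", "messages": []}),
}
response = handler(event, None)
print(response["statusCode"])
```

Pointing the OpenAI Python client's `base_url` at the ALB's DNS name would then make this endpoint look like any other OpenAI-compatible service.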
RubyLLM is a streamlined Ruby API designed for interfacing with various AI models, including GPT, Claude, and Gemini, making it easier to build chatbots and AI applications without the hassle of managing multiple client libraries. It supports various functionalities such as image analysis, audio transcription, document extraction, and real-time response streaming, all while requiring minimal dependencies. Users can easily integrate this API into their applications by adding a simple gem and configuring their API keys.