5 min read | Saved February 14, 2026
Do you care about this?
Mercari's AI Security team created the LLM Key Server to streamline access to LLM APIs. This service allows users to obtain temporary API keys without manual requests, enhancing security while simplifying access for developers and non-developers alike.
If you do, here's more
Mercari is enhancing its use of AI and large language models (LLMs) through the development of the LLM Key Server, which streamlines access to LLM APIs. Previously, users had to request API access manually, but now they can obtain temporary API keys directly from their internal accounts. This shift aims to balance security and usability, addressing the risks associated with static API keys that can be leaked, leading to potential security breaches and financial losses.
The LLM Key Server relies on the open-source project LiteLLM, which provides a unified API for accessing various LLM models. It incorporates Google Workspace's OpenID Connect (OIDC) for identity verification, ensuring that only authorized users can access the APIs. The system issues short-lived API keys that expire quickly to minimize risks. For applications that require longer usage periods, an automatic key renewal mechanism is in place, allowing seamless access without constant manual intervention.
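The renewal behavior described above can be sketched as a small client-side cache that re-fetches a key shortly before it expires. This is an illustrative sketch only: the `fetch_key` callable stands in for a real request to the key server (authenticated with a Google OIDC identity token), and the margin value is an assumption, not Mercari's actual implementation.

```python
import time

# Renew when less than a minute of validity remains (illustrative value).
RENEW_MARGIN_SECONDS = 60

class TemporaryKeyCache:
    """Caches a short-lived API key and renews it automatically near expiry."""

    def __init__(self, fetch_key):
        # fetch_key() stands in for an authenticated call to the key server;
        # it returns (api_key, expires_at) with expires_at as a Unix timestamp.
        self._fetch_key = fetch_key
        self._key = None
        self._expires_at = 0.0

    def get(self):
        # Re-fetch when no key is cached or the cached key is close to expiry,
        # so callers never have to renew manually.
        if self._key is None or time.time() >= self._expires_at - RENEW_MARGIN_SECONDS:
            self._key, self._expires_at = self._fetch_key()
        return self._key
```

A long-running job would simply call `cache.get()` before each API request; the cache transparently swaps in a fresh key whenever the current one is about to expire.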
The architecture supports multiple use cases, particularly in GitHub Actions and Google Apps Script. For GitHub Actions, a template allows developers to securely fetch LLM API keys without direct management of those keys, facilitating integration into CI/CD pipelines. In Google Apps Script, the setup involves configuring OAuth scopes to authenticate users, enabling them to retrieve the LLM API key and access models through LiteLLM. This approach not only simplifies the process but also reinforces security by leveraging existing Google infrastructure.
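Once a workflow has fetched a temporary key, using it amounts to a standard OpenAI-compatible request against the LiteLLM proxy. The sketch below shows how such a request could be assembled; the proxy URL and model name are placeholders, and only the general shape (Bearer auth against a `/chat/completions` endpoint) reflects LiteLLM's OpenAI-compatible interface.

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, prompt):
    """Builds an OpenAI-compatible chat request for a LiteLLM proxy.

    base_url and model are placeholders; api_key is the short-lived key
    obtained from the key server.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            # The temporary key is passed as a standard Bearer token.
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Because the key is short-lived, a CI job or Apps Script would fetch it at the start of a run, build requests like this, and let the key expire afterward with nothing to rotate or revoke.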