6 min read | Saved February 14, 2026
Do you care about this?
This article explains how to set up OpenCode with Docker Model Runner for a private AI coding assistant. It covers configuration, model selection, and the benefits of maintaining control over data and costs. The guide also highlights coding-specific models that enhance development workflows.
If you do, here's more
AI coding assistants are becoming essential in development but raise concerns about data privacy and access. The combination of OpenCode, an open-source coding assistant, and Docker Model Runner (DMR) offers a solution that allows developers to maintain control over their data and infrastructure. OpenCode integrates with multiple model providers and features a flexible configuration, while DMR simplifies running and managing large language models through an OpenAI-compatible API.
OpenCode is configured through a JSON file, which can live globally or per project. To integrate OpenCode with DMR, developers point the provider's `baseURL` at the DMR server. For instance, setting the `baseURL` to `http://localhost:12434/v1` lets OpenCode use locally hosted models, so no external AI provider ever sees sensitive code. Running models on your own hardware also eliminates the per-token fees and unpredictable bills of cloud-hosted solutions.
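A minimal sketch of such a configuration, assuming OpenCode's custom-provider format (an OpenAI-compatible provider defined via the `@ai-sdk/openai-compatible` package); the provider id `dmr` and the model id `ai/qwen3-coder` are illustrative and should match whatever models you have pulled into DMR:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "dmr": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Docker Model Runner",
      "options": {
        "baseURL": "http://localhost:12434/v1"
      },
      "models": {
        "ai/qwen3-coder": {
          "name": "Qwen3 Coder"
        }
      }
    }
  }
}
```

Placed in a project-level `opencode.json`, this keeps the DMR setup scoped to one repository; the same JSON in the global config applies it everywhere.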
Model choice directly impacts coding quality and effectiveness. OpenCode works best with coding-focused models like qwen3-coder and devstral-small-2, which support context windows of up to 128K tokens, critical for complex coding tasks. In contrast, a general-purpose model like gpt-oss ships with a much smaller default context size of 4,096 tokens. Developers can raise the context size for gpt-oss to better suit their needs, allowing more extensive context during code generation.
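One way to override a model's context size is through a Compose model definition, assuming a Compose file that uses the top-level `models` element with a `context_size` attribute; the service name, model reference, and the 8,192-token value below are illustrative:

```yaml
services:
  app:
    image: my-app
    models:
      - gpt-oss

models:
  gpt-oss:
    model: ai/gpt-oss
    context_size: 8192
```

Setting the context size in the Compose file (rather than ad hoc on each machine) keeps every teammate running the model with the same window.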
Packaging models as OCI artifacts enables sharing across teams and keeps environments consistent during development. This standardization simplifies model management, letting teams reuse a model without per-developer configuration. With the right setup, developers get efficient AI-assisted coding while controlling costs and ensuring data privacy.
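The workflow can be sketched with the `docker model` CLI, assuming a local GGUF file and a registry you control; the registry URL, tag, and file path below are placeholders:

```shell
# Package a local GGUF weights file as an OCI artifact and push it to a registry
docker model package --gguf ./my-model.gguf --push registry.example.com/team/my-model:v1

# Teammates pull and run the exact same artifact
docker model pull registry.example.com/team/my-model:v1
docker model run registry.example.com/team/my-model:v1
```

Because the model is versioned like any other OCI artifact, it flows through the same registries and access controls a team already uses for container images.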