---
title: Model muxing
description: Configure a per-workspace LLM
sidebar_position: 35
---

## Overview

_Model muxing_ (or multiplexing) allows you to configure your AI assistant once
and use [CodeGate workspaces](./workspaces.mdx) to switch between LLM providers
and models without reconfiguring your development environment. This feature is
especially useful when you're working on multiple projects or tasks that require
different AI models.

For each CodeGate workspace, you can select the AI provider and model
combination you want to use. Then, configure your AI coding tool to use the
CodeGate muxing endpoint `http://localhost:8989/v1/mux` as an OpenAI-compatible
API provider.

To change the model currently in use, simply switch your active CodeGate
workspace.

```mermaid
flowchart LR
  Client(AI Assistant/Agent)
  CodeGate{CodeGate}
  WS1[Workspace-A]
  WS2[Workspace-B]
  WS3[Workspace-C]
  LLM1(OpenAI/<br>o3-mini)
  LLM2(Ollama/<br>deepseek-r1)
  LLM3(OpenRouter/<br>claude-35-sonnet)

  Client ---|/v1/mux| CodeGate
  CodeGate --> WS1
  CodeGate --> WS2
  CodeGate --> WS3
  WS1 --> |api| LLM1
  WS2 --> |api| LLM2
  WS3 --> |api| LLM3
```

## Use cases

- You have a project that requires a specific model for a particular task, but
  you also need to switch between different models during the course of your
  work.
- You want to experiment with different LLM providers and models without having
  to reconfigure your AI assistant/agent every time you switch.
- Your AI coding assistant doesn't support a particular provider or model that
  you want to use. CodeGate's muxing provides an OpenAI-compatible abstraction
  layer.
- You're working on a sensitive project and want to use a local model, but still
  have the flexibility to switch to hosted models for other work.
- You want to control your LLM provider spend by using lower-cost models for
  some tasks that don't require the power of more advanced (and expensive)
  reasoning models.

## Configure muxing

To use muxing with your AI coding assistant, you need to add one or more AI
providers to CodeGate, then select the model you want to use on a workspace.

CodeGate supports the following LLM providers for muxing:

- Anthropic
- llama.cpp
- LM Studio
- Ollama
- OpenAI (and compatible APIs)
- OpenRouter
- vLLM

### Add a provider

1. In the [CodeGate dashboard](http://localhost:9090), open the **Providers**
   page from the **Settings** menu.
1. Click **Add Provider**.
1. Enter a display name for the provider, then select the type from the
   drop-down list. The default endpoint and authentication type are filled in
   automatically.
1. If you are using a non-default endpoint, update the **Endpoint** value.
1. Optionally, add a **Description** for the provider.
1. If the provider requires authentication, select the **API Key**
   authentication option and enter your key.

When you save the settings, CodeGate connects to the provider to retrieve the
available models.

:::note

For locally-hosted models, you must use `http://host.docker.internal` instead of
`http://localhost`.

:::
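
For example, assuming you run Ollama on the host machine on its default port
(11434), the **Endpoint** value to enter would be:

```text
http://host.docker.internal:11434
```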

### Select the model for a workspace

Open the settings of one of your [workspaces](./workspaces.mdx) from the
Workspace selection menu or the
[Manage Workspaces](http://localhost:9090/workspaces) screen.

In the **Preferred Model** section, select the model to use with the workspace.

### Manage existing providers

To edit a provider's settings, click the **Manage** button next to the provider
in the list. For providers that require authentication, you can leave the API
key field blank to preserve the current value.

To delete a provider, click the trash icon next to it. If any workspaces were
using this provider, you will need to update their settings to choose a
different provider/model.

### Refresh available models

To refresh the list of models available from a provider, click the **Manage**
button next to that provider in the Providers list, then save it without making
any changes.

## Configure your client

Configure the OpenAI-compatible API base URL of your AI coding assistant/agent
to `http://localhost:8989/v1/mux`. If your client requires a model name and/or
API key, you can enter any values, since CodeGate manages the model selection
and authentication.
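
As an illustration, here is a minimal sketch of the request an OpenAI-compatible
client would send through the mux endpoint. The `/chat/completions` path follows
the standard OpenAI API convention, and the model name and API key are
deliberately placeholder values, since CodeGate selects the real model and
credentials from the active workspace.

```python
import json

# CodeGate muxing endpoint (OpenAI-compatible base URL from this guide).
BASE_URL = "http://localhost:8989/v1/mux"

# OpenAI-compatible clients append the standard chat completions path.
url = f"{BASE_URL}/chat/completions"

# Placeholder credentials: CodeGate manages authentication with the
# real provider, so any non-empty value works here.
headers = {
    "Authorization": "Bearer placeholder-key",
    "Content-Type": "application/json",
}

# Placeholder model name: CodeGate substitutes the preferred model
# configured for the active workspace.
payload = {
    "model": "placeholder-model",
    "messages": [{"role": "user", "content": "Explain this function."}],
}

body = json.dumps(payload)
```

Pointing an OpenAI SDK's base URL at `BASE_URL` has the same effect: the request
is routed to whichever provider and model the active workspace selects.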

For specific instructions, see the
[integration guide](../integrations/index.mdx) for your client.