1 parent 80f4162 commit 81f235d
docs/server.md
@@ -61,7 +61,7 @@ You'll first need to download one of the available multi-modal models in GGUF fo
 Then when you run the server you'll need to also specify the path to the clip model used for image embedding and the `llava-1-5` chat_format
 
 ```bash
-python3 -m llama_cpp.server --model <model_path> --clip-model-path <clip_model_path> --chat-format llava-1-5
+python3 -m llama_cpp.server --model <model_path> --clip_model_path <clip_model_path> --chat_format llava-1-5
 ```
 
 Then you can just use the OpenAI API as normal
@@ -88,4 +88,4 @@ response = client.chat.completions.create(
 ],
 )
 print(response)
-```
+```
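The second hunk's context shows the docs calling `client.chat.completions.create(...)` against the server. As a minimal sketch of the multi-modal message shape that request takes (the helper name, prompt text, and image URL below are placeholders for illustration, not part of the commit):

```python
def build_llava_messages(text: str, image_url: str) -> list:
    """Build an OpenAI-style multi-modal chat message list: one user
    message whose content mixes a text part and an image_url part."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": text},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]

# Placeholder prompt and URL; pass the result as the `messages` argument
# to client.chat.completions.create when the server runs with
# --chat_format llava-1-5.
messages = build_llava_messages("What is in this image?", "https://example.com/cat.png")
print(messages[0]["role"])  # prints: user
```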