Commit 81f235d

Fixed llava server doc arguments
1 parent 80f4162 commit 81f235d

File tree

1 file changed: +2 additions, −2 deletions

docs/server.md

Lines changed: 2 additions & 2 deletions
@@ -61,7 +61,7 @@ You'll first need to download one of the available multi-modal models in GGUF fo
 Then when you run the server you'll need to also specify the path to the clip model used for image embedding and the `llava-1-5` chat_format
 
 ```bash
-python3 -m llama_cpp.server --model <model_path> --clip-model-path <clip_model_path> --chat-format llava-1-5
+python3 -m llama_cpp.server --model <model_path> --clip_model_path <clip_model_path> --chat_format llava-1-5
 ```
 
 Then you can just use the OpenAI API as normal
@@ -88,4 +88,4 @@ response = client.chat.completions.create(
     ],
 )
 print(response)
-```
+```
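The second hunk ends inside the doc's Python snippet that calls the OpenAI-compatible API exposed by `llama_cpp.server`. As a minimal sketch of the message shape that snippet assumes for a multi-modal (llava-1-5) request — the prompt text and image URL below are placeholder assumptions, not values from the commit:

```python
def build_llava_messages(prompt, image_url):
    """Build an OpenAI-style message list mixing a text part and an image part.

    This mirrors the `messages` argument passed to
    `client.chat.completions.create(...)` in docs/server.md; the helper
    name itself is hypothetical, introduced only for illustration.
    """
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]

# Placeholder prompt and image URL (assumptions for the sketch).
messages = build_llava_messages(
    "What is in this picture?",
    "https://example.com/picture.png",
)
print(messages[0]["role"])  # user
```

A client configured with `base_url` pointing at the local server would pass this list as `messages=...`; the server applies the `llava-1-5` chat format to render the text and image parts into the model's prompt.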

0 commit comments
