From fb1f956a279a9ae7528dc77ea313a0263ec6104b Mon Sep 17 00:00:00 2001
From: Kevin Jung
Date: Wed, 8 Nov 2023 23:53:00 -0500
Subject: [PATCH] Fix server doc arguments (#892)

---
 docs/server.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/server.md b/docs/server.md
index 030c591..5db24c8 100644
--- a/docs/server.md
+++ b/docs/server.md
@@ -45,7 +45,7 @@ You'll first need to download one of the available function calling models in GG
 Then when you run the server you'll need to also specify the `functionary-7b-v1` chat_format
 
 ```bash
-python3 -m llama_cpp.server --model <model_path> --chat-format functionary
+python3 -m llama_cpp.server --model <model_path> --chat_format functionary
 ```
 
 ### Multimodal Models
@@ -61,7 +61,7 @@ You'll first need to download one of the available multi-modal models in GGUF fo
 Then when you run the server you'll need to also specify the path to the clip model used for image embedding and the `llava-1-5` chat_format
 
 ```bash
-python3 -m llama_cpp.server --model <model_path> --clip-model-path <clip_model_path> --chat-format llava-1-5
+python3 -m llama_cpp.server --model <model_path> --clip_model_path <clip_model_path> --chat_format llava-1-5
 ```
 
 Then you can just use the OpenAI API as normal