From 3660230faa6ecfbbf7817a18bd00275d128103b9 Mon Sep 17 00:00:00 2001
From: Andrei Betlen
Date: Tue, 7 Nov 2023 22:52:08 -0500
Subject: [PATCH] Fix docs multi-modal docs

---
 docs/server.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/server.md b/docs/server.md
index e7d4bb6..ef75522 100644
--- a/docs/server.md
+++ b/docs/server.md
@@ -44,10 +44,10 @@ You'll first need to download one of the available multi-modal models in GGUF format:
 - [llava1.5 7b](https://huggingface.co/mys/ggml_llava-v1.5-7b)
 - [llava1.5 13b](https://huggingface.co/mys/ggml_llava-v1.5-13b)
 
-Then when you run the server you'll need to also specify the path to the clip model used for image embedding
+Then when you run the server you'll need to also specify the path to the clip model used for image embedding and the `llava-1-5` chat_format
 
 ```bash
-python3 -m llama_cpp.server --model <model_path> --clip-model-path <clip_model_path>
+python3 -m llama_cpp.server --model <model_path> --clip-model-path <clip_model_path> --chat-format llava-1-5
 ```
 
 Then you can just use the OpenAI API as normal
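
For context (not part of the patch above), here is a minimal sketch of what "use the OpenAI API as normal" could look like against a server started with the command in the diff. The base URL and API key reflect the assumption that the server is running locally on its default port; the model name and image URL are placeholders, not values taken from the patch.

```python
# Minimal sketch of a multi-modal request to a locally running llama_cpp.server
# started with --chat-format llava-1-5. Host/port, model name, and image URL are
# assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed default address of llama_cpp.server
    api_key="sk-no-key-required",          # a local server does not validate the key
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; the single-model server serves whatever was loaded
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/some-image.png"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

The message payload follows the OpenAI vision-style content format (a list mixing `text` and `image_url` parts), which is what the `llava-1-5` chat format is meant to accept.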