From 3adc8fb3ae887d385b4a884814f9055c7165f168 Mon Sep 17 00:00:00 2001
From: Andrei Betlen
Date: Sun, 7 May 2023 05:10:52 -0400
Subject: [PATCH] Update README to use cli options for server

---
 README.md | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/README.md b/README.md
index 9daca60..9fa3bed 100644
--- a/README.md
+++ b/README.md
@@ -68,18 +68,9 @@ This allows you to use llama.cpp compatible models with any OpenAI compatible cl
 
 To install the server package and get started:
 
-Linux/MacOS
 ```bash
 pip install llama-cpp-python[server]
-export MODEL=./models/7B/ggml-model.bin
-python3 -m llama_cpp.server
-```
-
-Windows
-```cmd
-pip install llama-cpp-python[server]
-SET MODEL=..\models\7B\ggml-model.bin
-python3 -m llama_cpp.server
+python3 -m llama_cpp.server --model models/7B/ggml-model.bin
 ```
 
 Navigate to [http://localhost:8000/docs](http://localhost:8000/docs) to see the OpenAPI documentation.
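For context on what the patched README command starts: the server exposes an OpenAI-compatible HTTP API on port 8000. A minimal sketch of building a completion request against it (the endpoint path and payload fields follow the OpenAI completions format that the server mirrors; the prompt and parameter values are illustrative, not from the patch):

```python
import json
import urllib.request

# Build an OpenAI-style completion request for the local server.
# Field names follow the OpenAI completions format; values are illustrative.
payload = {
    "prompt": "Q: Name the planets in the solar system. A: ",
    "max_tokens": 64,
    "stop": ["\n"],
}
req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Actually sending the request requires the server from the README to be
# running first:
#   python3 -m llama_cpp.server --model models/7B/ggml-model.bin
# then:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["text"])
print(req.full_url)
```

The interactive OpenAPI page at http://localhost:8000/docs lists the full set of routes and request schemas.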