From 952ba9ecaf7a78be1844a1c533d6f6f580b92833 Mon Sep 17 00:00:00 2001
From: Thomas Neu <81517187+th-neu@users.noreply.github.com>
Date: Fri, 5 May 2023 14:21:57 +0200
Subject: [PATCH 1/3] Update README.md

add windows server command

---
 README.md | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/README.md b/README.md
index a8afa67..ee6ec2d 100644
--- a/README.md
+++ b/README.md
@@ -64,12 +64,20 @@ This allows you to use llama.cpp compatible models with any OpenAI compatible cl
 
 To install the server package and get started:
 
+Linux
 ```bash
 pip install llama-cpp-python[server]
 export MODEL=./models/7B/ggml-model.bin
 python3 -m llama_cpp.server
 ```
 
+Windows
+```cmd
+pip install llama-cpp-python[server]
+SET MODEL=\models\7B\ggml-model.bin
+python3 -m llama_cpp.server
+```
+
 Navigate to [http://localhost:8000/docs](http://localhost:8000/docs) to see the OpenAPI documentation.
 
 ## Docker image

From eb54e30f343251767ec0a2cb10da2684b896718f Mon Sep 17 00:00:00 2001
From: Thomas Neu <81517187+th-neu@users.noreply.github.com>
Date: Fri, 5 May 2023 14:22:41 +0200
Subject: [PATCH 2/3] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index ee6ec2d..d24bad5 100644
--- a/README.md
+++ b/README.md
@@ -74,7 +74,7 @@ python3 -m llama_cpp.server
 Windows
 ```cmd
 pip install llama-cpp-python[server]
-SET MODEL=\models\7B\ggml-model.bin
+SET MODEL=..\models\7B\ggml-model.bin
 python3 -m llama_cpp.server
 ```
 

From 22c3056b2a8d19f2c5ce9ab817e312da21e66d9c Mon Sep 17 00:00:00 2001
From: Thomas Neu <81517187+th-neu@users.noreply.github.com>
Date: Fri, 5 May 2023 18:40:00 +0200
Subject: [PATCH 3/3] Update README.md

added MacOS

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index d24bad5..c46fa11 100644
--- a/README.md
+++ b/README.md
@@ -64,7 +64,7 @@ This allows you to use llama.cpp compatible models with any OpenAI compatible cl
 
 To install the server package and get started:
 
-Linux
+Linux/MacOS
 ```bash
 pip install llama-cpp-python[server]
 export MODEL=./models/7B/ggml-model.bin
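The README text these patches touch says the server exposes an OpenAI-compatible API once started with `python3 -m llama_cpp.server`. As a minimal sketch of what a client request could look like — assuming an OpenAI-style `/v1/completions` route and payload fields, which this diff does not itself confirm (it only guarantees the OpenAPI docs at `/docs`) — one might build the request like this:

```python
import json
from urllib.request import Request

# Assumed base URL of the locally running server started via
# `python3 -m llama_cpp.server`; port 8000 matches the docs URL in the README.
BASE_URL = "http://localhost:8000"

def build_completion_request(prompt: str, max_tokens: int = 16) -> Request:
    """Build a POST request for an assumed OpenAI-style /v1/completions endpoint."""
    payload = {"prompt": prompt, "max_tokens": max_tokens}
    return Request(
        BASE_URL + "/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending the request (urlopen(req)) requires the server to be running;
# building it, as here, does not.
req = build_completion_request("Hello, llama!")
```

The exact route and parameters should be checked against the server's own OpenAPI page at `http://localhost:8000/docs` rather than taken from this sketch.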