From 52d9d70076e28cc93a8a6a63bbf8285bc7ed8cb6 Mon Sep 17 00:00:00 2001
From: Aditya Purandare
Date: Fri, 23 Feb 2024 15:11:22 +0530
Subject: [PATCH] docs: Update README.md to fix pip install llama cpp server
 (#1187)

Without the single quotes, when running the command, an error is printed
saying no matching packages found on pypi. Adding the quotes fixes it:

```bash
$ pip install llama-cpp-python[server]
zsh: no matches found: llama-cpp-python[server]
```

Co-authored-by: Andrei
---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 6dbd79d..ccd66ea 100644
--- a/README.md
+++ b/README.md
@@ -505,14 +505,14 @@ This allows you to use llama.cpp compatible models with any OpenAI compatible cl
 To install the server package and get started:
 
 ```bash
-pip install llama-cpp-python[server]
+pip install 'llama-cpp-python[server]'
 python3 -m llama_cpp.server --model models/7B/llama-model.gguf
 ```
 
 Similar to Hardware Acceleration section above, you can also install with GPU (cuBLAS) support like this:
 
 ```bash
-CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python[server]
+CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install 'llama-cpp-python[server]'
 python3 -m llama_cpp.server --model models/7B/llama-model.gguf --n_gpu_layers 35
 ```
 