docs: Update to fix pip install llama cpp server (#1187)

Without the single quotes, zsh interprets the square brackets as a filename glob and the command fails with a "no matches found" error before pip runs. Adding the quotes fixes it:

$ pip install llama-cpp-python[server]
zsh: no matches found: llama-cpp-python[server]
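
A minimal sketch of the underlying behavior (the `pkg*` filenames are hypothetical, chosen just to give the glob something to match): zsh expands unquoted `[...]` as a character-class glob, and when nothing matches it aborts with the error above, whereas quoting passes the string through to pip literally.

```shell
# Create two files that happen to match the glob pattern pkg[serv]
touch pkge pkgv

echo pkg[serv]     # unquoted: the shell expands the glob, printing "pkge pkgv"
echo 'pkg[serv]'   # quoted: printed literally as "pkg[serv]"
```

When no file matches, bash leaves the unmatched pattern in place (so the unquoted `pip install` happens to work there), while zsh raises the error by default, which is why this failure is typically reported by zsh users.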

Co-authored-by: Andrei <>
Aditya Purandare 2024-02-23 15:11:22 +05:30 committed by GitHub
parent 251a8a2cad
commit 52d9d70076


@@ -505,14 +505,14 @@ This allows you to use llama.cpp compatible models with any OpenAI compatible cl
 To install the server package and get started:
-pip install llama-cpp-python[server]
+pip install 'llama-cpp-python[server]'
 python3 -m llama_cpp.server --model models/7B/llama-model.gguf
 Similar to Hardware Acceleration section above, you can also install with GPU (cuBLAS) support like this:
-CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python[server]
+CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install 'llama-cpp-python[server]'
 python3 -m llama_cpp.server --model models/7B/llama-model.gguf --n_gpu_layers 35