docs: Update README.md to fix pip install llama cpp server (#1187)
Without the single quotes, running the command in zsh prints an error saying no matching packages were found on PyPI, because zsh interprets the unquoted square brackets as a glob pattern. Adding the quotes fixes it:

```bash
$ pip install llama-cpp-python[server]
zsh: no matches found: llama-cpp-python[server]
```

Co-authored-by: Andrei <abetlen@gmail.com>
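For context, the failure comes from zsh treating unquoted square brackets as a filename glob; a minimal illustration of that shell behavior (zsh-specific, not part of this commit):

```bash
# zsh expands [server] as a glob pattern; with no matching file it
# aborts the whole command instead of passing the argument through:
echo llama-cpp-python[server]
# zsh: no matches found: llama-cpp-python[server]

# Quoting disables glob expansion, so the literal requirement string survives:
echo 'llama-cpp-python[server]'
# llama-cpp-python[server]
```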
parent 251a8a2cad
commit 52d9d70076

1 changed file with 2 additions and 2 deletions
````diff
@@ -505,14 +505,14 @@ This allows you to use llama.cpp compatible models with any OpenAI compatible cl
 
 To install the server package and get started:
 
 ```bash
-pip install llama-cpp-python[server]
+pip install 'llama-cpp-python[server]'
 python3 -m llama_cpp.server --model models/7B/llama-model.gguf
 ```
 
 Similar to Hardware Acceleration section above, you can also install with GPU (cuBLAS) support like this:
 
 ```bash
-CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python[server]
+CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install 'llama-cpp-python[server]'
 python3 -m llama_cpp.server --model models/7B/llama-model.gguf --n_gpu_layers 35
 ```
````
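For completeness, quoting is not the only zsh workaround; escaping the brackets or using zsh's `noglob` modifier also prevents the expansion (standard zsh behavior, not part of this commit):

```bash
# Backslash-escaping the brackets also passes them through literally:
pip install llama-cpp-python\[server\]

# Or disable glob expansion for a single command with zsh's noglob modifier:
noglob pip install llama-cpp-python[server]
```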