Update README.md
add Windows server command
This commit is contained in:
parent 5f583b0179
commit 952ba9ecaf
1 changed file with 8 additions and 0 deletions
@@ -64,12 +64,20 @@ This allows you to use llama.cpp compatible models with any OpenAI compatible cl
To install the server package and get started:

Linux
```bash
pip install llama-cpp-python[server]
export MODEL=./models/7B/ggml-model.bin
python3 -m llama_cpp.server
```

Windows
```cmd
pip install llama-cpp-python[server]
SET MODEL=\models\7B\ggml-model.bin
python -m llama_cpp.server
```

Navigate to [http://localhost:8000/docs](http://localhost:8000/docs) to see the OpenAPI documentation.
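
Once the server is up, any OpenAI-compatible client can talk to it. As a minimal sketch (assuming the default host/port from the docs URL above, and that the server exposes the OpenAI-style `/v1/completions` route; the `completion_request` and `complete` helper names are illustrative, not part of the library):

```python
import json
import urllib.request

# Assumed default address, matching the docs URL above.
BASE_URL = "http://localhost:8000"

def completion_request(prompt: str, max_tokens: int = 64) -> dict:
    """Build an OpenAI-style completion request body (helper name is illustrative)."""
    return {"prompt": prompt, "max_tokens": max_tokens}

def complete(prompt: str) -> str:
    """POST the request to the running server and return the generated text."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/completions",
        data=json.dumps(completion_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # OpenAI-style responses carry the text under choices[0]["text"].
        return json.load(resp)["choices"][0]["text"]
```

With the server running, `complete("Q: Name the planets in the solar system? A: ")` should return a generated continuation; the same endpoint can be driven by curl or any OpenAI client pointed at `BASE_URL`.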

## Docker image