Update instruction to download GGUF model (#783)
Co-authored-by: john.shen <john.shen@bioclinica.com>
parent 305482bd41
commit b76724cddc

1 changed file with 6 additions and 6 deletions
@@ -38,19 +38,19 @@ llama-cpp-python 0.1.68
 
 ```
 
-**(5) Download a v3 ggml model**
-- **ggmlv3**
-- file name ends with **q4_0.bin** - indicating it is 4-bit quantized, with quantization method 0
+**(5) Download a v3 gguf v2 model**
+- **ggufv2**
+- file name ends with **Q4_0.gguf** - indicating it is 4-bit quantized, with quantization method 0
 
-https://huggingface.co/TheBloke/open-llama-7b-open-instruct-GGML
+https://huggingface.co/TheBloke/CodeLlama-7B-GGUF
 
 
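For step (5), the snippet below is a minimal sketch of fetching the quantized file from the repo linked above and sanity-checking that it really is GGUF v2. The file name codellama-7b.Q4_0.gguf is an assumption based on TheBloke's usual naming scheme; check the repo's file listing for the exact name.

```
# Sketch only: the file name is assumed from TheBloke's usual naming scheme;
# verify it against the repo's file listing before running.
mkdir -p ./models
curl -L -o ./models/codellama-7b.Q4_0.gguf \
  https://huggingface.co/TheBloke/CodeLlama-7B-GGUF/resolve/main/codellama-7b.Q4_0.gguf

# A GGUF file begins with the magic bytes "GGUF" followed by a little-endian
# uint32 version, so a v2 file shows 4747 5546 0200 0000 in the first 8 bytes.
xxd -l 8 ./models/codellama-7b.Q4_0.gguf
```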
 **(6) run the llama-cpp-python API server with MacOS Metal GPU support**
 ```
 # config your ggml model path
-# make sure it is ggml v3
+# make sure it is gguf v2
 # make sure it is q4_0
-export MODEL=[path to your llama.cpp ggml models]/[ggml-model-name]q4_0.bin
+export MODEL=[path to your llama.cpp ggml models]/[ggml-model-name]Q4_0.gguf
 python3 -m llama_cpp.server --model $MODEL --n_gpu_layers 1
 ```
 
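Once the server in step (6) is running, llama-cpp-python exposes an OpenAI-compatible HTTP API, by default on 127.0.0.1:8000 (treat host, port, and endpoint as version-dependent defaults rather than guarantees). A quick smoke test with curl:

```
# Assumes the server from step (6) is running with its default host and port.
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "def fibonacci(n):", "max_tokens": 64}'
```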