Updated README.md instructions on how to use *_simple/Dockerfiles
This commit is contained in:
parent 0e0c9bb978
commit 483b6ba53a
6 changed files with 19 additions and 8 deletions

@@ -1,10 +1,21 @@
-# Dockerfiles for building the llama-cpp-python server
+# Simple Dockerfiles for building the llama-cpp-python server with external model bin files
-- `Dockerfile.openblas_simple` - a simple Dockerfile for non-GPU OpenBLAS
-- `Dockerfile.cuda_simple` - a simple Dockerfile for CUDA accelerated CuBLAS
+- `./openblas_simple/Dockerfile` - a simple Dockerfile for non-GPU OpenBLAS, where the model is located outside the Docker image
+- `cd ./openblas_simple`
+- `docker build -t openblas_simple .`
+- `docker run -e USE_MLOCK=0 -e MODEL=/var/model/<model-path> -v <model-root-path>:/var/model -t openblas_simple`
+  where `<model-root-path>/<model-path>` is the full path to the model file on the Docker host system.
+- `./cuda_simple/Dockerfile` - a simple Dockerfile for CUDA accelerated CuBLAS, where the model is located outside the Docker image
+- `cd ./cuda_simple`
+- `docker build -t cuda_simple .`
+- `docker run -e USE_MLOCK=0 -e MODEL=/var/model/<model-path> -v <model-root-path>:/var/model -t cuda_simple`
+  where `<model-root-path>/<model-path>` is the full path to the model file on the Docker host system.
+
+# "Bot-in-a-box" - a method to build a Docker image by choosing a model to be downloaded and loading into a Docker image
+- `cd ./auto_docker`:
 - `hug_model.py` - a Python utility for interactively choosing and downloading the latest `5_1` quantized models from [huggingface.co/TheBloke](https://huggingface.co/TheBloke)
 - `Dockerfile` - a single OpenBLAS and CuBLAS combined Dockerfile that automatically installs a previously downloaded model `model.bin`
 
-# Get model from Hugging Face
+## Get model from Hugging Face
 `python3 ./hug_model.py`
 
 You should now have a model in the current directory and `model.bin` symlinked to it for the subsequent Docker build and copy step, e.g.
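To make the `-e MODEL`/`-v` pairing in the run commands above concrete, here is a small shell sketch of how the host-side path maps to the container-side path. The model root and file name are invented examples, not part of the commit:

```shell
# Invented example paths; substitute your own.
MODEL_ROOT=/srv/models                    # <model-root-path> on the Docker host
MODEL_FILE=llama-2-7b.ggmlv3.q5_1.bin     # <model-path> under that directory

# -v mounts $MODEL_ROOT at /var/model inside the container, so MODEL must
# use the container-side prefix, not the host path:
CONTAINER_MODEL="/var/model/$MODEL_FILE"
echo docker run -e USE_MLOCK=0 -e "MODEL=$CONTAINER_MODEL" \
     -v "$MODEL_ROOT:/var/model" -t openblas_simple
```

The host path `$MODEL_ROOT/$MODEL_FILE` and the container path `/var/model/$MODEL_FILE` refer to the same file through the bind mount.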
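The `model.bin` symlink layout that the README describes can be sketched as follows; the downloaded file name here is hypothetical, and `hug_model.py` would normally create the link itself:

```shell
cd "$(mktemp -d)"
touch llama-7b.ggmlv3.q5_1.bin              # stand-in for the downloaded model
ln -sf llama-7b.ggmlv3.q5_1.bin model.bin   # the link the Docker COPY step expects
TARGET=$(readlink model.bin)
echo "model.bin -> $TARGET"   # model.bin -> llama-7b.ggmlv3.q5_1.bin
```

Because the Dockerfile copies `model.bin`, swapping models only requires repointing the symlink, not editing the build.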
@@ -1,5 +1,5 @@
 ARG CUDA_IMAGE="12.1.1-devel-ubuntu22.04"
-FROM ${CUDA_IMAGE}
+FROM nvidia/cuda:${CUDA_IMAGE}
 
 # We need to set the host to 0.0.0.0 to allow outside access
 ENV HOST 0.0.0.0
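The `FROM` fix above matters because the bare `${CUDA_IMAGE}` default expands to only a tag (`12.1.1-devel-ubuntu22.04`), not a full image reference; prefixing `nvidia/cuda:` makes it a pullable image. A shell analogue of Docker's `ARG` default-and-override behavior:

```shell
unset CUDA_IMAGE
# ARG CUDA_IMAGE="12.1.1-devel-ubuntu22.04" acts as a default that
# `docker build --build-arg CUDA_IMAGE=...` can override:
CUDA_IMAGE=${CUDA_IMAGE:-12.1.1-devel-ubuntu22.04}
IMAGE_REF="nvidia/cuda:$CUDA_IMAGE"
echo "$IMAGE_REF"   # nvidia/cuda:12.1.1-devel-ubuntu22.04
```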
@@ -10,7 +10,7 @@ COPY . .
 RUN apt update && apt install -y python3 python3-pip
 RUN python3 -m pip install --upgrade pip pytest cmake scikit-build setuptools fastapi uvicorn sse-starlette
 
-RUN LLAMA_CUBLAS=1 python3 setup.py develop
+RUN LLAMA_CUBLAS=1 pip install llama-cpp-python
 
 # Run the server
 CMD python3 -m llama_cpp.server
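The `LLAMA_CUBLAS=1 pip install llama-cpp-python` line relies on the shell's one-shot environment assignment: the variable is visible to the build for that single command only, which is how the package's build is steered toward CuBLAS. A minimal demonstration of the idiom itself, independent of pip:

```shell
unset LLAMA_CUBLAS
# VAR=value cmd makes VAR visible only inside cmd's environment:
SEEN=$(LLAMA_CUBLAS=1 sh -c 'printf "%s" "$LLAMA_CUBLAS"')
AFTER=${LLAMA_CUBLAS:-unset}
echo "during build: $SEEN, afterwards: $AFTER"   # during build: 1, afterwards: unset
```

The same pattern applies to the `LLAMA_OPENBLAS=1` install line in the OpenBLAS Dockerfile below.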
@@ -9,7 +9,7 @@ COPY . .
 RUN apt update && apt install -y libopenblas-dev ninja-build build-essential
 RUN python -m pip install --upgrade pip pytest cmake scikit-build setuptools fastapi uvicorn sse-starlette
 
-RUN LLAMA_OPENBLAS=1 python3 setup.py develop
+RUN LLAMA_OPENBLAS=1 pip install llama_cpp_python --verbose
 
 # Run the server
 CMD python3 -m llama_cpp.server