
# Server

🙊

## Installation

If using Apple silicon, you need a Python version that supports arm64:

```
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-MacOSX-arm64.sh
bash Miniforge3-MacOSX-arm64.sh
```
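
Once Miniforge is installed, one way to get an arm64 Python is to create and activate a conda environment; the environment name and Python version below are only examples, not part of the original instructions:

```
# Create and activate an arm64 Python environment (name and version are examples)
conda create -n ollama python=3.10
conda activate ollama
```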

Get the dependencies:

```
pip install llama-cpp-python
pip install -r requirements.txt
```

## Running

Put your model in `models/` and run:

```
python server.py
```

## API

### `POST /generate`

- `model`: string - the name of the model to use in the `models` folder
- `prompt`: string - the prompt to use
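
A minimal example request with `curl`, assuming the server listens on `http://localhost:5000`; the port, model filename, and prompt below are placeholders, so check `server.py` and your `models/` folder for the actual values:

```
# Example request (host, port, and model filename are assumptions)
curl -X POST http://localhost:5000/generate \
  -H "Content-Type: application/json" \
  -d '{"model": "ggml-model-q4_0.bin", "prompt": "Why is the sky blue?"}'
```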