Merge remote-tracking branch 'origin/main'

MillionthOdin16 2023-04-05 17:51:43 -04:00
commit b9b6dfd23f
2 changed files with 54 additions and 4 deletions

View file

@@ -15,7 +15,7 @@ This package provides:
 - OpenAI-like API
 - LangChain compatibility
-# Installation
+## Installation
 Install from PyPI:
@@ -23,7 +23,7 @@ Install from PyPI:
 pip install llama-cpp-python
 ```
-# Usage
+## High-level API
 ```python
 >>> from llama_cpp import Llama
@@ -51,6 +51,27 @@ pip install llama-cpp-python
 }
 ```
+## Web Server
+`llama-cpp-python` offers a web server which aims to act as a drop-in replacement for the OpenAI API.
+This allows you to use llama.cpp compatible models with any OpenAI compatible client (language libraries, services, etc).
+To install the server package and get started:
+```bash
+pip install llama-cpp-python[server]
+export MODEL=./models/7B
+python3 -m llama_cpp.server
+```
+Navigate to [http://localhost:8000/docs](http://localhost:8000/docs) to see the OpenAPI documentation.
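As a sketch of what "drop-in replacement for the OpenAI API" means in practice, the snippet below builds an OpenAI-style completion request for the local server using only the Python standard library. The `/v1/completions` path and the payload fields follow the OpenAI completions convention and are assumptions here; check the exact schema the server accepts against the generated docs at `/docs`.

```python
import json
import urllib.request

def build_completion_request(base_url, prompt, max_tokens=32):
    """Build an OpenAI-style completion request for a local llama-cpp-python server.

    Endpoint path and payload fields follow the OpenAI completions
    convention; verify them against http://localhost:8000/docs.
    """
    payload = {"prompt": prompt, "max_tokens": max_tokens, "stop": ["\n"]}
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request("http://localhost:8000", "Q: Name the planets. A: ")
# With the server running, json.load(urllib.request.urlopen(req)) returns the completion JSON.
```

Any OpenAI-compatible client library should work the same way once pointed at the local base URL.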
+## Low-level API
+The low-level API is a direct `ctypes` binding to the C API provided by `llama.cpp`.
+The entire API can be found in [llama_cpp/llama_cpp.py](https://github.com/abetlen/llama-cpp-python/blob/master/llama_cpp/llama_cpp.py) and should mirror [llama.h](https://github.com/ggerganov/llama.cpp/blob/master/llama.h).
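For readers unfamiliar with `ctypes` bindings, the general pattern is roughly the one below: load a shared library, then declare each C function's argument and return types so Python calls marshal correctly. libc's `strlen` is used as a stand-in here purely for illustration; the real binding in `llama_cpp/llama_cpp.py` loads the compiled `llama.cpp` shared library and declares its functions in the same style.

```python
import ctypes

# Load a shared library. On POSIX, CDLL(None) exposes the symbols of
# the running process, which includes libc. A real binding would load
# the compiled llama.cpp shared library instead.
libc = ctypes.CDLL(None)

# Declare the C function's signature so arguments and return values
# are converted correctly between Python and C.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"llama"))  # 5
```

The low-level llama.cpp functions are declared and called the same way, just against a different library handle.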
 # Documentation
 Documentation is available at [https://abetlen.github.io/llama-cpp-python](https://abetlen.github.io/llama-cpp-python).

View file

@@ -1,5 +1,9 @@
-# 🦙 Python Bindings for `llama.cpp`
+# Getting Started
+## 🦙 Python Bindings for `llama.cpp`
+[![Documentation](https://img.shields.io/badge/docs-passing-green.svg)](https://abetlen.github.io/llama-cpp-python)
+[![Tests](https://github.com/abetlen/llama-cpp-python/actions/workflows/test.yaml/badge.svg?branch=main)](https://github.com/abetlen/llama-cpp-python/actions/workflows/test.yaml)
 [![PyPI](https://img.shields.io/pypi/v/llama-cpp-python)](https://pypi.org/project/llama-cpp-python/)
 [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/llama-cpp-python)](https://pypi.org/project/llama-cpp-python/)
 [![PyPI - License](https://img.shields.io/pypi/l/llama-cpp-python)](https://pypi.org/project/llama-cpp-python/)
@@ -21,7 +25,7 @@ Install from PyPI:
 pip install llama-cpp-python
 ```
-## Usage
+## High-level API
 ```python
 >>> from llama_cpp import Llama
@@ -49,8 +53,33 @@ pip install llama-cpp-python
 }
 ```
+## Web Server
+`llama-cpp-python` offers a web server which aims to act as a drop-in replacement for the OpenAI API.
+This allows you to use llama.cpp compatible models with any OpenAI compatible client (language libraries, services, etc).
+To install the server package and get started:
+```bash
+pip install llama-cpp-python[server]
+export MODEL=./models/7B
+python3 -m llama_cpp.server
+```
+Navigate to [http://localhost:8000/docs](http://localhost:8000/docs) to see the OpenAPI documentation.
+## Low-level API
+The low-level API is a direct `ctypes` binding to the C API provided by `llama.cpp`.
+The entire API can be found in [llama_cpp/llama_cpp.py](https://github.com/abetlen/llama-cpp-python/blob/master/llama_cpp/llama_cpp.py) and should mirror [llama.h](https://github.com/ggerganov/llama.cpp/blob/master/llama.h).
 ## Development
+This package is under active development and I welcome any contributions.
+To get started, clone the repository and install the package in development mode:
 ```bash
 git clone git@github.com:abetlen/llama-cpp-python.git
 git submodule update --init --recursive