# 🦙 Python Bindings for `llama.cpp`

[![PyPI](https://img.shields.io/pypi/v/llama-cpp-python)](https://pypi.org/project/llama-cpp-python/)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/llama-cpp-python)](https://pypi.org/project/llama-cpp-python/)
[![PyPI - License](https://img.shields.io/pypi/l/llama-cpp-python)](https://pypi.org/project/llama-cpp-python/)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/llama-cpp-python)](https://pypi.org/project/llama-cpp-python/)
Simple Python bindings for **@ggerganov's** [`llama.cpp`](https://github.com/ggerganov/llama.cpp) library.

This package provides:

- Low-level access to the C API via a `ctypes` interface
- High-level Python API for text completion
- OpenAI-like API
- LangChain compatibility
## Installation

Install from PyPI:

```bash
pip install llama-cpp-python
```
## Usage

```python
>>> from llama_cpp import Llama
>>> llm = Llama(model_path="models/7B/...")
>>> output = llm("Q: Name the planets in the solar system? A: ", max_tokens=32, stop=["Q:", "\n"], echo=True)
>>> print(output)
{
  "id": "cmpl-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "object": "text_completion",
  "created": 1679561337,
  "model": "models/7B/...",
  "choices": [
    {
      "text": "Q: Name the planets in the solar system? A: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto.",
      "index": 0,
      "logprobs": None,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 28,
    "total_tokens": 42
  }
}
```
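Because the response mirrors the OpenAI completion format, the generated text lives in the first element of `choices`. A minimal sketch of pulling it out, using the example response above (the `print` comment assumes that exact response):

```python
# Response dict in the shape returned above (ids/timestamps omitted for brevity)
output = {
    "object": "text_completion",
    "choices": [
        {
            "text": "Q: Name the planets in the solar system? A: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto.",
            "index": 0,
            "logprobs": None,
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 14, "completion_tokens": 28, "total_tokens": 42},
}

# With echo=True the prompt is included in the text, so strip it back off
text = output["choices"][0]["text"]
answer = text.split("A: ", 1)[1]
print(answer)  # Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto.
```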
## Development

```bash
git clone git@github.com:abetlen/llama-cpp-python.git
cd llama-cpp-python
git submodule update --init --recursive
# Will need to be re-run any time vendor/llama.cpp is updated
python3 setup.py develop
```
## API Reference

::: llama_cpp.Llama
    options:
        members:
            - __init__
            - tokenize
            - detokenize
            - generate
            - create_embedding
            - create_completion
            - __call__
            - token_bos
            - token_eos
        show_root_heading: true

::: llama_cpp.llama_cpp
    options:
        show_if_no_docstring: true
## License

This project is licensed under the terms of the MIT license.