# API

## Endpoints
- [Generate a completion](#generate-a-completion)
- [Create a model](#create-a-model)
- [List local models](#list-local-models)
- [Copy a model](#copy-a-model)
- [Delete a model](#delete-a-model)
- [Pull a model](#pull-a-model)
- [Generate embeddings](#generate-embeddings)

## Conventions

### Model names

Model names follow a `model:tag` format. Some examples are `orca:3b-q4_1` and `llama2:70b`. The tag is optional and defaults to `latest` if not provided. The tag identifies a specific version of a model.
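
The `model:tag` convention can be handled with a small helper; this is a minimal sketch (the function name is illustrative, not part of the API):

```python
def split_model_name(name: str) -> tuple[str, str]:
    """Split a model name into (model, tag), defaulting the tag to 'latest'."""
    model, sep, tag = name.partition(":")
    return model, (tag if sep else "latest")

# split_model_name("orca:3b-q4_1") -> ("orca", "3b-q4_1")
# split_model_name("llama2")       -> ("llama2", "latest")
```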

### Durations

All durations are returned in nanoseconds.

## Generate a completion

```
POST /api/generate
```

Generate a response for a given prompt with a provided model. This is a streaming endpoint, so the reply is a series of JSON objects. The final response object includes statistics and additional data from the request.

### Parameters

- `model`: (required) the [model name](#model-names)
- `prompt`: the prompt to generate a response for

Advanced parameters:

- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values), such as `temperature`
- `system`: system prompt to use (overrides what is defined in the `Modelfile`)
- `template`: the full prompt or prompt template (overrides what is defined in the `Modelfile`)
- `context`: the context parameter returned from a previous request to `/generate`; this can be used to keep a short conversational memory

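For example, the `context` value from one response can be passed straight into the next request body to continue a conversation. A sketch of building the request payload (the helper name is illustrative; the field names match the parameters above):

```python
import json

def generate_payload(model, prompt, context=None, **options):
    """Build a request body for POST /api/generate."""
    body = {"model": model, "prompt": prompt}
    if context is not None:
        body["context"] = context  # memory returned by a previous /generate call
    if options:
        body["options"] = options  # e.g. temperature, as in the Modelfile docs
    return json.dumps(body)

payload = generate_payload("llama2:7b", "Why is the sky blue?",
                           context=[1, 2, 3], temperature=0.8)
```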
### Request
```
curl -X POST http://localhost:11434/api/generate -d '{
"model": "llama2:7b",
"prompt": "Why is the sky blue?"
}'
```

### Response

A stream of JSON objects:

```json
{
"model": "llama2:7b",
"created_at": "2023-08-04T08:52:19.385406455-07:00",
"response": "The",
"done": false
}
```
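
Each line of the stream is one JSON object, so a client can assemble the full reply by concatenating the `response` fields until `done` is true. A minimal sketch, reading from any iterable of raw JSON lines rather than a specific HTTP library:

```python
import json

def collect_response(lines):
    """Concatenate `response` fragments from a stream of JSON lines until done."""
    text = []
    for line in lines:
        chunk = json.loads(line)
        text.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(text)

stream = [
    '{"model": "llama2:7b", "response": "The", "done": false}',
    '{"model": "llama2:7b", "response": " sky", "done": false}',
    '{"model": "llama2:7b", "response": " is blue.", "done": true}',
]
# collect_response(stream) -> "The sky is blue."
```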
The final response in the stream also includes additional data about the generation:
- `total_duration`: time spent generating the response
- `load_duration`: time spent in nanoseconds loading the model
- `sample_count`: number of samples generated
- `sample_duration`: time spent generating samples
- `prompt_eval_count`: number of tokens in the prompt
- `prompt_eval_duration`: time spent in nanoseconds evaluating the prompt
- `eval_count`: number of tokens in the response
- `eval_duration`: time in nanoseconds spent generating the response
- `context`: an encoding of the conversation used in this response; this can be sent in the next request to keep a conversational memory

To calculate how fast the response is generated in tokens per second (token/s), divide `eval_count` by `eval_duration`, scaling by 10^9 to convert from nanoseconds.
```json
{
"model": "llama2:7b",
"created_at": "2023-08-04T19:22:45.499127Z",
"context": [1, 2, 3],
"done": true,
"total_duration": 5589157167,
"load_duration": 3013701500,
"sample_count": 114,
"sample_duration": 81442000,
"prompt_eval_count": 46,
"prompt_eval_duration": 1160282000,
"eval_count": 113,
"eval_duration": 1325948000
}
```
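
Using the numbers from the final response above, the tokens-per-second figure works out as follows (durations are in nanoseconds, so the ratio is scaled by 10^9):

```python
eval_count = 113            # tokens in the response
eval_duration = 1325948000  # nanoseconds spent generating the response

tokens_per_second = eval_count / eval_duration * 1e9
print(round(tokens_per_second, 1))  # roughly 85.2 token/s
```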

## Create a Model

```
POST /api/create
```
Create a model from a [`Modelfile`](./modelfile.md).

### Parameters

- `name` : name of the model to create
- `path` : path to the Modelfile
### Request
```
curl -X POST http://localhost:11434/api/create -d '{
"name": "mario",
"path": "~/Modelfile"
}'
```
### Response

A stream of JSON objects. When finished, `status` is `success`.

```json
{
"status": "parsing modelfile"
}
```
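
A client can watch the status stream and stop once it reports success. A sketch over raw JSON lines, not tied to any particular HTTP client:

```python
import json

def watch_statuses(lines):
    """Yield `status` strings from the /api/create stream, stopping at 'success'."""
    for line in lines:
        status = json.loads(line)["status"]
        yield status
        if status == "success":
            return

stream = ['{"status": "parsing modelfile"}', '{"status": "success"}']
# list(watch_statuses(stream)) -> ["parsing modelfile", "success"]
```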

## List Local Models

```
GET /api/tags
```

List models that are available locally.
### Request
```
curl http://localhost:11434/api/tags
```
### Response
```json
{
"models": [
{
"name": "llama2:7b",
"modified_at": "2023-08-02T17:02:23.713454393-07:00",
"size": 3791730596
},
{
"name": "llama2:13b",
"modified_at": "2023-08-08T12:08:38.093596297-07:00",
"size": 7323310500
}
]
}
```
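
The `size` field is in bytes, so a listing like the one above can be summarized as follows (a sketch using the sample response):

```python
import json

tags_response = json.loads("""
{
  "models": [
    {"name": "llama2:7b", "modified_at": "2023-08-02T17:02:23.713454393-07:00", "size": 3791730596},
    {"name": "llama2:13b", "modified_at": "2023-08-08T12:08:38.093596297-07:00", "size": 7323310500}
  ]
}
""")

for m in tags_response["models"]:
    print(f'{m["name"]}: {m["size"] / 1e9:.2f} GB')
# llama2:7b: 3.79 GB
# llama2:13b: 7.32 GB
```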

## Copy a Model

```
POST /api/copy
```

Copy a model. Creates a model with another name from an existing model.
### Request
```
curl http://localhost:11434/api/copy -d '{
"source": "llama2:7b",
"destination": "llama2-backup"
}'
```

## Delete a Model

```
DELETE /api/delete
```

Delete a model and its data.

### Parameters

- `name`: model name to delete

### Request

```
curl -X DELETE http://localhost:11434/api/delete -d '{
"name": "llama2:13b"
}'
```

## Pull a Model

```
POST /api/pull
```
Download a model from the model registry. Cancelled pulls are resumed from where they left off, and multiple calls will share the same download progress.

### Parameters

- `name`: name of the model to pull

### Request
```
curl -X POST http://localhost:11434/api/pull -d '{
"name": "llama2:7b"
}'
```
### Response
```json
{
"status": "downloading digestname",
"digest": "digestname",
"total": 2142590208
}
```
## Generate Embeddings

```
POST /api/embeddings
```

Generate embeddings from a model.

### Parameters

- `model`: name of model to generate embeddings from
- `prompt`: text to generate embeddings for

### Request

```
curl -X POST http://localhost:11434/api/embeddings -d '{
"model": "llama2:7b",
"prompt": "Here is an article about llamas..."
}'
```

### Response

```json
{
"embeddings": [
0.5670403838157654, 0.009260174818336964, 0.23178744316101074, -0.2916173040866852, -0.8924556970596313,
0.8785552978515625, -0.34576427936553955, 0.5742510557174683, -0.04222835972905159, -0.137906014919281
]
}
```
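
Embeddings are typically compared with cosine similarity. A self-contained sketch using the vector above (pure stdlib; the function name is illustrative, not part of the API):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

embedding = [0.5670403838157654, 0.009260174818336964, 0.23178744316101074,
             -0.2916173040866852, -0.8924556970596313, 0.8785552978515625,
             -0.34576427936553955, 0.5742510557174683, -0.04222835972905159,
             -0.137906014919281]

# A vector compared with itself scores (up to rounding) 1.0.
score = cosine_similarity(embedding, embedding)
```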