docs: update api.md formatting

commit 34a88cd776 (parent a027a7dd65)
Jeffrey Morgan, 2023-08-08 15:41:19 -07:00


# API

## Endpoints

- [Generate a completion](#generate-a-completion)
- [Create a model](#create-a-model)
- [List local models](#list-local-models)
- [Copy a model](#copy-a-model)
- [Delete a model](#delete-a-model)
- [Pull a model](#pull-a-model)

## Conventions

### Model names

Model names follow a `model:tag` format. Some examples are `orca:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, defaults to `latest`. The tag is used to identify a specific version.

### Durations

All durations are returned in nanoseconds.

## Generate a completion

```
POST /api/generate
```

Generate a response for a given prompt with a provided model. This is a streaming endpoint, so the reply will be a series of responses. The final response object includes statistics and additional data from the request.

### Parameters

- `model`: (required) the [model name](#model-names)
- `prompt`: the prompt to generate a response for

Advanced parameters (see the second request example below):

- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `system`: system prompt (overrides what is defined in the `Modelfile`)
- `template`: the full prompt or prompt template (overrides what is defined in the `Modelfile`)

### Request

```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2:7b",
  "prompt": "Why is the sky blue?"
}'
```
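
A request can also set the advanced parameters listed above. A sketch with illustrative values only (the `system` text and `temperature` are examples, not recommendations):

```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2:7b",
  "prompt": "Why is the sky blue?",
  "system": "You are a patient physics teacher.",
  "options": {
    "temperature": 0.7
  }
}'
```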

### Response

A stream of JSON objects:

```json
{
  "model": "llama2:7b",
  "created_at": "2023-08-04T08:52:19.385406455-07:00",
  "response": "The",
  "done": false
}
```
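
Each object carries one token in `response`, so the full reply can be reassembled by concatenating them. A minimal sketch, assuming `jq` is installed:

```
curl -s -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2:7b",
  "prompt": "Why is the sky blue?"
}' | jq -j '.response // ""'
```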

The final response in the stream also includes additional data about the generation:

```json
{
  "model": "llama2:7b",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "done": true,
  "total_duration": 5589157167,
  "load_duration": 3013701500,
  "sample_count": 114,
  "sample_duration": 81442000,
  "prompt_eval_count": 46,
  "prompt_eval_duration": 1160282000,
  "eval_count": 113,
  "eval_duration": 1325948000
}
```

| field                | description                                              |
| -------------------- | -------------------------------------------------------- |
| model                | the name of the model                                    |
| created_at           | the time the response was generated                      |
| response             | the current token                                        |
| done                 | whether the response is complete                         |
| total_duration       | total time in nanoseconds spent generating the response  |
| load_duration        | time spent in nanoseconds loading the model              |
| sample_count         | number of samples generated                              |
| sample_duration      | time spent generating samples                            |
| prompt_eval_count    | number of tokens in the prompt                           |
| prompt_eval_duration | time spent in nanoseconds evaluating the prompt          |
| eval_count           | number of tokens in the response                         |
| eval_duration        | time in nanoseconds spent generating the response        |
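
Because durations are in nanoseconds, throughput follows directly from the final object: `eval_count / eval_duration * 1e9` is tokens per second (about 85 for the sample values above). A sketch, assuming `jq` is installed:

```
curl -s -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2:7b",
  "prompt": "Why is the sky blue?"
}' | tail -n 1 | jq '.eval_count / .eval_duration * 1e9'
```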

## Create a Model

```
POST /api/create
```

Create a model from a [`Modelfile`](./modelfile.md).

### Parameters

- `name`: name of the model to create
- `path`: path to the Modelfile

### Request

```
curl -X POST http://localhost:11434/api/create -d '{
  "name": "mario",
  "path": "~/Modelfile"
}'
```

### Response

A stream of JSON objects. When finished, `status` is `success`.

```json
{
  "status": "parsing modelfile"
}
```
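
The status stream can be followed until `success` arrives. A minimal sketch, assuming `jq` is installed:

```
curl -s -X POST http://localhost:11434/api/create -d '{
  "name": "mario",
  "path": "~/Modelfile"
}' | jq -r '.status'
```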

## List Local Models

```
GET /api/tags
```

List models that are available locally.

### Request

```
curl http://localhost:11434/api/tags
```

### Response

```json
{
  "models": [
    {
      "name": "llama2:7b",
      "modified_at": "2023-08-02T17:02:23.713454393-07:00",
      "size": 3791730596
    },
    {
      "name": "llama2:13b",
      "modified_at": "2023-08-08T12:08:38.093596297-07:00",
      "size": 7323310500
    }
  ]
}
```
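
To pull out just the model names, a one-liner works (assuming `jq` is installed):

```
curl -s http://localhost:11434/api/tags | jq -r '.models[].name'
```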

## Copy a Model

```
POST /api/copy
```

Copy a model. Creates a model with another name from an existing model.

### Request

```
curl http://localhost:11434/api/copy -d '{
  "source": "llama2:7b",
  "destination": "llama2-backup"
}'
```
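
Copy returns no body, only an HTTP status code (200 on success). One way to check it from the shell:

```
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:11434/api/copy -d '{
  "source": "llama2:7b",
  "destination": "llama2-backup"
}'
```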

## Delete a Model

```
DELETE /api/delete
```

Delete a model and its data.

### Parameters

- `name`: model name to delete

### Request

```
curl -X DELETE http://localhost:11434/api/delete -d '{
  "name": "llama2:13b"
}'
```
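
Like copy, delete returns only an HTTP status code. Listing local models again confirms the name is gone:

```
curl -s http://localhost:11434/api/tags | jq -r '.models[].name'
```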

## Pull a Model

```
POST /api/pull
```

Download a model from the model registry. Cancelled pulls are resumed from where they left off, and multiple calls will share the same download progress.

### Parameters

- `name`: name of the model to pull

### Request

```
curl -X POST http://localhost:11434/api/pull -d '{
  "name": "llama2:7b"
}'
```

### Response

```json
{
  "status": "downloading digestname",
  "digest": "digestname",
  "total": 2142590208
}
```
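
While a layer downloads, the streamed objects also include a `completed` byte count alongside `total`, so percent progress is `completed / total * 100`. A sketch, assuming `jq` is installed:

```
curl -s -X POST http://localhost:11434/api/pull -d '{
  "name": "llama2:7b"
}' | jq -r 'select(.completed != null) | "\(.digest): \(100 * .completed / .total | floor)%"'
```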