# API
## Endpoints
- [Generate a completion](#generate-a-completion)
- [Send Chat Messages](#send-chat-messages-coming-in-0114)
- [Create a Model](#create-a-model)
- [List Local Models](#list-local-models)
- [Show Model Information](#show-model-information)
- [Copy a Model](#copy-a-model)
- [Delete a Model](#delete-a-model)
- [Pull a Model](#pull-a-model)
- [Push a Model](#push-a-model)
- [Generate Embeddings](#generate-embeddings)
## Conventions
### Model names
Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, defaults to `latest`. The tag is used to identify a specific version.
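The tag-defaulting rule can be sketched with a small helper (`split_model_name` is a hypothetical client-side utility, not part of the API):

```python
def split_model_name(name: str) -> tuple[str, str]:
    """Split a model reference into (model, tag), defaulting the tag to 'latest'."""
    model, _, tag = name.partition(":")
    return model, tag or "latest"

print(split_model_name("orca-mini:3b-q4_1"))  # ('orca-mini', '3b-q4_1')
print(split_model_name("llama2"))             # ('llama2', 'latest')
```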
### Durations
All durations are returned in nanoseconds.
### Streaming responses
Certain endpoints stream responses as JSON objects.
## Generate a completion
```shell
POST /api/generate
```
Generate a response for a given prompt with a provided model. This is a streaming endpoint, so there will be a series of responses. The final response object will include statistics and additional data from the request.
### Parameters
- `model`: (required) the [model name](#model-names)
- `prompt`: the prompt to generate a response for

Advanced parameters (optional):

- `format`: the format to return a response in. Currently the only accepted value is `json`
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `system`: system prompt to use (overrides what is defined in the `Modelfile`)
- `template`: the full prompt or prompt template (overrides what is defined in the `Modelfile`)
- `context`: the context parameter returned from a previous request to `/generate`; this can be used to keep a short conversational memory
- `stream`: if `false`, the response will be returned as a single response object, rather than a stream of objects
- `raw`: if `true`, no formatting will be applied to the prompt. You may choose to use the `raw` parameter if you are specifying a full templated prompt in your request to the API.
### JSON mode
Enable JSON mode by setting the `format` parameter to `json`. This will structure the response as valid JSON. See the JSON mode [example](#request-json-mode) below.

> Note: it's important to instruct the model to use JSON in the `prompt`. Otherwise, the model may generate large amounts of whitespace.
### Examples
#### Request
```shell
curl http://localhost:11434/api/generate -d '{
"model": "llama2",
"prompt": "Why is the sky blue?"
}'
```
#### Response
A stream of JSON objects is returned:
```json
{
"model": "llama2",
"created_at": "2023-08-04T08:52:19.385406455-07:00",
"response": "The",
"done": false
}
```
The final response in the stream also includes additional data about the generation:
- `total_duration`: time spent generating the response
- `load_duration`: time spent in nanoseconds loading the model
- `sample_count`: number of samples generated
- `sample_duration`: time spent generating samples
- `prompt_eval_count`: number of tokens in the prompt
- `prompt_eval_duration`: time spent in nanoseconds evaluating the prompt
- `eval_count`: number of tokens in the response
- `eval_duration`: time in nanoseconds spent generating the response
- `context`: an encoding of the conversation used in this response; this can be sent in the next request to keep a conversational memory
- `response`: empty if the response was streamed; if not streamed, this will contain the full response
To calculate how fast the response is generated in tokens per second (token/s), divide `eval_count` by `eval_duration` and multiply by `10^9` (durations are reported in nanoseconds).
```json
{
"model": "llama2",
"created_at": "2023-08-04T19:22:45.499127Z",
"response": "",
"context": [1, 2, 3],
"done": true,
"total_duration": 5589157167,
"load_duration": 3013701500,
"sample_count": 114,
"sample_duration": 81442000,
"prompt_eval_count": 46,
"prompt_eval_duration": 1160282000,
"eval_count": 113,
"eval_duration": 1325948000
}
```
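Consuming the stream can be sketched as follows. This is a minimal sketch that operates on captured response lines rather than a live connection; the token/s figure uses the nanosecond durations described above, and the sample values are abbreviated:

```python
import json

# Captured lines from a streamed /api/generate response (abbreviated sample data).
lines = [
    '{"model": "llama2", "response": "The", "done": false}',
    '{"model": "llama2", "response": " sky", "done": false}',
    '{"model": "llama2", "response": " is blue.", "done": true, "eval_count": 113, "eval_duration": 1325948000}',
]

full_response = ""
for line in lines:
    chunk = json.loads(line)
    full_response += chunk.get("response", "")
    if chunk["done"]:
        # eval_duration is in nanoseconds, so scale to tokens per second.
        tokens_per_second = chunk["eval_count"] / chunk["eval_duration"] * 1e9
        print(f"{tokens_per_second:.1f} token/s")

print(full_response)  # The sky is blue.
```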
#### Request (No streaming)
A response can be received in one reply when streaming is off.
```shell
curl http://localhost:11434/api/generate -d '{
"model": "llama2",
"prompt": "Why is the sky blue?",
"stream": false
}'
```
#### Response
If `stream` is set to `false`, the response will be a single JSON object:
```json
{
"model": "llama2",
"created_at": "2023-08-04T19:22:45.499127Z",
"response": "The sky is blue because it is the color of the sky.",
"context": [1, 2, 3],
"done": true,
"total_duration": 5589157167,
"load_duration": 3013701500,
"sample_count": 114,
"sample_duration": 81442000,
"prompt_eval_count": 46,
"prompt_eval_duration": 1160282000,
"eval_count": 13,
"eval_duration": 1325948000
}
```
#### Request (Raw Mode)
In some cases you may wish to bypass the templating system and provide a full prompt. In this case, you can use the `raw` parameter to disable formatting.
```shell
curl http://localhost:11434/api/generate -d '{
"model": "mistral",
"prompt": "[INST] why is the sky blue? [/INST]",
"raw": true,
"stream": false
}'
```
#### Response
```json
{
"model": "mistral",
"created_at": "2023-11-03T15:36:02.583064Z",
"response": " The sky appears blue because of a phenomenon called Rayleigh scattering.",
"context": [1, 2, 3],
"done": true,
"total_duration": 14648695333,
"load_duration": 3302671417,
"prompt_eval_count": 14,
"prompt_eval_duration": 286243000,
"eval_count": 129,
"eval_duration": 10931424000
}
```
#### Request (JSON mode)
```shell
curl http://localhost:11434/api/generate -d '{
"model": "llama2",
"prompt": "What color is the sky at different times of the day? Respond using JSON",
"format": "json",
"stream": false
}'
```
#### Response
```json
{
"model": "llama2",
"created_at": "2023-11-09T21:07:55.186497Z",
"response": "{\n\"morning\": {\n\"color\": \"blue\"\n},\n\"noon\": {\n\"color\": \"blue-gray\"\n},\n\"afternoon\": {\n\"color\": \"warm gray\"\n},\n\"evening\": {\n\"color\": \"orange\"\n}\n}\n",
"done": true,
"total_duration": 4661289125,
"load_duration": 1714434500,
"prompt_eval_count": 36,
"prompt_eval_duration": 264132000,
"eval_count": 75,
"eval_duration": 2112149000
}
```
The value of `response` will be a string containing JSON similar to:
```json
{
"morning": {
"color": "blue"
},
"noon": {
"color": "blue-gray"
},
"afternoon": {
"color": "warm gray"
},
"evening": {
"color": "orange"
}
}
```
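Because `response` is a JSON-encoded string, a client can decode it directly. A minimal sketch using the example value above:

```python
import json

# The "response" string from the JSON-mode example above.
response = '{\n"morning": {\n"color": "blue"\n},\n"noon": {\n"color": "blue-gray"\n},\n"afternoon": {\n"color": "warm gray"\n},\n"evening": {\n"color": "orange"\n}\n}\n'

colors = json.loads(response)
print(colors["morning"]["color"])  # blue
```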
#### Request (With options)
If you want to set custom options for the model at runtime rather than in the Modelfile, you can do so with the `options` parameter. This example sets every available option, but you can set any of them individually and omit the ones you do not want to override.
```shell
curl http://localhost:11434/api/generate -d '{
"model": "llama2",
"prompt": "Why is the sky blue?",
"stream": false,
"options": {
"num_keep": 5,
"seed": 42,
"num_predict": 100,
"top_k": 20,
"top_p": 0.9,
"tfs_z": 0.5,
"typical_p": 0.7,
"repeat_last_n": 33,
"temperature": 0.8,
"repeat_penalty": 1.2,
"presence_penalty": 1.5,
"frequency_penalty": 1.0,
"mirostat": 1,
"mirostat_tau": 0.8,
"mirostat_eta": 0.6,
"penalize_newline": true,
"stop": ["\n", "user:"],
"numa": false,
"num_ctx": 1024,
"num_batch": 2,
"num_gqa": 1,
"num_gpu": 1,
"main_gpu": 0,
"low_vram": false,
"f16_kv": true,
"logits_all": false,
"vocab_only": false,
"use_mmap": true,
"use_mlock": false,
"embedding_only": false,
"rope_frequency_base": 1.1,
"rope_frequency_scale": 0.8,
"num_thread": 8
}
}'
```
#### Response
```json
{
"model": "llama2",
"created_at": "2023-08-04T19:22:45.499127Z",
"response": "The sky is blue because it is the color of the sky.",
"done": true,
"total_duration": 5589157167,
"load_duration": 3013701500,
"sample_count": 114,
"sample_duration": 81442000,
"prompt_eval_count": 46,
"prompt_eval_duration": 1160282000,
"eval_count": 13,
"eval_duration": 1325948000
}
```
## Send Chat Messages (coming in 0.1.14)
```shell
POST /api/chat
```
Generate the next message in a chat with a provided model. This is a streaming endpoint, so there will be a series of responses. The final response object will include statistics and additional data from the request.
### Parameters
- `model`: (required) the [model name](#model-names)
- `messages`: the messages of the chat; this can be used to keep a chat memory

Advanced parameters (optional):

- `format`: the format to return a response in. Currently the only accepted value is `json`
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `template`: the full prompt or prompt template (overrides what is defined in the `Modelfile`)
- `stream`: if `false`, the response will be returned as a single response object, rather than a stream of objects
### Examples
#### Request
Send a chat message with a streaming response.
```shell
curl http://localhost:11434/api/chat -d '{
"model": "llama2",
"messages": [
{
"role": "user",
"content": "why is the sky blue?"
}
]
}'
```
#### Response
A stream of JSON objects is returned:
```json
{
"model": "llama2",
"created_at": "2023-08-04T08:52:19.385406455-07:00",
"message": {
    "role": "assistant",
"content": "The"
},
"done": false
}
```
Final response:
```json
{
"model": "llama2",
"created_at": "2023-08-04T19:22:45.499127Z",
"done": true,
"total_duration": 5589157167,
"load_duration": 3013701500,
"sample_count": 114,
"sample_duration": 81442000,
"prompt_eval_count": 46,
"prompt_eval_duration": 1160282000,
"eval_count": 113,
"eval_duration": 1325948000
}
```
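As with `/api/generate`, the assistant's full reply is the concatenation of the streamed `message.content` fields. A minimal sketch over captured lines (abbreviated sample data, not a live connection):

```python
import json

# Captured lines from a streamed /api/chat response (abbreviated sample data).
lines = [
    '{"message": {"role": "assistant", "content": "The"}, "done": false}',
    '{"message": {"role": "assistant", "content": " sky is blue."}, "done": false}',
    '{"done": true, "eval_count": 113, "eval_duration": 1325948000}',
]

content = ""
for line in lines:
    chunk = json.loads(line)
    if not chunk["done"]:
        content += chunk["message"]["content"]

print(content)  # The sky is blue.
```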
#### Request (With History)
Send a chat message with a conversation history.
```shell
curl http://localhost:11434/api/chat -d '{
"model": "llama2",
"messages": [
{
"role": "user",
"content": "why is the sky blue?"
},
{
"role": "assistant",
"content": "due to rayleigh scattering."
},
{
"role": "user",
"content": "how is that different than mie scattering?"
}
]
}'
```
#### Response
A stream of JSON objects is returned:
```json
{
"model": "llama2",
"created_at": "2023-08-04T08:52:19.385406455-07:00",
"message": {
    "role": "assistant",
"content": "The"
},
"done": false
}
```
Final response:
```json
{
"model": "llama2",
"created_at": "2023-08-04T19:22:45.499127Z",
"done": true,
"total_duration": 5589157167,
"load_duration": 3013701500,
"sample_count": 114,
"sample_duration": 81442000,
"prompt_eval_count": 46,
"prompt_eval_duration": 1160282000,
"eval_count": 113,
"eval_duration": 1325948000
}
```
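Because the endpoint does not store the conversation for you, the client carries the history itself: append the assistant's reply, then the next user turn, before the next call. A minimal sketch of that bookkeeping, using the turns from the example above:

```python
# Start the conversation with the first user turn.
messages = [{"role": "user", "content": "why is the sky blue?"}]

# After the model replies, append its message to keep the memory...
assistant_reply = "due to rayleigh scattering."  # assembled from the stream
messages.append({"role": "assistant", "content": assistant_reply})

# ...then append the next user turn before the next /api/chat request.
messages.append({"role": "user", "content": "how is that different than mie scattering?"})

print(len(messages))  # 3
```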
## Create a Model
```shell
POST /api/create
```
Create a model from a [`Modelfile`](./modelfile.md). It is recommended to set `modelfile` to the content of the Modelfile rather than just set `path`. This is a requirement for remote create. Remote model creation should also create any file blobs, such as those referenced by the `FROM` and `ADAPTER` fields, explicitly with the server using [Create a Blob](#create-a-blob), and set the field values to the path indicated in the response.
### Parameters
- `name`: name of the model to create
- `modelfile`: (optional) contents of the Modelfile
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
- `path`: (optional) path to the Modelfile
### Examples
#### Request
```shell
curl http://localhost:11434/api/create -d '{
"name": "mario",
"modelfile": "FROM llama2\nSYSTEM You are mario from Super Mario Bros."
}'
```
#### Response
A stream of JSON objects. When finished, `status` is `success`.
```json
{
"status": "parsing modelfile"
}
```
### Check if a Blob Exists
```shell
HEAD /api/blobs/:digest
```
Check if a blob is known to the server.
#### Query Parameters
- `digest`: the SHA256 digest of the blob
#### Examples
##### Request
```shell
curl -I http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
```
##### Response
Returns 200 OK if the blob exists and 404 Not Found if it does not.
### Create a Blob
```shell
POST /api/blobs/:digest
```
Create a blob from a file. Returns the server file path.
#### Query Parameters
- `digest`: the expected SHA256 digest of the file
#### Examples
##### Request
```shell
curl -T model.bin -X POST http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
```
##### Response
Returns 201 Created if the blob was successfully created.
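The `:digest` path component is the SHA256 of the file being uploaded, which can be computed client-side. A sketch using Python's `hashlib` (the `file_digest` helper is illustrative, not part of the API):

```python
import hashlib

def file_digest(path: str) -> str:
    """Return the blob digest for a file in the sha256:<hex> form used by the API."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB blocks so large model files are not read into memory at once.
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return "sha256:" + h.hexdigest()
```

The returned value is what goes into the `POST /api/blobs/:digest` URL.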
## List Local Models
```shell
GET /api/tags
```
List models that are available locally.
### Examples
#### Request
```shell
curl http://localhost:11434/api/tags
```
#### Response
A single JSON object will be returned.
```json
{
"models": [
{
"name": "llama2",
"modified_at": "2023-08-02T17:02:23.713454393-07:00",
"size": 3791730596
},
{
"name": "llama2:13b",
"modified_at": "2023-08-08T12:08:38.093596297-07:00",
"size": 7323310500
}
]
}
```
## Show Model Information
```shell
POST /api/show
```
Show details about a model including modelfile, template, parameters, license, and system prompt.
### Parameters
- `name`: name of the model to show
### Examples
#### Request
```shell
curl http://localhost:11434/api/show -d '{
"name": "llama2"
}'
```
#### Response
```json
{
"license": "<contents of license block>",
"modelfile": "# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this one, replace the FROM line with:\n# FROM llama2:latest\n\nFROM /Users/username/.ollama/models/blobs/sha256:8daa9615cce30c259a9555b1cc250d461d1bc69980a274b44d7eda0be78076d8\nTEMPLATE \"\"\"[INST] {{ if and .First .System }}<<SYS>>{{ .System }}<</SYS>>\n\n{{ end }}{{ .Prompt }} [/INST] \"\"\"\nSYSTEM \"\"\"\"\"\"\nPARAMETER stop [INST]\nPARAMETER stop [/INST]\nPARAMETER stop <<SYS>>\nPARAMETER stop <</SYS>>\n",
"parameters": "stop [INST]\nstop [/INST]\nstop <<SYS>>\nstop <</SYS>>",
"template": "[INST] {{ if and .First .System }}<<SYS>>{{ .System }}<</SYS>>\n\n{{ end }}{{ .Prompt }} [/INST] "
}
```
## Copy a Model
```shell
POST /api/copy
```
Copy a model. Creates a model with another name from an existing model.
### Examples
#### Request
```shell
curl http://localhost:11434/api/copy -d '{
"source": "llama2",
"destination": "llama2-backup"
}'
```
#### Response
The only response is a 200 OK if successful.
## Delete a Model
```shell
DELETE /api/delete
```
Delete a model and its data.
### Parameters
- `name`: model name to delete
### Examples
#### Request
```shell
curl -X DELETE http://localhost:11434/api/delete -d '{
"name": "llama2:13b"
}'
```
#### Response
If successful, the only response is a 200 OK.
## Pull a Model
```shell
POST /api/pull
```
Download a model from the ollama library. Cancelled pulls are resumed from where they left off, and multiple calls will share the same download progress.
### Parameters
- `name`: name of the model to pull
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pulling from your own library during development.
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
### Examples
#### Request
```shell
curl http://localhost:11434/api/pull -d '{
"name": "llama2"
}'
```
#### Response
If `stream` is not specified, or set to `true`, a stream of JSON objects is returned.

The first object is the manifest:
```json
{
"status": "pulling manifest"
}
```
Then there is a series of downloading responses. Until a download is completed, the `completed` key may not be included. The number of files to be downloaded depends on the number of layers specified in the manifest.
```json
{
"status": "downloading digestname",
"digest": "digestname",
"total": 2142590208,
"completed": 241970
}
```
After all the files are downloaded, the final responses are:
```json
{
"status": "verifying sha256 digest"
}
{
"status": "writing manifest"
}
{
"status": "removing any unused layers"
}
{
"status": "success"
}
```
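A progress percentage can be derived from the `completed` and `total` fields of the downloading responses. A minimal sketch over a single captured line, guarding against `completed` being absent early in the download:

```python
import json

# A captured downloading response from /api/pull (placeholder digest as above).
line = '{"status": "downloading digestname", "digest": "digestname", "total": 2142590208, "completed": 241970}'

chunk = json.loads(line)
if "completed" in chunk and chunk.get("total"):
    percent = chunk["completed"] / chunk["total"] * 100
    print(f"{chunk['status']}: {percent:.2f}%")
```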
If `stream` is set to `false`, then the response is a single JSON object:
```json
{
"status": "success"
2023-08-04 23:08:11 +00:00
}
```
## Push a Model
```shell
POST /api/push
```
Upload a model to a model library. Requires registering for ollama.ai and adding a public key first.
### Parameters
- `name`: name of the model to push in the form of `<namespace>/<model>:<tag>`
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pushing to your library during development.
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
### Examples
#### Request
```shell
curl http://localhost:11434/api/push -d '{
"name": "mattw/pygmalion:latest"
}'
```
#### Response
If `stream` is not specified, or set to `true`, a stream of JSON objects is returned:
```json
{ "status": "retrieving manifest" }
```
and then:
```json
{
"status": "starting upload",
"digest": "sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
"total": 1928429856
}
```
Then there is a series of uploading responses:
```json
{
"status": "starting upload",
"digest": "sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
"total": 1928429856
}
```
Finally, when the upload is complete:
```json
{"status":"pushing manifest"}
{"status":"success"}
```
If `stream` is set to `false`, then the response is a single JSON object:
```json
{ "status": "success" }
```
## Generate Embeddings
```shell
POST /api/embeddings
```
Generate embeddings from a model.
### Parameters
- `model`: name of model to generate embeddings from
- `prompt`: text to generate embeddings for
Advanced parameters:
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
### Examples
#### Request
```shell
curl http://localhost:11434/api/embeddings -d '{
"model": "llama2",
"prompt": "Here is an article about llamas..."
}'
```
#### Response
```json
{
"embedding": [
0.5670403838157654, 0.009260174818336964, 0.23178744316101074, -0.2916173040866852, -0.8924556970596313,
0.8785552978515625, -0.34576427936553955, 0.5742510557174683, -0.04222835972905159, -0.137906014919281
]
}
```
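Embeddings are commonly compared with cosine similarity. A dependency-free sketch; the vectors below are truncated values from the example above, so any similarity computed from them is only illustrative:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors, in [-1.0, 1.0]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Truncated example embedding values (illustrative only).
e1 = [0.567, 0.009, 0.231, -0.291, -0.892]
e2 = [0.878, -0.345, 0.574, -0.042, -0.137]

print(round(cosine_similarity(e1, e1), 3))  # 1.0 (identical vectors)
```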