# API

## Endpoints
- [Generate a completion](#generate-a-completion)
- [Generate a chat completion](#generate-a-chat-completion)
- [Create a Model](#create-a-model)
- [List Local Models](#list-local-models)
- [Show Model Information](#show-model-information)
- [Copy a Model](#copy-a-model)
- [Delete a Model](#delete-a-model)
- [Pull a Model](#pull-a-model)
- [Push a Model](#push-a-model)
- [Generate Embeddings](#generate-embeddings)
- [List Running Models](#list-running-models)

## Conventions

### Model names

Model names follow a `model:tag` format, where `model` can have an optional namespace such as `example/model`. Some examples are `orca-mini:3b-q4_1` and `llama3:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version.

### Durations

All durations are returned in nanoseconds. For example, a `total_duration` of `5043500667` is about 5.04 seconds.

### Streaming responses

Certain endpoints stream responses as JSON objects. Streaming can be disabled by providing `{"stream": false}` for these endpoints.

## Generate a completion

```shell
POST /api/generate
```

Generate a response for a given prompt with a provided model. This is a streaming endpoint, so there will be a series of responses. The final response object will include statistics and additional data from the request.

### Parameters

- `model`: (required) the [model name](#model-names)
- `prompt`: the prompt to generate a response for
- `suffix`: the text after the model response
- `images`: (optional) a list of base64-encoded images (for multimodal models such as `llava`)

Advanced parameters (optional):

- `format`: the format to return a response in. Currently the only accepted value is `json`
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `system`: system message to use (overrides what is defined in the `Modelfile`)
- `template`: the prompt template to use (overrides what is defined in the `Modelfile`)
- `context`: the context parameter returned from a previous request to `/generate`; this can be used to keep a short conversational memory
- `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
- `raw`: if `true` no formatting will be applied to the prompt. You may choose to use the `raw` parameter if you are specifying a full templated prompt in your request to the API
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)

#### JSON mode

Enable JSON mode by setting the `format` parameter to `json`. This will structure the response as a valid JSON object. See the JSON mode [example](#request-json-mode) below.

> [!IMPORTANT]
> It's important to instruct the model to use JSON in the `prompt`. Otherwise, the model may generate large amounts of whitespace.

### Examples

#### Generate request (Streaming)

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?"
}'
```

##### Response

A stream of JSON objects is returned:

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T08:52:19.385406455-07:00",
  "response": "The",
  "done": false
}
```

The final response in the stream also includes additional data about the generation:

- `total_duration`: time spent generating the response
- `load_duration`: time spent in nanoseconds loading the model
- `prompt_eval_count`: number of tokens in the prompt
- `prompt_eval_duration`: time spent in nanoseconds evaluating the prompt
- `eval_count`: number of tokens in the response
- `eval_duration`: time in nanoseconds spent generating the response
- `context`: an encoding of the conversation used in this response; this can be sent in the next request to keep a conversational memory
- `response`: empty if the response was streamed; if not streamed, this will contain the full response

To calculate how fast the response is generated in tokens per second (token/s), divide `eval_count` by `eval_duration` and multiply by `10^9`.

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 10706818083,
  "load_duration": 6338219291,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 130079000,
  "eval_count": 259,
  "eval_duration": 4232710000
}
```
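
For the final response above, that works out to `259 / 4232710000 * 10^9 ≈ 61` token/s. A minimal sketch of the same calculation with `jq` (assuming `jq` is installed and streaming is disabled):

```shell
# tokens per second = eval_count / eval_duration * 1e9
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}' | jq '.eval_count / .eval_duration * 1e9'
```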

#### Request (No streaming)

##### Request

A response can be received in one reply when streaming is off.

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

##### Response

If `stream` is set to `false`, the response will be a single JSON object:

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "The sky is blue because it is the color of the sky.",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 5043500667,
  "load_duration": 5025959,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 325953000,
  "eval_count": 290,
  "eval_duration": 4709213000
}
```
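
Because the single response object includes `context`, a follow-up request can send it back to keep a short conversational memory. A minimal sketch (the `[1, 2, 3]` value is the placeholder from the response above; a real request would pass the actual returned array):

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "And why is it red at sunset?",
  "context": [1, 2, 3],
  "stream": false
}'
```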
#### Request (with suffix)
##### Request
```shell
curl http://localhost:11434/api/generate -d '{
"model": "codellama:code",
"prompt": "def compute_gcd(a, b):",
"suffix": " return result",
"options": {
"temperature": 0
},
"stream": false
}'
```
##### Response
```json
{
"model": "codellama:code",
"created_at": "2024-07-22T20:47:51.147561Z",
"response": "\n if a == 0:\n return b\n else:\n return compute_gcd(b % a, a)\n\ndef compute_lcm(a, b):\n result = (a * b) / compute_gcd(a, b)\n",
"done": true,
"done_reason": "stop",
"context": [...],
"total_duration": 1162761250,
"load_duration": 6683708,
"prompt_eval_count": 17,
"prompt_eval_duration": 201222000,
"eval_count": 63,
"eval_duration": 953997000
}
```

#### Request (JSON mode)

> [!IMPORTANT]
> When `format` is set to `json`, the output will always be a well-formed JSON object. It's important to also instruct the model to respond in JSON.

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "What color is the sky at different times of the day? Respond using JSON",
  "format": "json",
  "stream": false
}'
```
##### Response
```json
{
  "model": "llama3",
  "created_at": "2023-11-09T21:07:55.186497Z",
  "response": "{\n\"morning\": {\n\"color\": \"blue\"\n},\n\"noon\": {\n\"color\": \"blue-gray\"\n},\n\"afternoon\": {\n\"color\": \"warm gray\"\n},\n\"evening\": {\n\"color\": \"orange\"\n}\n}\n",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 4648158584,
  "load_duration": 4071084,
  "prompt_eval_count": 36,
  "prompt_eval_duration": 439038000,
  "eval_count": 180,
  "eval_duration": 4196918000
}
```
The value of `response` will be a string containing JSON similar to:
```json
{
"morning": {
"color": "blue"
},
"noon": {
"color": "blue-gray"
},
"afternoon": {
"color": "warm gray"
},
"evening": {
"color": "orange"
}
}
```
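
Because `response` is a string, it needs a second parse to become a JSON object; a minimal sketch with `jq` and its `fromjson` filter (assuming `jq` is installed):

```shell
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "What color is the sky at different times of the day? Respond using JSON",
  "format": "json",
  "stream": false
}' | jq '.response | fromjson'
```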
#### Request (with images)
To submit images to multimodal models such as `llava` or `bakllava`, provide a list of base64-encoded `images`:

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
"model": "llava",
"prompt":"What is in this picture?",
"stream": false,
"images": ["iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS4jDSsxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6pSETsEPGqZOndr2uuuR5rF
169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrUwKPqsmVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1k
}'
```
##### Response

```json
{
"model": "llava",
"created_at": "2023-11-03T15:36:02.583064Z",
"response": "A happy cartoon character, which is cute and cheerful.",
"done": true,
"context": [1, 2, 3],
"total_duration": 2938432250,
"load_duration": 2559292,
"prompt_eval_count": 1,
"prompt_eval_duration": 2195557000,
"eval_count": 44,
"eval_duration": 736432000
}
```
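
To build this request from a local image file, the file must first be base64-encoded without line wrapping. A sketch, assuming GNU `base64` (which needs `-w 0` to disable wrapping; the macOS `base64` does not wrap by default):

```shell
# Encode the image and substitute it into the request body
IMG="$(base64 -w 0 image.png)"
curl http://localhost:11434/api/generate -d "{
  \"model\": \"llava\",
  \"prompt\": \"What is in this picture?\",
  \"stream\": false,
  \"images\": [\"$IMG\"]
}"
```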

#### Request (Raw Mode)

In some cases, you may wish to bypass the templating system and provide a full prompt. In this case, you can use the `raw` parameter to disable templating. Also note that raw mode will not return a context.

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "[INST] why is the sky blue? [/INST]",
  "raw": true,
  "stream": false
}'
```

#### Request (Reproducible outputs)

For reproducible outputs, set `seed` to a number:

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "options": {
    "seed": 123
  }
}'
```

##### Response

```json
{
  "model": "mistral",
  "created_at": "2023-11-03T15:36:02.583064Z",
  "response": " The sky appears blue because of a phenomenon called Rayleigh scattering.",
  "done": true,
  "total_duration": 8493852375,
  "load_duration": 6589624375,
  "prompt_eval_count": 14,
  "prompt_eval_duration": 119039000,
  "eval_count": 110,
  "eval_duration": 1779061000
}
```

#### Generate request (With options)

If you want to set custom options for the model at runtime rather than in the Modelfile, you can do so with the `options` parameter. This example sets every available option, but you can set any of them individually and omit the ones you do not want to override.

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false,
  "options": {
    "num_keep": 5,
    "seed": 42,
    "num_predict": 100,
    "top_k": 20,
    "top_p": 0.9,
    "min_p": 0.0,
    "tfs_z": 0.5,
    "typical_p": 0.7,
    "repeat_last_n": 33,
    "temperature": 0.8,
    "repeat_penalty": 1.2,
    "presence_penalty": 1.5,
    "frequency_penalty": 1.0,
    "mirostat": 1,
    "mirostat_tau": 0.8,
    "mirostat_eta": 0.6,
    "penalize_newline": true,
    "stop": ["\n", "user:"],
    "numa": false,
    "num_ctx": 1024,
    "num_batch": 2,
    "num_gpu": 1,
    "main_gpu": 0,
    "low_vram": false,
    "f16_kv": true,
    "vocab_only": false,
    "use_mmap": true,
    "use_mlock": false,
    "num_thread": 8
  }
}'
```

##### Response

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "The sky is blue because it is the color of the sky.",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 4935886791,
  "load_duration": 534986708,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 107345000,
  "eval_count": 237,
  "eval_duration": 4289432000
}
```
#### Load a model
If an empty prompt is provided, the model will be loaded into memory.
##### Request
```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3"
}'
```
##### Response
A single JSON object is returned:
```json
{
  "model": "llama3",
  "created_at": "2023-12-18T19:52:07.071755Z",
  "response": "",
  "done": true
}
```
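
The same endpoint can also be used to manage residency explicitly: setting `keep_alive` to `0` with an empty prompt should unload the model right after the request. A sketch, assuming the `keep_alive` semantics described in the parameters above:

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "keep_alive": 0
}'
```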

## Generate a chat completion

```shell
POST /api/chat
```

Generate the next message in a chat with a provided model. This is a streaming endpoint, so there will be a series of responses. Streaming can be disabled using `"stream": false`. The final response object will include statistics and additional data from the request.

### Parameters

- `model`: (required) the [model name](#model-names)
- `messages`: the messages of the chat; this can be used to keep a chat memory
- `tools`: tools for the model to use if supported. Requires `stream` to be set to `false`

The `message` object has the following fields:

- `role`: the role of the message, either `system`, `user`, `assistant`, or `tool`
- `content`: the content of the message
- `images` (optional): a list of images to include in the message (for multimodal models such as `llava`)
- `tool_calls` (optional): a list of tools the model wants to use

Advanced parameters (optional):

- `format`: the format to return a response in. Currently the only accepted value is `json`
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)

### Examples

#### Chat Request (Streaming)

##### Request

Send a chat message with a streaming response.

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {
      "role": "user",
      "content": "why is the sky blue?"
    }
  ]
}'
```

##### Response

A stream of JSON objects is returned:

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T08:52:19.385406455-07:00",
  "message": {
    "role": "assistant",
    "content": "The",
    "images": null
  },
  "done": false
}
```
Final response:
```json
{
  "model": "llama3",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "done": true,
  "total_duration": 4883583458,
  "load_duration": 1334875,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 342546000,
  "eval_count": 282,
  "eval_duration": 4535599000
}
```
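
To reassemble the streamed reply into a single string on the command line, the `message.content` of each object can be concatenated; a minimal sketch with `jq` (assuming it is installed; the final object carries no `message`, hence the `// ""` fallback):

```shell
curl -s http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}' | jq -j '.message.content // ""'
```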

#### Chat request (No streaming)

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {
      "role": "user",
      "content": "why is the sky blue?"
    }
  ],
  "stream": false
}'
```
##### Response
```json
{
  "model": "registry.ollama.ai/library/llama3:latest",
"created_at": "2023-12-12T14:13:43.416799Z",
"message": {
"role": "assistant",
"content": "Hello! How are you today?"
},
"done": true,
"total_duration": 5191566416,
"load_duration": 2154458,
"prompt_eval_count": 26,
"prompt_eval_duration": 383809000,
"eval_count": 298,
"eval_duration": 4799921000
}
```
#### Chat request (With History)
Send a chat message with a conversation history. You can use this same approach to start the conversation using multi-shot or chain-of-thought prompting.
##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
"messages": [
{
"role": "user",
"content": "why is the sky blue?"
},
{
"role": "assistant",
"content": "due to rayleigh scattering."
},
{
"role": "user",
"content": "how is that different than mie scattering?"
}
]
}'
```

##### Response

A stream of JSON objects is returned:

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T08:52:19.385406455-07:00",
  "message": {
    "role": "assistant",
    "content": "The"
  },
  "done": false
}
```
Final response:
```json
{
  "model": "llama3",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "done": true,
  "total_duration": 8113331500,
  "load_duration": 6396458,
  "prompt_eval_count": 61,
  "prompt_eval_duration": 398801000,
  "eval_count": 468,
  "eval_duration": 7701267000
}
```

#### Chat request (with images)

##### Request

Send a chat message with images. The images should be provided as an array, with the individual images encoded in Base64.

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llava",
"messages": [
{
"role": "user",
"content": "what is in this image?",
"images": ["iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS4jDSsxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6pSETsEPGqZOndr2uuuR5rF
169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrUwKPqsmVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7
}
]
}'
```
##### Response
```json
{
"model": "llava",
"created_at": "2023-12-13T22:42:50.203334Z",
"message": {
"role": "assistant",
"content": " The image features a cute, little pig with an angry facial expression. It's wearing a heart on its shirt and is waving in the air. This scene appears to be part of a drawing or sketching project.",
"images": null
},
"done": true,
"total_duration": 1668506709,
"load_duration": 1986209,
"prompt_eval_count": 26,
"prompt_eval_duration": 359682000,
"eval_count": 83,
"eval_duration": 1303285000
}
```
#### Chat request (Reproducible outputs)
##### Request
```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
"messages": [
{
"role": "user",
"content": "Hello!"
}
],
"options": {
"seed": 101,
"temperature": 0
}
}'
```
##### Response
```json
{
  "model": "registry.ollama.ai/library/llama3:latest",
"created_at": "2023-12-12T14:13:43.416799Z",
"message": {
"role": "assistant",
"content": "Hello! How are you today?"
},
"done": true,
"total_duration": 5191566416,
"load_duration": 2154458,
"prompt_eval_count": 26,
"prompt_eval_duration": 383809000,
"eval_count": 298,
"eval_duration": 4799921000
}
```

#### Chat request (with tools)

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
"model": "mistral",
"messages": [
{
"role": "user",
"content": "What is the weather today in Paris?"
}
],
"stream": false,
"tools": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The location to get the weather for, e.g. San Francisco, CA"
},
"format": {
"type": "string",
"description": "The format to return the weather in, e.g. 'celsius' or 'fahrenheit'",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location", "format"]
}
}
}
]
}'
```
##### Response
```json
{
"model": "mistral:7b-instruct-v0.3-q4_K_M",
"created_at": "2024-07-22T20:33:28.123648Z",
"message": {
"role": "assistant",
"content": "",
"tool_calls": [
{
"function": {
"name": "get_current_weather",
"arguments": {
"format": "celsius",
"location": "Paris, FR"
}
}
}
]
},
"done_reason": "stop",
"done": true,
"total_duration": 885095291,
"load_duration": 3753500,
"prompt_eval_count": 122,
"prompt_eval_duration": 328493000,
"eval_count": 33,
"eval_duration": 552222000
}
```
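
After executing the requested function client-side, its output can be sent back as a `tool` role message (see the `message` fields above) so the model can compose a final answer. A minimal sketch; the message history shape and the weather value are illustrative placeholders:

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "mistral",
  "messages": [
    {
      "role": "user",
      "content": "What is the weather today in Paris?"
    },
    {
      "role": "tool",
      "content": "22 degrees celsius"
    }
  ],
  "stream": false
}'
```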

## Create a Model

```shell
POST /api/create
```

Create a model from a [`Modelfile`](./modelfile.md). It is recommended to set `modelfile` to the content of the Modelfile rather than just set `path`. This is a requirement for remote create. Remote model creation must also explicitly create any file blobs referenced by fields such as `FROM` and `ADAPTER` using [Create a Blob](#create-a-blob), and set those fields to the path indicated in the blob response.

### Parameters

- `name`: name of the model to create
- `modelfile` (optional): contents of the Modelfile
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
- `path` (optional): path to the Modelfile


### Examples

#### Create a new model

Create a new model from a `Modelfile`.

##### Request

```shell
curl http://localhost:11434/api/create -d '{
  "name": "mario",
  "modelfile": "FROM llama3\nSYSTEM You are mario from Super Mario Bros."
}'
```

##### Response

A stream of JSON objects is returned. Notice that the final JSON object shows `"status": "success"`.

```json
{"status":"reading model metadata"}
{"status":"creating system layer"}
{"status":"using already created layer sha256:22f7f8ef5f4c791c1b03d7eb414399294764d7cc82c7e94aa81a1feb80a983a2"}
{"status":"using already created layer sha256:8c17c2ebb0ea011be9981cc3922db8ca8fa61e828c5d3f44cb6ae342bf80460b"}
{"status":"using already created layer sha256:7c23fb36d80141c4ab8cdbb61ee4790102ebd2bf7aeff414453177d4f2110e5d"}
{"status":"using already created layer sha256:2e0493f67d0c8c9c68a8aeacdf6a38a2151cb3c4c1d42accf296e19810527988"}
{"status":"using already created layer sha256:2759286baa875dc22de5394b4a925701b1896a7e3f8e53275c36f75a877a82c9"}
{"status":"writing layer sha256:df30045fe90f0d750db82a058109cecd6d4de9c90a3d75b19c09e5f64580bb42"}
{"status":"writing layer sha256:f18a68eb09bf925bb1b669490407c1b1251c5db98dc4d3d81f3088498ea55690"}
{"status":"writing manifest"}
{"status":"success"}
```
### Check if a Blob Exists
```shell
HEAD /api/blobs/:digest
```

Ensures that the file blob used for a FROM or ADAPTER field exists on the server. This checks your Ollama server, not Ollama.ai.

#### Query Parameters
- `digest`: the SHA256 digest of the blob
#### Examples
##### Request
```shell
curl -I http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
```
##### Response
Returns 200 OK if the blob exists, 404 Not Found if it does not.

### Create a Blob

```shell
POST /api/blobs/:digest
```

Create a blob from a file on the server. Returns the server file path.

#### Query Parameters

- `digest`: the expected SHA256 digest of the file

#### Examples

##### Request

```shell
curl -T model.bin -X POST http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
```

##### Response

Returns 201 Created if the blob was successfully created, or 400 Bad Request if the digest is not as expected.
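
To compute the digest of a local file before uploading it, a standard checksum tool works; a sketch (GNU coreutils `sha256sum` shown; on macOS, `shasum -a 256` is the equivalent):

```shell
# Prints the SHA256 digest to use in the :digest portion of the URL
sha256sum model.bin
```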

## List Local Models

```shell
GET /api/tags
```

List models that are available locally.

### Examples

#### Request

```shell
curl http://localhost:11434/api/tags
```

#### Response

A single JSON object will be returned.

```json
{
"models": [
{
"name": "codellama:13b",
"modified_at": "2023-11-04T14:56:49.277302595-07:00",
"size": 7365960935,
"digest": "9f438cb9cd581fc025612d27f7c1a6669ff83a8bb0ed86c94fcf4c5440555697",
"details": {
"format": "gguf",
"family": "llama",
"families": null,
"parameter_size": "13B",
"quantization_level": "Q4_0"
}
},
{
      "name": "llama3:latest",
"modified_at": "2023-12-07T09:32:18.757212583-08:00",
"size": 3825819519,
"digest": "fe938a131f40e6f6d40083c9f0f430a515233eb2edaa6d72eb85c50d64f2300e",
"details": {
"format": "gguf",
"family": "llama",
"families": null,
"parameter_size": "7B",
"quantization_level": "Q4_0"
}
}
]
}
```
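
To pull out just the model names, the response can be filtered on the command line; a minimal sketch with `jq` (assuming it is installed):

```shell
curl -s http://localhost:11434/api/tags | jq -r '.models[].name'
```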

## Show Model Information

```shell
POST /api/show
```

Show information about a model, including details, modelfile, template, parameters, license, and system prompt.

### Parameters

- `name`: name of the model to show
- `verbose`: (optional) if set to `true`, returns full data for verbose response fields

### Examples
#### Request

```shell
curl http://localhost:11434/api/show -d '{
  "name": "llama3"
}'
```

#### Response

```json
{
  "modelfile": "# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this one, replace the FROM line with:\n# FROM llava:latest\n\nFROM /Users/matt/.ollama/models/blobs/sha256:200765e1283640ffbd013184bf496e261032fa75b99498a9613be4e94d63ad52\nTEMPLATE \"\"\"{{ .System }}\nUSER: {{ .Prompt }}\nASSISTANT: \"\"\"\nPARAMETER num_ctx 4096\nPARAMETER stop \"\u003c/s\u003e\"\nPARAMETER stop \"USER:\"\nPARAMETER stop \"ASSISTANT:\"",
  "parameters": "num_keep 24\nstop \"<|start_header_id|>\"\nstop \"<|end_header_id|>\"\nstop \"<|eot_id|>\"",
  "template": "{{ if .System }}<|start_header_id|>system<|end_header_id|>\n\n{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>\n\n{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>\n\n{{ .Response }}<|eot_id|>",
  "details": {
    "parent_model": "",
    "format": "gguf",
    "family": "llama",
    "families": [
      "llama"
    ],
    "parameter_size": "8.0B",
    "quantization_level": "Q4_0"
  },
"model_info": {
"general.architecture": "llama",
"general.file_type": 2,
"general.parameter_count": 8030261248,
"general.quantization_version": 2,
"llama.attention.head_count": 32,
"llama.attention.head_count_kv": 8,
"llama.attention.layer_norm_rms_epsilon": 0.00001,
"llama.block_count": 32,
"llama.context_length": 8192,
"llama.embedding_length": 4096,
"llama.feed_forward_length": 14336,
"llama.rope.dimension_count": 128,
"llama.rope.freq_base": 500000,
"llama.vocab_size": 128256,
"tokenizer.ggml.bos_token_id": 128000,
"tokenizer.ggml.eos_token_id": 128009,
"tokenizer.ggml.merges": [], // populates if `verbose=true`
"tokenizer.ggml.model": "gpt2",
"tokenizer.ggml.pre": "llama-bpe",
"tokenizer.ggml.token_type": [], // populates if `verbose=true`
"tokenizer.ggml.tokens": [] // populates if `verbose=true`
}
}
```
## Copy a Model
```shell
POST /api/copy
```

Copy a model. Creates a model with another name from an existing model.

### Examples

#### Request

```shell
curl http://localhost:11434/api/copy -d '{
  "source": "llama3",
  "destination": "llama3-backup"
}'
```

#### Response

Returns a 200 OK if successful, or a 404 Not Found if the source model doesn't exist.

## Delete a Model

```shell
DELETE /api/delete
```

Delete a model and its data.

### Parameters

- `name`: model name to delete

### Examples

#### Request

```shell
curl -X DELETE http://localhost:11434/api/delete -d '{
  "name": "llama3:13b"
}'
```

#### Response

Returns a 200 OK if successful, 404 Not Found if the model to be deleted doesn't exist.

## Pull a Model

```shell
POST /api/pull
```

Download a model from the ollama library. Cancelled pulls are resumed from where they left off, and multiple calls will share the same download progress.

### Parameters

- `name`: name of the model to pull
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pulling from your own library during development.
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects

### Examples

#### Request

```shell
curl http://localhost:11434/api/pull -d '{
  "name": "llama3"
}'
```

#### Response

If `stream` is not specified, or set to `true`, a stream of JSON objects is returned.

The first object is the manifest:
```json
{
"status": "pulling manifest"
}
```
Then there is a series of downloading responses. Until a download is completed, the `completed` key may not be included. The number of files to be downloaded depends on the number of layers specified in the manifest.

```json
{
  "status": "downloading digestname",
  "digest": "digestname",
  "total": 2142590208,
  "completed": 241970
}
```
After all the files are downloaded, the final responses are:
```json
{
"status": "verifying sha256 digest"
}
{
"status": "writing manifest"
}
{
"status": "removing any unused layers"
}
{
"status": "success"
}
```
If `stream` is set to `false`, then the response is a single JSON object:
```json
{
"status": "success"
}
```

## Push a Model

```shell
POST /api/push
```

Upload a model to a model library. Requires registering for ollama.ai and adding a public key first.

### Parameters

- `name`: name of the model to push in the form of `<namespace>/<model>:<tag>`
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pushing to your library during development.
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects

### Examples

#### Request

```shell
curl http://localhost:11434/api/push -d '{
  "name": "mattw/pygmalion:latest"
}'
```

#### Response

If `stream` is not specified, or set to `true`, a stream of JSON objects is returned:

```json
{ "status": "retrieving manifest" }
```

and then:

```json
{
  "status": "starting upload",
  "digest": "sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
  "total": 1928429856
}
```

Then there is a series of uploading responses:

```json
{
  "status": "starting upload",
  "digest": "sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
  "total": 1928429856
}
```

Finally, when the upload is complete:

```json
{"status":"pushing manifest"}
{"status":"success"}
```

If `stream` is set to `false`, then the response is a single JSON object:

```json
{ "status": "success" }
```


## Generate Embeddings

```shell
POST /api/embed
```

Generate embeddings from a model.

### Parameters

- `model`: name of model to generate embeddings from
- `input`: text or list of text to generate embeddings for

Advanced parameters:

- `truncate`: truncates the end of each input to fit within context length. Returns error if `false` and context length is exceeded. Defaults to `true`
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)

### Examples

#### Request

```shell
curl http://localhost:11434/api/embed -d '{
  "model": "all-minilm",
  "input": "Why is the sky blue?"
}'
```

#### Response

```json
{
"model": "all-minilm",
"embeddings": [[
0.010071029, -0.0017594862, 0.05007221, 0.04692972, 0.054916814,
0.008599704, 0.105441414, -0.025878139, 0.12958129, 0.031952348
]]
}
```
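
Each returned vector has as many values as the model's embedding dimension. To inspect the dimensionality from the command line, a minimal sketch with `jq` (assuming it is installed):

```shell
curl -s http://localhost:11434/api/embed -d '{
  "model": "all-minilm",
  "input": "Why is the sky blue?"
}' | jq '.embeddings[0] | length'
```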
#### Request (Multiple input)
```shell
curl http://localhost:11434/api/embed -d '{
"model": "all-minilm",
"input": ["Why is the sky blue?", "Why is the grass green?"]
}'
```
#### Response
```json
{
"model": "all-minilm",
"embeddings": [[
0.010071029, -0.0017594862, 0.05007221, 0.04692972, 0.054916814,
0.008599704, 0.105441414, -0.025878139, 0.12958129, 0.031952348
],[
-0.0098027075, 0.06042469, 0.025257962, -0.006364387, 0.07272725,
0.017194884, 0.09032035, -0.051705178, 0.09951512, 0.09072481
]]
}
```
## List Running Models
```shell
GET /api/ps
```
List models that are currently loaded into memory.
### Examples

#### Request

```shell
curl http://localhost:11434/api/ps
```
#### Response
A single JSON object will be returned.
```json
{
"models": [
{
"name": "mistral:latest",
"model": "mistral:latest",
"size": 5137025024,
"digest": "2ae6f6dd7a3dd734790bbbf58b8909a606e0e7e97e94b7604e0aa7ae4490e6d8",
"details": {
"parent_model": "",
"format": "gguf",
"family": "llama",
"families": [
"llama"
],
"parameter_size": "7.2B",
"quantization_level": "Q4_0"
},
"expires_at": "2024-06-04T14:38:31.83753-07:00",
"size_vram": 5137025024
}
]
}
```

## Generate Embedding

> Note: this endpoint has been superseded by `/api/embed`

```shell
POST /api/embeddings
```

Generate embeddings from a model.

### Parameters

- `model`: name of model to generate embeddings from
- `prompt`: text to generate embeddings for

Advanced parameters:

- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)

### Examples
#### Request
```shell
curl http://localhost:11434/api/embeddings -d '{
"model": "all-minilm",
"prompt": "Here is an article about llamas..."
}'
```
#### Response
```json
{
"embedding": [
0.5670403838157654, 0.009260174818336964, 0.23178744316101074, -0.2916173040866852, -0.8924556970596313,
0.8785552978515625, -0.34576427936553955, 0.5742510557174683, -0.04222835972905159, -0.137906014919281
]
}
```