Michael Yang
386169205c
update runtime options (#864)
2023-10-20 21:17:14 -04:00
Jeffrey Morgan
7ed5a39bc7
simpler check for model loading compatibility errors
2023-10-19 14:50:49 -04:00
Michael Yang
e1c5be24e7
check json eof
2023-10-19 09:21:51 -07:00
Michael Yang
2ad8a074ac
generate: set created_at
...
move the empty response so it's more visible
2023-10-19 09:21:51 -07:00
Michael Yang
7e547c6833
s/message/error/
2023-10-19 09:21:04 -07:00
Michael Yang
689842b9ff
request: return bad request when model is missing fields
2023-10-19 09:21:04 -07:00
Michael Yang
a19d47642e
models: rm workDir from CreateModel
...
unused after removing EMBED
2023-10-19 09:21:04 -07:00
Bruce MacDonald
fe6f3b48f7
do not reload the running llm when runtime params change (#840)
...
- only reload the running llm if the model has changed, or the options for loading the running model have changed
- rename loaded llm to runner to differentiate from loaded model image
- remove logic which keeps the first system prompt in the generation context
2023-10-19 10:39:58 -04:00
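The reload rule described in the commit body above can be sketched as follows. This is a minimal illustration, not ollama's actual code; the struct and field names (`runner`, `numCtx`, `numGPU`) are assumptions standing in for whatever load-time options the server compares.

```go
package main

import "fmt"

// runner describes the currently loaded llm; field names here are
// hypothetical, not ollama's actual struct.
type runner struct {
	model  string
	numCtx int
	numGPU int
}

// needsReload reports whether the runner must be restarted: only when the
// model or its load-time options differ from what is already running.
// Runtime-only params (temperature, etc.) never trigger a reload.
func needsReload(cur *runner, model string, numCtx, numGPU int) bool {
	return cur == nil ||
		cur.model != model ||
		cur.numCtx != numCtx ||
		cur.numGPU != numGPU
}

func main() {
	cur := &runner{model: "llama2", numCtx: 2048, numGPU: 1}
	fmt.Println(needsReload(cur, "llama2", 2048, 1)) // same model and options: no reload
	fmt.Println(needsReload(cur, "llama2", 4096, 1)) // load option changed: reload
}
```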
Yiorgis Gozadinos
8c6c2cbc8c
When the .ollama folder is broken or there are no models, return an empty list on /api/tags
2023-10-18 08:23:20 +02:00
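The degrade-to-empty-list behavior above can be sketched like this. A hypothetical illustration only: `listModels` and its directory-scanning logic are assumptions, not the project's actual /api/tags handler.

```go
package main

import (
	"fmt"
	"os"
)

// listModels scans a models directory; if the directory is missing or
// unreadable, it degrades to an empty slice instead of returning an
// error, so the caller can always serve valid JSON with a models list.
func listModels(dir string) []string {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return []string{} // broken or missing folder: empty list, not an error
	}
	models := make([]string, 0, len(entries))
	for _, e := range entries {
		if !e.IsDir() {
			models = append(models, e.Name())
		}
	}
	return models
}

func main() {
	// a path that does not exist still yields an empty, non-nil list
	fmt.Println(len(listModels("/nonexistent-ollama-dir")))
}
```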
Michael Yang
1af493c5a0
server: print version on start
2023-10-16 09:59:14 -07:00
Bruce MacDonald
a0c3e989de
deprecate modelfile embed command (#759)
2023-10-16 11:07:37 -04:00
Bruce MacDonald
7804b8fab9
validate api options fields from map (#711)
2023-10-12 11:18:11 -04:00
Bruce MacDonald
274d5a5fdf
optional parameter to not stream response (#639)
...
* update streaming request accept header
* add optional stream param to request bodies
2023-10-11 12:54:27 -04:00
Bruce MacDonald
af4cf55884
not found error before pulling model (#718)
2023-10-06 16:06:20 -04:00
Jay Nakrani
1d0ebe67e8
Document response stream chunk delimiter (#632)
2023-09-29 21:45:52 -07:00
Bruce MacDonald
a1b2d95f96
remove unused push/pull params (#650)
2023-09-29 17:27:19 -04:00
Michael Yang
8608eb4760
prune empty directories
2023-09-27 10:58:09 -07:00
Jeffrey Morgan
9b12a511ca
check other request fields before load short circuit in /api/generate
2023-09-22 23:50:55 -04:00
Bruce MacDonald
5d71bda478
close llm on interrupt (#577)
2023-09-22 19:41:52 +01:00
Michael Yang
82f5b66c01
register HEAD /api/tags
2023-09-21 16:38:03 -07:00
Michael Yang
c986694367
fix HEAD / request
...
HEAD requests should respond like their GET counterparts, except without a response body.
2023-09-21 16:35:58 -07:00
Bruce MacDonald
4cba75efc5
remove tmp directories created by previous servers (#559)
...
* remove tmp directories created by previous servers
* clean up on server stop
* Update routes.go
* Update server/routes.go
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
* create top-level temp ollama dir
* check file exists before creating
---------
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
Co-authored-by: Michael Yang <mxyng@pm.me>
2023-09-21 20:38:49 +01:00
Michael Yang
1fabba474b
refactor default allow origins
...
this should be less error-prone
2023-09-21 09:42:25 -07:00
Bruce MacDonald
1255bc9b45
only package 11.8 runner
2023-09-20 20:00:41 +01:00
Patrick Devine
80dd44e80a
Cmd changes (#541)
2023-09-18 12:26:56 -07:00
Bruce MacDonald
f221637053
first pass at linux gpu support (#454)
...
* linux gpu support
* handle multiple gpus
* add cuda docker image (#488)
---------
Co-authored-by: Michael Yang <mxyng@pm.me>
2023-09-12 11:04:35 -04:00
Patrick Devine
e7e91cd71c
add autoprune to remove unused layers (#491)
2023-09-11 11:46:35 -07:00
Patrick Devine
790d24eb7b
add show command (#474)
2023-09-06 11:04:17 -07:00
Michael Yang
681f3c4c42
fix num_keep
2023-09-03 17:47:49 -04:00
Michael Yang
eeb40a672c
fix list models for windows
2023-08-31 09:47:10 -04:00
Michael Yang
0f541a0367
s/ListResponseModel/ModelResponse/
2023-08-31 09:47:10 -04:00
Bruce MacDonald
42998d797d
subprocess llama.cpp server (#401)
...
* remove c code
* pack llama.cpp
* use request context for llama_cpp
* let llama_cpp decide the number of threads to use
* stop llama runner when app stops
* remove sample count and duration metrics
* use go generate to get libraries
* tmp dir for running llm
2023-08-30 16:35:03 -04:00
Patrick Devine
8bbff2df98
add model IDs (#439)
2023-08-28 20:50:24 -07:00
Michael Yang
95187d7e1e
build release mode
2023-08-22 09:52:43 -07:00
Jeffrey Morgan
a9f6c56652
fix FROM instruction erroring when referring to a file
2023-08-22 09:39:42 -07:00
Ryan Baker
0a892419ad
Strip protocol from model path (#377)
2023-08-21 21:56:56 -07:00
Bruce MacDonald
326de48930
use loaded llm for embeddings
2023-08-15 10:50:54 -03:00
Patrick Devine
d9cf18e28d
add maximum retries when pushing (#334)
2023-08-11 15:41:55 -07:00
Michael Yang
6517bcc53c
Merge pull request #290 from jmorganca/add-adapter-layers
...
implement loading ggml lora adapters through the modelfile
2023-08-10 17:23:01 -07:00
Michael Yang
6a6828bddf
Merge pull request #167 from jmorganca/decode-ggml
...
partial decode ggml bin for more info
2023-08-10 17:22:40 -07:00
Jeffrey Morgan
040a5b9750
clean up cli flags
2023-08-10 09:27:03 -07:00
Michael Yang
6de5d032e1
implement loading ggml lora adapters through the modelfile
2023-08-10 09:23:39 -07:00
Michael Yang
fccf8d179f
partial decode ggml bin for more info
2023-08-10 09:23:10 -07:00
Bruce MacDonald
4b3507f036
embeddings endpoint
...
Co-Authored-By: Jeffrey Morgan <jmorganca@gmail.com>
2023-08-10 11:45:57 -04:00
Bruce MacDonald
868e3b31c7
allow for concurrent pulls of the same files
2023-08-09 11:31:54 -04:00
Bruce MacDonald
09d8bf6730
fix build errors
2023-08-09 10:45:57 -04:00
Bruce MacDonald
7a5f3616fd
embed text document in modelfile
2023-08-09 10:26:19 -04:00
Jeffrey Morgan
cff002b824
use content type application/x-ndjson for streaming responses
2023-08-08 21:38:10 -07:00
Jeffrey Morgan
a027a7dd65
add 0.0.0.0 as an allowed origin by default
...
Fixes #282
2023-08-08 13:39:50 -07:00
Bruce MacDonald
21ddcaa1f1
pr comments
...
- default to embeddings enabled
- move embedding logic for loaded model to request
- allow embedding full directory
- close llm on reload
2023-08-08 13:49:37 -04:00