Commit graph

596 commits

Author           SHA1        Date                        Message
Michael Yang     05e08d2310  2023-07-13 09:37:32 -07:00  return more info in generate response
Michael Yang     31590284a7  2023-07-12 19:21:49 -07:00  fix route
Michael Yang     2666d3c206  2023-07-12 19:07:23 -07:00  fix pull race
Michael Yang     0944b01e7d  2023-07-12 09:55:07 -07:00  pull fixes
Michael Yang     a806b03f62  2023-07-11 14:58:10 -07:00  no errgroup
Michael Yang     948323fa78  2023-07-11 13:50:26 -07:00  rename partial file
Michael Yang     e243329e2e  2023-07-11 13:42:05 -07:00  check api status
Michael Yang     2a66a1164a  2023-07-11 13:42:05 -07:00  common stream producer
Michael Yang     fd4792ec56  2023-07-11 11:59:18 -07:00  call llama.cpp directly from go
Jeffrey Morgan   a3ec1ec2a0  2023-07-10 21:34:15 -07:00  consistent error handling for pull and generate
Michael Yang     edba935d67  2023-07-10 13:30:10 -07:00  return error in generate response
Bruce MacDonald  f5e2e150b8  2023-07-10 20:58:02 +02:00  allow overriding default generate options
Jeffrey Morgan   74e92d1258  2023-07-07 23:46:15 -04:00  add basic / route for server
Bruce MacDonald  f533f85d44  2023-07-07 17:12:02 -04:00  pr feedback
                                                         - move error check to api client pull
                                                         - simplify error check in generate
                                                         - return nil on any pull error
Bruce MacDonald  61dd87bd90  2023-07-07 15:27:43 -04:00  if directory cannot be resolved, do not fail
Bruce MacDonald  b24be8c6b3  2023-07-07 15:13:41 -04:00  update directory url
Michael Yang     053739d19f  2023-07-07 11:01:44 -07:00  no prompt on empty line
Patrick Devine   3f1b7177f2  2023-07-07 09:34:05 -07:00  pass model and predict options
Michael Yang     b0618a466e  2023-07-06 17:07:40 -07:00  generate progress
Michael Yang     c4b9e84945  2023-07-06 17:07:40 -07:00  progress
Michael Yang     15c114decb  2023-07-06 17:03:18 -07:00  fix prompt templates
Michael Yang     0637632258  2023-07-06 16:34:44 -04:00  simple pull response
Michael Yang     dd960d1d5e  2023-07-06 16:34:44 -04:00  update generate response
Bruce MacDonald  d436d51c78  2023-07-06 16:34:44 -04:00  clean up model pull
Bruce MacDonald  c9f45abef3  2023-07-06 16:34:44 -04:00  resumable downloads
Michael Yang     9b8a456c7d  2023-07-06 16:34:44 -04:00  embed templates
Bruce MacDonald  7cf5905063  2023-07-06 16:34:44 -04:00  display pull progress
Michael Yang     580fe8951c  2023-07-06 16:34:44 -04:00  free llama model
Michael Yang     68e6b4550c  2023-07-06 16:34:44 -04:00  use prompt templates
Bruce MacDonald  a6494f8211  2023-07-06 16:34:44 -04:00  pull models
Michael Yang     1b7183c5a1  2023-07-06 16:34:44 -04:00  enable metal gpu acceleration
                                                         ggml-metal.metal must be in the same directory as the ollama binary
                                                         otherwise llama.cpp will not be able to find it and load it.

                                                         1. go generate llama/llama_metal.go
                                                         2. go build .
                                                         3. ./ollama serve
                                                         [see the build sketch after this log]
Jeffrey Morgan   0998d4f0a4  2023-07-06 16:34:44 -04:00  remove debug print statements
Bruce MacDonald  8ea5e5e147  2023-07-06 16:34:44 -04:00  separate routes
Jeffrey Morgan   fd962a36e5  2023-07-06 16:34:44 -04:00  client updates
Jeffrey Morgan   9164981d72  2023-07-06 16:34:44 -04:00  move prompt templates out of python bindings
Jeffrey Morgan   6093a88c1a  2023-07-06 16:34:44 -04:00  add llama.cpp go bindings
Jeffrey Morgan   76cb60d496  2023-07-06 16:34:44 -04:00  wip go engine
                                                         Co-authored-by: Patrick Devine <pdevine@sonic.net>
Jeffrey Morgan   b361fa72ec  2023-06-25 13:08:03 -04:00  reorganize directories
Jeffrey Morgan   d3709f85b5  2023-06-25 00:30:02 -04:00  build server into desktop app
Jeffrey Morgan   369108e1ad  2023-06-24 21:13:26 -04:00  Add dependencies to requirements.txt
Michael Chiang   5142ba2dad  2023-06-23 22:51:54 -04:00  Update README.md
Bruce MacDonald  c5bafaff54  2023-06-23 18:38:22 -04:00  package server with client
Bruce MacDonald  f0eee3faa0  2023-06-23 17:23:30 -04:00  build server executable
Bruce MacDonald  ebec1c61db  2023-06-23 14:47:57 -04:00  load and unload model endpoints
Bruce MacDonald  0758cb2d4b  2023-06-23 13:10:13 -04:00  llama server wrapper
Jeffrey Morgan   8fa91332fa  2023-06-22 18:31:40 -04:00  initial commit
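
Commit 1b7183c5a1 is the only entry above that documents a build procedure, so a runnable version of it is sketched here. The three go and ollama commands are quoted from the commit message itself; the cp step, and the llama/ path of the generated ggml-metal.metal, are assumptions, since the commit only says the file must end up beside the ollama binary.

    # Metal-accelerated build, per commit 1b7183c5a1 (a sketch, not verified).
    go generate llama/llama_metal.go   # compile the llama.cpp bindings with Metal enabled
    go build .                         # produce the ollama binary in the repo root
    cp llama/ggml-metal.metal .        # assumed source path; the file must sit beside ./ollama
    ./ollama serve                     # start the server with GPU acceleration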