Daniel Hiltgen
f397e0e988
Move hub auth out to new package
2024-02-15 05:56:45 +00:00
Daniel Hiltgen
9da9e8fb72
Move Mac App to a new dir
2024-02-15 05:56:45 +00:00
Patrick Devine
42e77e2a69
handle race condition while setting raw mode in Windows (#2509)
2024-02-14 21:28:35 -08:00
Jeffrey Morgan
9241a29336
Revert "Revert "bump submodule to 6c00a06 (#2479)"" (#2485)
This reverts commit 6920964b87.
2024-02-13 18:18:41 -08:00
Jeffrey Morgan
f7231ad9ad
set shutting_down to false once shutdown is complete (#2484)
2024-02-13 17:48:41 -08:00
Jeffrey Morgan
6920964b87
Revert "bump submodule to 6c00a06 (#2479)"
This reverts commit 2f9ed52bbd.
2024-02-13 17:23:05 -08:00
Jeffrey Morgan
2f9ed52bbd
bump submodule to 6c00a06 (#2479)
2024-02-13 17:12:42 -08:00
bnorick
caf2b13c10
Fix infinite keep_alive (#2480)
2024-02-13 15:40:32 -08:00
lebrunel
1d263449ff
Update README.md to include link to Ollama-ex Elixir library (#2477)
2024-02-13 11:40:44 -08:00
Jeffrey Morgan
48a273f80b
Fix issues with templating prompt in chat mode (#2460)
2024-02-12 15:06:57 -08:00
Daniel Hiltgen
939c60473f
Merge pull request #2422 from dhiltgen/better_kill
More robust shutdown
2024-02-12 14:05:06 -08:00
Jeffrey Morgan
f76ca04f9e
update submodule to 099afc6 (#2468)
2024-02-12 14:01:16 -08:00
Daniel Hiltgen
76b8728f0c
Merge pull request #2465 from dhiltgen/block_rocm_pre_9
Detect AMD GPU info via sysfs and block old cards
2024-02-12 12:41:43 -08:00
Jeffrey Morgan
1f9078d6ae
Check image filetype in api handlers (#2467)
2024-02-12 11:16:20 -08:00
Daniel Hiltgen
6d84f07505
Detect AMD GPU info via sysfs and block old cards
This wires up new logic that uses sysfs to discover AMD GPU information and detects older cards we can't yet support, so we can fall back to CPU mode.
2024-02-12 08:19:41 -08:00
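The fallback decision described above can be sketched in Go. Everything here is illustrative, not the project's actual implementation: the real detection parses properties under sysfs (e.g. `/sys/class/kfd/kfd/topology/nodes/*/properties`), and the gfx900 cutoff below is an assumed threshold for "old cards".

```go
package main

import "fmt"

// supportedAMDGPU sketches the block-old-cards check: given a target
// like "gfx906" discovered via sysfs, accept only cards new enough to
// support. The gfx900 cutoff is an illustrative assumption.
func supportedAMDGPU(gfx string) bool {
	var version int
	if _, err := fmt.Sscanf(gfx, "gfx%d", &version); err != nil {
		return false // unparseable target: play it safe, use CPU
	}
	return version >= 900
}

func main() {
	for _, gfx := range []string{"gfx803", "gfx906", "gfx1030"} {
		fmt.Printf("%s supported: %v\n", gfx, supportedAMDGPU(gfx))
	}
}
```

Blocking on an unparseable target means an unknown card falls back to CPU rather than crashing mid-inference.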
Jeffrey Morgan
26b13fc33c
patch: always add token to cache_tokens (#2459)
2024-02-12 08:10:16 -08:00
Jeffrey Morgan
1c8435ffa9
Update domain name references in docs and install script (#2435)
2024-02-09 15:19:30 -08:00
Daniel Hiltgen
6680761596
Shutdown faster
Make sure that when a shutdown signal comes, we shut down quickly instead of waiting for a potentially long exchange to wrap up.
2024-02-08 22:22:50 -08:00
Jeffrey Morgan
42b797ed9c
Update openai.md
2024-02-08 15:03:23 -05:00
Jeffrey Morgan
336aa43f3c
Update openai.md
2024-02-08 12:48:28 -05:00
Daniel Hiltgen
69f392c9b7
Merge pull request #2403 from dhiltgen/handle_tmp_cleanup
Ensure the libraries are present
2024-02-07 17:55:31 -08:00
Daniel Hiltgen
a1dfab43b9
Ensure the libraries are present
When we store our libraries in a temp dir, a reaper might clean them up while we are idle, so make sure to check for them before we reload.
2024-02-07 17:27:49 -08:00
Jeffrey Morgan
a0a199b108
Fix hanging issue when sending empty content (#2399)
2024-02-07 19:30:33 -05:00
Jeffrey Morgan
ab0d37fde4
Update openai.md
2024-02-07 17:25:33 -05:00
Jeffrey Morgan
14e71350c8
Update openai.md
2024-02-07 17:25:24 -05:00
Jeffrey Morgan
453f572f83
Initial OpenAI /v1/chat/completions API compatibility (#2376)
2024-02-07 17:24:29 -05:00
Daniel Hiltgen
c9dfa6e571
Merge pull request #2377 from dhiltgen/bump_llamacpp
Bump llama.cpp to b2081
2024-02-07 12:04:38 -08:00
Michael Yang
3dcbcd367d
Merge pull request #2394 from ollama/mxyng/fix-error-response
2024-02-07 11:47:31 -08:00
Michael Yang
e805ac1d59
fix response on token error
2024-02-07 11:05:49 -08:00
Michael Yang
b9229ffca5
Merge pull request #2378 from ollama/mxyng/runners
runners
2024-02-06 13:49:58 -08:00
Michael Yang
46c847c4ad
enable rocm builds
2024-02-06 13:36:13 -08:00
Michael Yang
92b1a21f79
use linux runners
2024-02-06 13:36:04 -08:00
Daniel Hiltgen
de76b95dd4
Bump llama.cpp to b2081
2024-02-06 12:06:43 -08:00
Michael Yang
59ec837ef6
Merge pull request #2374 from ollama/mxyng/rocm-builds
disable rocm builds
2024-02-06 09:41:02 -08:00
Michael Yang
f06b99a461
disable rocm builds
2024-02-06 09:29:42 -08:00
Bruce MacDonald
128fce5495
docs: keep_alive (#2258)
2024-02-06 11:00:05 -05:00
Daniel Hiltgen
27aa2d4a19
Merge pull request #1849 from mraiser/main
Accommodate split CUDA lib dir
2024-02-05 16:01:16 -08:00
Jeffrey Morgan
b9f91a0b36
Update import instructions to use convert and quantize tooling from llama.cpp submodule (#2247)
2024-02-05 00:50:44 -05:00
Erik S
b538dc3858
Add llm-ollama plugin for Datasette's LLM CLI to README (#2340)
Co-authored-by: Erik Sp <git@aschwa.com>
2024-02-03 15:40:50 -08:00
Jeffrey Morgan
f0e9496c85
Update api.md
2024-02-02 12:17:24 -08:00
Jeffrey Morgan
09a6f76f4c
fix error on ollama run with a non-existent model
2024-02-01 23:11:52 -08:00
Jeffrey Morgan
e135167484
Add multimodal support to ollama run in non-interactive mode (#2317)
2024-02-01 21:33:06 -08:00
Jeffrey Morgan
38296ab352
clear previous images when submitting an image to ollama run (#2316)
2024-02-01 21:30:26 -08:00
Daniel Hiltgen
f43dea68d1
Merge pull request #2318 from dhiltgen/more_clean
Harden generate patching model
2024-02-01 20:41:29 -08:00
Daniel Hiltgen
e1f50377f4
Harden generate patching model
Only apply patches if we have any, and make sure to clean up every file we patched at the end to leave the tree clean.
2024-02-01 19:34:36 -08:00
Jeffrey Morgan
7913104527
Improvements to ollama run for multimodal models (#2300)
2024-02-01 17:09:51 -08:00
Michael Yang
bfbf2f7cf7
Merge pull request #2296 from ollama/mxyng/img-tags
append image tags to user content
2024-02-01 13:16:59 -08:00
Michael Yang
fe3cbd014f
Merge pull request #2298 from ollama/mxyng/debug-prompt
structured debug prompt
2024-02-01 13:16:49 -08:00
Michael Yang
3d6f48507a
structured debug prompt
2024-02-01 11:56:28 -08:00
Michael Yang
f3761405c8
use image id
2024-02-01 11:52:42 -08:00