42b797ed9c | Jeffrey Morgan | 2024-02-08 15:03:23 -05:00
    Update openai.md

336aa43f3c | Jeffrey Morgan | 2024-02-08 12:48:28 -05:00
    Update openai.md

69f392c9b7 | Daniel Hiltgen | 2024-02-07 17:55:31 -08:00
    Merge pull request #2403 from dhiltgen/handle_tmp_cleanup
    Ensure the libraries are present

a1dfab43b9 | Daniel Hiltgen | 2024-02-07 17:27:49 -08:00
    Ensure the libraries are present
    When we store our libraries in a temp dir, a reaper might clean
    them when we are idle, so make sure to check for them before
    we reload.

a0a199b108 | Jeffrey Morgan | 2024-02-07 19:30:33 -05:00
    Fix hanging issue when sending empty content (#2399)

ab0d37fde4 | Jeffrey Morgan | 2024-02-07 17:25:33 -05:00
    Update openai.md

14e71350c8 | Jeffrey Morgan | 2024-02-07 17:25:24 -05:00
    Update openai.md

453f572f83 | Jeffrey Morgan | 2024-02-07 17:24:29 -05:00
    Initial OpenAI /v1/chat/completions API compatibility (#2376)

c9dfa6e571 | Daniel Hiltgen | 2024-02-07 12:04:38 -08:00
    Merge pull request #2377 from dhiltgen/bump_llamacpp
    Bump llama.cpp to b2081

3dcbcd367d | Michael Yang | 2024-02-07 11:47:31 -08:00
    Merge pull request #2394 from ollama/mxyng/fix-error-response

e805ac1d59 | Michael Yang | 2024-02-07 11:05:49 -08:00
    fix response on token error

b9229ffca5 | Michael Yang | 2024-02-06 13:49:58 -08:00
    Merge pull request #2378 from ollama/mxyng/runners
    runners

46c847c4ad | Michael Yang | 2024-02-06 13:36:13 -08:00
    enable rocm builds

92b1a21f79 | Michael Yang | 2024-02-06 13:36:04 -08:00
    use linux runners

de76b95dd4 | Daniel Hiltgen | 2024-02-06 12:06:43 -08:00
    Bump llama.cpp to b2081

59ec837ef6 | Michael Yang | 2024-02-06 09:41:02 -08:00
    Merge pull request #2374 from ollama/mxyng/rocm-builds
    disable rocm builds

f06b99a461 | Michael Yang | 2024-02-06 09:29:42 -08:00
    disable rocm builds

128fce5495 | Bruce MacDonald | 2024-02-06 11:00:05 -05:00
    docs: keep_alive (#2258)
27aa2d4a19 | Daniel Hiltgen | 2024-02-05 16:01:16 -08:00
    Merge pull request #1849 from mraiser/main
    Accommodate split cuda lib dir
b9f91a0b36 | Jeffrey Morgan | 2024-02-05 00:50:44 -05:00
    Update import instructions to use convert and quantize tooling from llama.cpp submodule (#2247)

b538dc3858 | Erik S | 2024-02-03 15:40:50 -08:00
    Add llm-ollama plugin for Datasette's LLM CLI to README (#2340)
    Co-authored-by: Erik Sp <git@aschwa.com>

f0e9496c85 | Jeffrey Morgan | 2024-02-02 12:17:24 -08:00
    Update api.md

09a6f76f4c | Jeffrey Morgan | 2024-02-01 23:11:52 -08:00
    fix error on ollama run with a non-existent model
e135167484 | Jeffrey Morgan | 2024-02-01 21:33:06 -08:00
    Add multimodal support to ollama run in noninteractive mode (#2317)
38296ab352 | Jeffrey Morgan | 2024-02-01 21:30:26 -08:00
    clear previous images when submitting an image to ollama run (#2316)

f43dea68d1 | Daniel Hiltgen | 2024-02-01 20:41:29 -08:00
    Merge pull request #2318 from dhiltgen/more_clean
    Harden generate patching model
e1f50377f4 | Daniel Hiltgen | 2024-02-01 19:34:36 -08:00
    Harden generate patching model
    Only apply patches if we have any, and make sure to clean up
    every file we patched at the end to leave the tree clean
7913104527 | Jeffrey Morgan | 2024-02-01 17:09:51 -08:00
    Improvements to ollama run for multimodal models (#2300)

bfbf2f7cf7 | Michael Yang | 2024-02-01 13:16:59 -08:00
    Merge pull request #2296 from ollama/mxyng/img-tags
    append image tags to user content

fe3cbd014f | Michael Yang | 2024-02-01 13:16:49 -08:00
    Merge pull request #2298 from ollama/mxyng/debug-prompt
    structured debug prompt

3d6f48507a | Michael Yang | 2024-02-01 11:56:28 -08:00
    structured debug prompt

f3761405c8 | Michael Yang | 2024-02-01 11:52:42 -08:00
    use image id

e49dc9f3d8 | Michael Yang | 2024-02-01 11:48:11 -08:00
    fix tests

d125510b4b | Michael Yang | 2024-02-01 11:32:51 -08:00
    remove image tags

1ca386aa9e | Russell Canfield | 2024-02-01 11:16:24 -08:00
    Feature - Add Wingman Extension (#2313)

fb56988014 | Michael Yang | 2024-02-01 09:50:48 -08:00
    account for image projection in token count

d046bee790 | Michael Yang | 2024-01-31 19:18:25 -08:00
    use llm.ImageData for chat

f11bf0740b | Jeffrey Morgan | 2024-01-31 19:13:48 -08:00
    use llm.ImageData

8450bf66e6 | Michael Yang | 2024-01-31 19:13:47 -08:00
    trim images

b4e11be8ef | Michael Yang | 2024-01-31 19:13:10 -08:00
    append image tags to user content

a896079705 | Bruce MacDonald | 2024-01-31 21:45:01 -05:00
    preserve last system message from modelfile (#2289)

583950c828 | Michael Yang | 2024-01-31 15:29:11 -08:00
    Merge pull request #2294 from ollama/mxyng/slog-source
    update slog handler options

8ac08a0eec | Michael Yang | 2024-01-31 15:15:00 -08:00
    update slog handler options
    - consistent format by using text handler for debug and non-debug
    - truncate source file to just the file name

60f47be64c | Michael Yang | 2024-01-31 09:40:48 -08:00
    Merge pull request #2284 from ollama/mxyng/parse-raw
    remove unnecessary parse raw

6e56077ada | Daniel Hiltgen | 2024-01-31 08:39:41 -08:00
    Merge pull request #2263 from dhiltgen/bump_llamacpp
    Bump llama.cpp to b1999

98ae9467bb | Hoang Nguyen | 2024-01-31 07:48:37 -08:00
    Added MindMac to Community Integrations -> Web & Desktop section (#1957)

b7a24af083 | Richard Macarthy | 2024-01-31 06:25:06 -08:00
    Add twinny vscode extension to Extensions and Plugins (#1950)

c8b1f2369e | Michael Yang | 2024-01-30 17:00:53 -08:00
    remove unnecessary parse raw

72b12c3be7 | Daniel Hiltgen | 2024-01-30 16:52:12 -08:00
    Bump llama.cpp to b1999
    This requires an upstream change to support graceful termination,
    carried as a patch.

0632dff3f8 | Bruce MacDonald | 2024-01-30 15:59:29 -05:00
    trim chat prompt based on llm context size (#1963)