Michael Yang | e805ac1d59 | fix response on token error | 2024-02-07 11:05:49 -08:00
Michael Yang | b9229ffca5 | Merge pull request #2378 from ollama/mxyng/runners | 2024-02-06 13:49:58 -08:00
    runners
Michael Yang | 46c847c4ad | enable rocm builds | 2024-02-06 13:36:13 -08:00
Michael Yang | 92b1a21f79 | use linux runners | 2024-02-06 13:36:04 -08:00
Daniel Hiltgen | de76b95dd4 | Bump llama.cpp to b2081 | 2024-02-06 12:06:43 -08:00
Michael Yang | 59ec837ef6 | Merge pull request #2374 from ollama/mxyng/rocm-builds | 2024-02-06 09:41:02 -08:00
    disable rocm builds
Michael Yang | f06b99a461 | disable rocm builds | 2024-02-06 09:29:42 -08:00
Bruce MacDonald | 128fce5495 | docs: keep_alive (#2258) | 2024-02-06 11:00:05 -05:00
Daniel Hiltgen | 27aa2d4a19 | Merge pull request #1849 from mraiser/main | 2024-02-05 16:01:16 -08:00
    Accommodate split cuda lib dir
Jeffrey Morgan | b9f91a0b36 | Update import instructions to use convert and quantize tooling from llama.cpp submodule (#2247) | 2024-02-05 00:50:44 -05:00
Erik S | b538dc3858 | Add llm-ollama plugin for Datasette's LLM CLI to README (#2340) | 2024-02-03 15:40:50 -08:00
    Co-authored-by: Erik Sp <git@aschwa.com>
Jeffrey Morgan | f0e9496c85 | Update api.md | 2024-02-02 12:17:24 -08:00
Jeffrey Morgan | 09a6f76f4c | fix error on ollama run with a non-existent model | 2024-02-01 23:11:52 -08:00
Jeffrey Morgan | e135167484 | Add multimodal support to ollama run in noninteractive mode (#2317) | 2024-02-01 21:33:06 -08:00
Jeffrey Morgan | 38296ab352 | clear previous images when submitting an image to ollama run (#2316) | 2024-02-01 21:30:26 -08:00
Daniel Hiltgen | f43dea68d1 | Merge pull request #2318 from dhiltgen/more_clean | 2024-02-01 20:41:29 -08:00
    Harden generate patching model
Daniel Hiltgen | e1f50377f4 | Harden generate patching model | 2024-02-01 19:34:36 -08:00
    Only apply patches if we have any, and make sure to clean up every file we patched at the end to leave the tree clean.
Jeffrey Morgan | 7913104527 | Improvements to ollama run for multimodal models (#2300) | 2024-02-01 17:09:51 -08:00
Michael Yang | bfbf2f7cf7 | Merge pull request #2296 from ollama/mxyng/img-tags | 2024-02-01 13:16:59 -08:00
    append image tags to user content
Michael Yang | fe3cbd014f | Merge pull request #2298 from ollama/mxyng/debug-prompt | 2024-02-01 13:16:49 -08:00
    structured debug prompt
Michael Yang | 3d6f48507a | structured debug prompt | 2024-02-01 11:56:28 -08:00
Michael Yang | f3761405c8 | use image id | 2024-02-01 11:52:42 -08:00
Michael Yang | e49dc9f3d8 | fix tests | 2024-02-01 11:48:11 -08:00
Michael Yang | d125510b4b | remove image tags | 2024-02-01 11:32:51 -08:00
Russell Canfield | 1ca386aa9e | Feature - Add Wingman Extension (#2313) | 2024-02-01 11:16:24 -08:00
Michael Yang | fb56988014 | account for image projection in token count | 2024-02-01 09:50:48 -08:00
Michael Yang | d046bee790 | use llm.ImageData for chat | 2024-01-31 19:18:25 -08:00
Jeffrey Morgan | f11bf0740b | use llm.ImageData | 2024-01-31 19:13:48 -08:00
Michael Yang | 8450bf66e6 | trim images | 2024-01-31 19:13:47 -08:00
Michael Yang | b4e11be8ef | append image tags to user content | 2024-01-31 19:13:10 -08:00
Bruce MacDonald | a896079705 | preserve last system message from modelfile (#2289) | 2024-01-31 21:45:01 -05:00
Michael Yang | 583950c828 | Merge pull request #2294 from ollama/mxyng/slog-source | 2024-01-31 15:29:11 -08:00
    update slog handler options
Michael Yang | 8ac08a0eec | update slog handler options | 2024-01-31 15:15:00 -08:00
    - consistent format by using text handler for debug and non-debug
    - truncate source file to just the file name
Michael Yang | 60f47be64c | Merge pull request #2284 from ollama/mxyng/parse-raw | 2024-01-31 09:40:48 -08:00
    remove unnecessary parse raw
Daniel Hiltgen | 6e56077ada | Merge pull request #2263 from dhiltgen/bump_llamacpp | 2024-01-31 08:39:41 -08:00
    Bump llama.cpp to b1999
Hoang Nguyen | 98ae9467bb | Added MindMac to Community Integrations -> Web & Desktop section (#1957) | 2024-01-31 07:48:37 -08:00
Richard Macarthy | b7a24af083 | Add twinny vscode extension to Extensions and Plugins (#1950) | 2024-01-31 06:25:06 -08:00
Michael Yang | c8b1f2369e | remove unnecessary parse raw | 2024-01-30 17:00:53 -08:00
Daniel Hiltgen | 72b12c3be7 | Bump llama.cpp to b1999 | 2024-01-30 16:52:12 -08:00
    This requires an upstream change to support graceful termination, carried as a patch.
Bruce MacDonald | 0632dff3f8 | trim chat prompt based on llm context size (#1963) | 2024-01-30 15:59:29 -05:00
Maximilian Weber | 509e2dec8a | Update README.md (#2252) | 2024-01-30 11:56:51 -08:00
    Added [Ollama for R - rollama](https://github.com/JBGruber/rollama) under Libraries in README.md
Daniel Hiltgen | 78a48de804 | Merge pull request #2256 from dhiltgen/container_logs | 2024-01-30 08:12:48 -08:00
    Add container hints for troubleshooting
Daniel Hiltgen | e7dbb00331 | Add container hints for troubleshooting | 2024-01-29 08:53:41 -08:00
    Some users are new to containers and unsure where the server logs go.
Marc Raiser | c3f9538636 | remove default.nix | 2024-01-29 00:05:07 -05:00
Jeffrey Morgan | 2e06ed01d5 | remove unknown CPPFLAGS option | 2024-01-28 17:51:23 -08:00
Daniel Hiltgen | 4072b5879b | Merge pull request #2246 from dhiltgen/reject_cuda_without_avx | 2024-01-28 16:26:55 -08:00
    Don't disable GPUs on arm without AVX
Daniel Hiltgen | 15562e887d | Don't disable GPUs on arm without AVX | 2024-01-28 15:22:38 -08:00
    AVX is an x86 feature, so ARM should be excluded from the check.
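The fix described above amounts to gating the AVX requirement on CPU architecture: AVX only exists on x86, so an ARM host should never fail the check for lacking it. A small sketch of that gating, with illustrative names rather than ollama's real functions:

```go
package main

import "fmt"

// avxApplies reports whether an AVX requirement makes sense for the
// given GOARCH value: AVX is an x86 feature, so only x86 targets
// should check for it.
func avxApplies(goarch string) bool {
	return goarch == "amd64" || goarch == "386"
}

// gpuAllowed decides whether to keep GPU support enabled. Before the
// fix, a missing AVX flag disabled GPUs even on ARM, where the flag
// can never be present.
func gpuAllowed(goarch string, hasAVX bool) bool {
	if !avxApplies(goarch) {
		return true // ARM and others: AVX does not apply
	}
	return hasAVX
}

func main() {
	fmt.Println(gpuAllowed("arm64", false)) // ARM without AVX stays enabled
	fmt.Println(gpuAllowed("amd64", false)) // x86 without AVX is rejected
}
```

In the real code the architecture comes from the build target (runtime.GOARCH) rather than a parameter; it is passed explicitly here to keep the sketch testable.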
Jeffrey Morgan | f2245c7c77 | print prompt with OLLAMA_DEBUG=1 (#2245) | 2024-01-28 15:22:35 -08:00
Jeffrey Morgan | e4b9b72f2a | Do not repeat system prompt for chat templating (#2241) | 2024-01-28 14:15:56 -08:00
Daniel Hiltgen | 311f8e0c3f | Merge pull request #2243 from dhiltgen/harden_zero_gpus | 2024-01-28 13:30:44 -08:00
    Harden for zero detected GPUs