Jeffrey Morgan
f5ca7f8c8e
add license in file header for vendored llama.cpp code ( #3351 )
2024-03-26 16:23:23 -04:00
Jeffrey Morgan
856b8ec131
remove need for $VSINSTALLDIR since build will fail if ninja cannot be found ( #3350 )
2024-03-26 16:23:16 -04:00
Patrick Devine
1b272d5bcd
change github.com/jmorganca/ollama to github.com/ollama/ollama ( #3347 )
2024-03-26 13:04:17 -07:00
Christophe Dervieux
29715dbca7
malformed markdown link ( #3358 )
2024-03-26 10:46:36 -04:00
Daniel Hiltgen
54a028d07f
Merge pull request #3356 from dhiltgen/fix_arm_linux
...
Switch runner for final release job
2024-03-25 20:54:46 -07:00
Daniel Hiltgen
f83e4db365
Switch runner for final release job
...
The manifest and tagging step use a lot of disk space
2024-03-25 20:51:40 -07:00
Daniel Hiltgen
3b5866a233
Merge pull request #3353 from dhiltgen/fix_arm_linux
...
Use Rocky Linux Vault to get GCC 10.2 installed
2024-03-25 19:38:56 -07:00
Daniel Hiltgen
b8c2be6142
Use Rocky Linux Vault to get GCC 10.2 installed
...
This should hopefully only be a temporary workaround until Rocky 8
picks up GCC 10.4, which fixes the NVCC bug.
2024-03-25 19:18:50 -07:00
Daniel Hiltgen
e0319bd78d
Revert "Switch arm cuda base image to centos 7"
...
This reverts commit 5dacc1ebe8.
2024-03-25 19:01:11 -07:00
Daniel Hiltgen
b31ed7f031
Merge pull request #3352 from dhiltgen/fix_arm_linux
...
Switch arm cuda base image to centos 7
2024-03-25 16:13:10 -07:00
Daniel Hiltgen
5dacc1ebe8
Switch arm cuda base image to centos 7
...
We had started using Rocky Linux 8, but they've updated to GCC 10.3,
which breaks NVCC. 10.2 is compatible (or 10.4, but that's not
available from Rocky Linux 8 repos yet).
2024-03-25 15:57:08 -07:00
Daniel Hiltgen
c2712b5566
Merge pull request #3348 from dhiltgen/bump_llamacpp
...
Bump llama.cpp to b2527
2024-03-25 14:15:53 -07:00
Daniel Hiltgen
8091ef2eeb
Bump llama.cpp to b2527
2024-03-25 13:47:44 -07:00
Jeffrey Morgan
f38b705dc7
Fix ROCm link in development.md
2024-03-25 16:32:44 -04:00
Daniel Hiltgen
560be5e0b6
Merge pull request #3308 from dhiltgen/bump_more
...
Bump llama.cpp to b2510
2024-03-25 12:56:12 -07:00
Daniel Hiltgen
4a1c76b3aa
Merge pull request #3331 from dhiltgen/integration_testing
...
Integration tests conditionally pull
2024-03-25 12:48:51 -07:00
Daniel Hiltgen
28a64e23ca
Merge pull request #2279 from remy415/main
...
Add support for libcudart.so for CUDA devices (Adds Jetson support)
2024-03-25 12:46:28 -07:00
Niclas Pahlfer
92d74e2f59
adds ooo to community integrations ( #1623 )
...
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2024-03-25 15:08:33 -04:00
Herval Freire
6f8f57dd1d
Add cliobot to ollama supported list ( #1873 )
...
* Update README.md
* Update README.md
---------
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2024-03-25 15:07:19 -04:00
Chenhe Gu
b2fa68b0ea
Add Dify.AI to community integrations ( #1944 )
...
Dify.AI is a model-agnostic LLMOps platform for building and managing LLM applications.
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2024-03-25 15:06:39 -04:00
Marco Antônio
3767d5ef0d
enh: add ollero.nvim to community applications ( #1905 )
...
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2024-03-25 15:06:08 -04:00
Ani Betts
9fed85bc8b
Add typechat-cli to Terminal apps ( #2428 )
...
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2024-03-25 15:05:04 -04:00
Miguel
4501bc0913
add new Web & Desktop link in readme for alpaca webui ( #2881 )
...
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2024-03-25 15:00:18 -04:00
Danny Avila
57ba519e63
Add LibreChat to Web & Desktop Apps ( #2918 )
2024-03-25 14:59:18 -04:00
enoch1118
d98d322d24
Add Community Integration: OllamaGUI ( #2927 )
...
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2024-03-25 14:58:28 -04:00
fly2tomato
0c3ec74cf1
Add Community Integration: OpenAOE ( #2946 )
...
* Update README.md
* Update README.md
---------
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2024-03-25 14:57:40 -04:00
tusharhero
42ae8359fa
docs: Add AI telegram to Community Integrations. ( #3033 )
2024-03-25 14:56:42 -04:00
Timothy Carambat
e4b76dfb76
docs: Add AnythingLLM to README as integration option ( #3145 )
...
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2024-03-25 14:54:48 -04:00
Jikku Jose
2c56517494
Add Saddle ( #3178 )
2024-03-25 14:54:09 -04:00
Yusuf Can Bayrak
cfbc1b152b
tlm added to README.md terminal section. ( #3274 )
2024-03-25 14:53:26 -04:00
RAPID ARCHITECT
9305ac1b2e
Update README.md ( #3288 )
...
Added Ollama Basic chat based on hyperdiv
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2024-03-25 14:52:25 -04:00
drazdra
45d6292959
Update README.md ( #3338 )
...
adding drazdra/ollama-chats to the list of UIs :)
2024-03-25 14:50:51 -04:00
Blake Mizerany
22921a3969
doc: specify ADAPTER is optional ( #3333 )
2024-03-25 09:43:19 -07:00
Daniel Hiltgen
7b6cbc10ec
Integration tests conditionally pull
...
If images aren't present, pull them.
Also fixes the expected responses
2024-03-25 08:57:45 -07:00
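The commit above has the integration tests pull model images only when they aren't already present. A rough Go sketch of that check-then-pull pattern against the Ollama REST endpoints (/api/tags to list local models, /api/pull to fetch one); the helper names, the hard-coded localhost address, and the example model are illustrative assumptions, not the test suite's actual code:

```go
// Sketch of "pull only if missing" against the Ollama HTTP API.
// Names and the localhost URL are illustrative assumptions.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

type tagsResponse struct {
	Models []struct {
		Name string `json:"name"`
	} `json:"models"`
}

// haveModel reports whether the named model already exists locally.
func haveModel(base, name string) (bool, error) {
	resp, err := http.Get(base + "/api/tags")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var tags tagsResponse
	if err := json.NewDecoder(resp.Body).Decode(&tags); err != nil {
		return false, err
	}
	for _, m := range tags.Models {
		if m.Name == name {
			return true, nil
		}
	}
	return false, nil
}

// pullModel asks the server to download the model; the response is a
// stream of progress JSON objects, which this sketch simply drains.
func pullModel(base, name string) error {
	body, _ := json.Marshal(map[string]string{"name": name})
	resp, err := http.Post(base+"/api/pull", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = io.Copy(io.Discard, resp.Body)
	return err
}

func main() {
	const base, model = "http://127.0.0.1:11434", "orca-mini"
	have, err := haveModel(base, model)
	if err != nil {
		panic(err)
	}
	if !have {
		if err := pullModel(base, model); err != nil {
			panic(err)
		}
	}
	fmt.Println("model ready:", model)
}
```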
Jeremy
dfc6721b20
add support for libcudart.so for CUDA devices (adds Jetson support)
2024-03-25 11:07:44 -04:00
Blake Mizerany
acfa2b9422
llm: prevent race appending to slice ( #3320 )
2024-03-24 11:35:54 -07:00
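The race fixed above comes from appending to a shared slice from multiple goroutines, which is unsafe in Go without synchronization. A generic illustration of the usual remedy, guarding the append with a mutex; this is not the actual llm package code:

```go
// Generic illustration of preventing a data race on slice appends;
// not the actual llm package code.
package main

import (
	"fmt"
	"sync"
)

type collector struct {
	mu    sync.Mutex
	items []string
}

// add appends under the lock so concurrent callers cannot corrupt the
// slice header or lose writes.
func (c *collector) add(s string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items = append(c.items, s)
}

func main() {
	var c collector
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			c.add(fmt.Sprintf("item-%d", i))
		}(i)
	}
	wg.Wait()
	fmt.Println(len(c.items)) // always 100 with the lock in place
}
```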
Daniel Hiltgen
2c390a73ac
Merge pull request #3282 from dhiltgen/gpu_docs
...
Add docs for GPU selection and nvidia uvm workaround
2024-03-24 19:15:03 +01:00
Daniel Hiltgen
3e30c75f3e
Bump llama.cpp to b2510
2024-03-23 19:55:56 +01:00
Eddú Meléndez Gonzales
7e430ff352
Add Testcontainers into Libraries section ( #3291 )
...
Testcontainers provides a module for Ollama.
2024-03-23 19:55:25 +01:00
Daniel Hiltgen
1784113ef5
Merge pull request #3309 from dhiltgen/integration_testing
...
Revamp go based integration tests
2024-03-23 19:08:49 +01:00
Daniel Hiltgen
949b6c01e0
Revamp go based integration tests
...
This uplevels the integration tests to run the server, which allows
testing against an existing server or a remote server.
2024-03-23 14:24:18 +01:00
jmorganca
38daf0a252
rename .gitattributes
2024-03-23 12:40:31 +01:00
Daniel Hiltgen
43799532c1
Bump llama.cpp to b2474
...
The release just before the ggml-cuda.cu refactoring
2024-03-23 09:54:56 +01:00
Daniel Hiltgen
d8fdbfd8da
Add docs for GPU selection and nvidia uvm workaround
2024-03-21 11:52:54 +01:00
Bruce MacDonald
a5ba0fcf78
doc: faq gpu compatibility ( #3142 )
2024-03-21 05:21:34 -04:00
Jeffrey Morgan
3a30bf56dc
Update faq.md
2024-03-20 17:48:39 +01:00
Daniel Hiltgen
a1c0a48524
Merge pull request #3122 from dhiltgen/better_tmp_cleanup
...
Better tmpdir cleanup
2024-03-20 16:28:03 +01:00
Daniel Hiltgen
74788b487c
Better tmpdir cleanup
...
If expanding the runners fails, don't leave a corrupt/incomplete payloads dir.
We now write a pid file out to the tmpdir, which allows us to scan for stale tmpdirs
and remove them as long as there isn't still a process running.
2024-03-20 16:03:19 +01:00
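The pid-file mechanism described in this commit can be sketched roughly as follows: record the owning process id inside the payloads tmpdir, then on later runs scan for old dirs and remove those whose recorded process has exited. The directory prefix, pid filename, and Unix-only liveness check are assumptions for illustration, not the actual implementation:

```go
// Sketch of pid-file based stale tmpdir cleanup, as described in the
// commit above. Directory and file names are illustrative assumptions.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
	"syscall"
)

// writePidFile records the current process id inside the payloads dir.
func writePidFile(dir string) error {
	pid := strconv.Itoa(os.Getpid())
	return os.WriteFile(filepath.Join(dir, "ollama.pid"), []byte(pid), 0o644)
}

// cleanupStale scans the temp root for old payload dirs and removes any
// whose recorded process is no longer running.
func cleanupStale(tmpRoot, prefix string) {
	entries, err := os.ReadDir(tmpRoot)
	if err != nil {
		return
	}
	for _, e := range entries {
		if !e.IsDir() || !strings.HasPrefix(e.Name(), prefix) {
			continue
		}
		dir := filepath.Join(tmpRoot, e.Name())
		data, err := os.ReadFile(filepath.Join(dir, "ollama.pid"))
		if err != nil {
			continue // no pid file: skip rather than risk removing a live dir
		}
		pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
		if err != nil || processAlive(pid) {
			continue
		}
		os.RemoveAll(dir) // stale: no process owns it anymore
	}
}

// processAlive reports whether a process with the given pid still exists
// (signal 0 only checks existence/permission on Unix).
func processAlive(pid int) bool {
	proc, err := os.FindProcess(pid)
	if err != nil {
		return false
	}
	return proc.Signal(syscall.Signal(0)) == nil
}

func main() {
	dir, _ := os.MkdirTemp("", "ollama-payloads-")
	_ = writePidFile(dir)
	cleanupStale(os.TempDir(), "ollama-payloads-")
	fmt.Println("current payloads dir:", dir)
}
```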
Jeffrey Morgan
7ed3e94105
Update faq.md
2024-03-18 10:24:39 +01:00
jmorganca
2297ad39da
update faq.md
2024-03-18 10:17:59 +01:00