ollama/llm

Latest commit: 76e5d9ec88 "Omit build date from gzip headers"
Author: Bernhard M. Wiedemann
Date: 2024-02-29 16:48:19 +01:00

See https://reproducible-builds.org/ for why this is good.
This patch was done while working on reproducible builds for openSUSE.
Name                      Last commit message                                  Date
ext_server                update llama.cpp submodule to 66c1968f7 (#2618)      2024-02-20 17:42:31 -05:00
generate                  Omit build date from gzip headers                    2024-02-29 16:48:19 +01:00
llama.cpp@b11a93df41      Bump llama.cpp to b2276                              2024-02-26 16:49:24 -08:00
patches                   update llama.cpp submodule to 66c1968f7 (#2618)      2024-02-20 17:42:31 -05:00
dyn_ext_server.c          Switch to local dlopen symbols                       2024-01-19 11:37:02 -08:00
dyn_ext_server.go         update llama.cpp submodule to 66c1968f7 (#2618)      2024-02-20 17:42:31 -05:00
dyn_ext_server.h          Always dynamically load the llm server library       2024-01-11 08:42:47 -08:00
ggml.go                   add gguf file types (#2532)                          2024-02-20 19:06:29 -05:00
gguf.go                   add gguf file types (#2532)                          2024-02-20 19:06:29 -05:00
llama.go                  use llm.ImageData                                    2024-01-31 19:13:48 -08:00
llm.go                    Ensure the libraries are present                     2024-02-07 17:27:49 -08:00
payload_common.go         Detect AMD GPU info via sysfs and block old cards    2024-02-12 08:19:41 -08:00
payload_darwin_amd64.go   Add multiple CPU variants for Intel Mac              2024-01-17 15:08:54 -08:00
payload_darwin_arm64.go   Add multiple CPU variants for Intel Mac              2024-01-17 15:08:54 -08:00
payload_linux.go          Add multiple CPU variants for Intel Mac              2024-01-17 15:08:54 -08:00
payload_test.go           Fix up the CPU fallback selection                    2024-01-11 15:27:06 -08:00
payload_windows.go        Add multiple CPU variants for Intel Mac              2024-01-17 15:08:54 -08:00