ollama/llm
Latest commit 2c6e8f5248 by Jeffrey Morgan (2024-01-10 16:48:38 -05:00):

Update submodule to 6efb8eb30e7025b168f3fda3ff83b9b386428ad6 (#1885)

* update submodule to `6efb8eb30e7025b168f3fda3ff83b9b386428ad6`
* unblock condition variable in `update_slots` when closing server
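The second bullet of that commit describes a classic shutdown fix: a server loop blocked in a condition-variable wait must be explicitly woken when the server closes, or it never observes the shutdown and hangs. Below is a minimal Go sketch of the pattern; the `server`/`updateLoop` names and the task queue are illustrative assumptions, not the actual code (the real fix lives in the C++ server loop around `update_slots` in the vendored llama.cpp).

```go
// Sketch of "unblock a condition variable when closing the server".
// All names here are hypothetical; only the pattern mirrors the fix.
package main

import "sync"

type server struct {
	mu      sync.Mutex
	cond    *sync.Cond
	tasks   []int
	closing bool
}

func newServer() *server {
	s := &server{}
	s.cond = sync.NewCond(&s.mu)
	return s
}

func (s *server) updateLoop() {
	s.mu.Lock()
	defer s.mu.Unlock()
	for {
		// Without the closing check, close() could never wake this loop.
		for len(s.tasks) == 0 && !s.closing {
			s.cond.Wait()
		}
		if s.closing {
			return // exit cleanly on shutdown
		}
		s.tasks = s.tasks[1:] // ... process one task ...
	}
}

func (s *server) submit(t int) {
	s.mu.Lock()
	s.tasks = append(s.tasks, t)
	s.mu.Unlock()
	s.cond.Signal()
}

func (s *server) close() {
	s.mu.Lock()
	s.closing = true
	s.mu.Unlock()
	s.cond.Broadcast() // wake any goroutine stuck in Wait()
}

func main() {
	s := newServer()
	done := make(chan struct{})
	go func() { s.updateLoop(); close(done) }()
	s.submit(1)
	s.close() // unblocks the waiting goroutine
	<-done
}
```

The key detail is that `close()` flips the flag under the same mutex before broadcasting, so the predicate re-check in `updateLoop` is guaranteed to observe it.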
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| ext_server | Update submodule to 6efb8eb30e7025b168f3fda3ff83b9b386428ad6 (#1885) | 2024-01-10 16:48:38 -05:00 |
| generate | clean up cmake build directory when cross compiling macOS builds | 2024-01-09 17:13:56 -05:00 |
| llama.cpp@6efb8eb30e | Update submodule to 6efb8eb30e7025b168f3fda3ff83b9b386428ad6 (#1885) | 2024-01-10 16:48:38 -05:00 |
| dynamic_shim.c | Switch windows build to fully dynamic | 2024-01-02 15:36:16 -08:00 |
| dynamic_shim.h | Refactor how we augment llama.cpp | 2024-01-02 15:35:55 -08:00 |
| ext_server_common.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| ext_server_default.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| ext_server_windows.go | fix windows build | 2024-01-08 20:04:01 -05:00 |
| ggml.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| gguf.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| llama.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| llm.go | use runner if cuda alloc won't fit | 2024-01-09 00:44:34 -05:00 |
| shim_darwin.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| shim_ext_server.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| shim_ext_server_linux.go | Code shuffle to clean up the llm dir | 2024-01-04 12:12:05 -08:00 |
| shim_ext_server_windows.go | Code shuffle to clean up the llm dir | 2024-01-04 12:12:05 -08:00 |
| utils.go | partial decode ggml bin for more info | 2023-08-10 09:23:10 -07:00 |
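Several entries above share the commit "Offload layers to GPU based on new model size estimates (#1850)", and llm.go's "use runner if cuda alloc won't fit" makes the complementary fallback decision. A rough, hypothetical Go sketch of that kind of accounting follows; the uniform per-layer split and every name here are assumptions for illustration, not ollama's actual estimator.

```go
// Hypothetical sketch: estimate per-layer memory from total model size,
// then offload as many layers as fit in free VRAM. Not ollama's real code.
package main

import "fmt"

func layersToOffload(modelBytes, freeVRAMBytes int64, totalLayers int) int {
	perLayer := modelBytes / int64(totalLayers) // crude uniform split
	if perLayer <= 0 {
		return totalLayers // degenerate input: everything fits
	}
	fit := int(freeVRAMBytes / perLayer)
	if fit > totalLayers {
		fit = totalLayers // whole model fits on the GPU
	}
	return fit
}

func main() {
	// e.g. a 7 GiB model with 32 layers and 4 GiB of free VRAM
	n := layersToOffload(7<<30, 4<<30, 32)
	fmt.Printf("offloading %d of 32 layers to GPU\n", n) // prints 18
}
```

In this toy version, a result of 0 would correspond to falling back to a CPU-only runner when the CUDA allocation cannot fit.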