ollama/llm
Daniel Hiltgen ddbfa6fe31 Fix CPU only builds
Go embed doesn't like it when there are no matching files, so put
a dummy placeholder in to allow building without any GPU support.
If no "server" library is found, it is safely ignored at runtime.
2024-01-03 16:08:34 -08:00
llama.cpp                     Fix CPU only builds                                        2024-01-03 16:08:34 -08:00
dynamic_shim.c                Switch windows build to fully dynamic                      2024-01-02 15:36:16 -08:00
dynamic_shim.h                Refactor how we augment llama.cpp                          2024-01-02 15:35:55 -08:00
ext_server_common.go          fix: relay request opts to loaded llm prediction (#1761)   2024-01-03 12:01:42 -05:00
ext_server_default.go         fix: relay request opts to loaded llm prediction (#1761)   2024-01-03 12:01:42 -05:00
ext_server_windows.go         Switch windows build to fully dynamic                      2024-01-02 15:36:16 -08:00
ggml.go                       deprecate ggml                                             2023-12-19 09:05:46 -08:00
gguf.go                       remove per-model types                                     2023-12-11 09:40:21 -08:00
llama.go                      fix: relay request opts to loaded llm prediction (#1761)   2024-01-03 12:01:42 -05:00
llm.go                        Revamp the dynamic library shim                            2023-12-20 14:45:57 -08:00
shim_darwin.go                Switch windows build to fully dynamic                      2024-01-02 15:36:16 -08:00
shim_ext_server.go            Fix CPU only builds                                        2024-01-03 16:08:34 -08:00
shim_ext_server_linux.go      Switch windows build to fully dynamic                      2024-01-02 15:36:16 -08:00
shim_ext_server_windows.go    Switch windows build to fully dynamic                      2024-01-02 15:36:16 -08:00
utils.go                      partial decode ggml bin for more info                      2023-08-10 09:23:10 -07:00