ollama/llm

Latest commit: 7427fa1387 by Daniel Hiltgen, 2024-01-11 15:27:06 -08:00
Fix up the CPU fallback selection

The memory changes and the multi-variant change had some merge glitches I missed. This fixes them so we actually get the CPU llm lib and the best variant for the given system.
Name                   Last commit message                                                  Last commit date
ext_server             Support multiple variants for a given llm lib type                   2024-01-10 17:27:51 -08:00
generate               Always dynamically load the llm server library                       2024-01-11 08:42:47 -08:00
llama.cpp@328b83de23   revert submodule back to 328b83de23b33240e28f4e74900d1d06726f5eb1    2024-01-10 18:42:39 -05:00
dyn_ext_server.c       Always dynamically load the llm server library                       2024-01-11 08:42:47 -08:00
dyn_ext_server.go      Always dynamically load the llm server library                       2024-01-11 08:42:47 -08:00
dyn_ext_server.h       Always dynamically load the llm server library                       2024-01-11 08:42:47 -08:00
ggml.go                fix lint                                                             2024-01-09 09:36:58 -08:00
gguf.go                Offload layers to GPU based on new model size estimates (#1850)      2024-01-08 16:42:00 -05:00
llama.go               remove unused fields and functions                                   2024-01-09 09:37:40 -08:00
llm.go                 Fix up the CPU fallback selection                                    2024-01-11 15:27:06 -08:00
payload_common.go      Fix up the CPU fallback selection                                    2024-01-11 15:27:06 -08:00
payload_darwin.go      Always dynamically load the llm server library                       2024-01-11 08:42:47 -08:00
payload_linux.go       Always dynamically load the llm server library                       2024-01-11 08:42:47 -08:00
payload_test.go        Fix up the CPU fallback selection                                    2024-01-11 15:27:06 -08:00
payload_windows.go     Always dynamically load the llm server library                       2024-01-11 08:42:47 -08:00
utils.go               partial decode ggml bin for more info                                2023-08-10 09:23:10 -07:00