ollama/llm (last commit: 2024-03-08 00:26:20 -08:00)
| Name | Last commit message | Last updated |
|------|---------------------|--------------|
| ext_server | update llama.cpp submodule to 6cdabe6 (#2999) | 2024-03-08 00:26:20 -08:00 |
| generate | Revamp ROCm support | 2024-03-07 10:36:50 -08:00 |
| llama.cpp@6cdabe6526 | update llama.cpp submodule to 6cdabe6 (#2999) | 2024-03-08 00:26:20 -08:00 |
| patches | update llama.cpp submodule to 6cdabe6 (#2999) | 2024-03-08 00:26:20 -08:00 |
| dyn_ext_server.c | Revamp ROCm support | 2024-03-07 10:36:50 -08:00 |
| dyn_ext_server.go | Revamp ROCm support | 2024-03-07 10:36:50 -08:00 |
| dyn_ext_server.h | Always dynamically load the llm server library | 2024-01-11 08:42:47 -08:00 |
| ggml.go | Convert Safetensors to an Ollama model (#2824) | 2024-03-06 21:01:51 -08:00 |
| gguf.go | Convert Safetensors to an Ollama model (#2824) | 2024-03-06 21:01:51 -08:00 |
| llama.go | use llm.ImageData | 2024-01-31 19:13:48 -08:00 |
| llm.go | Revamp ROCm support | 2024-03-07 10:36:50 -08:00 |
| payload_common.go | Revamp ROCm support | 2024-03-07 10:36:50 -08:00 |
| payload_darwin_amd64.go | Add multiple CPU variants for Intel Mac | 2024-01-17 15:08:54 -08:00 |
| payload_darwin_arm64.go | Add multiple CPU variants for Intel Mac | 2024-01-17 15:08:54 -08:00 |
| payload_linux.go | Revamp ROCm support | 2024-03-07 10:36:50 -08:00 |
| payload_test.go | Fix up the CPU fallback selection | 2024-01-11 15:27:06 -08:00 |
| payload_windows.go | Add multiple CPU variants for Intel Mac | 2024-01-17 15:08:54 -08:00 |
| utils.go | partial decode ggml bin for more info | 2023-08-10 09:23:10 -07:00 |