ollama/llm

Latest commit e02ecfb6c8 by Daniel Hiltgen, 2024-01-27 10:28:38 -08:00:
Merge pull request #2116 from dhiltgen/cc_50_80 ("Add support for CUDA 5.0 cards")
Name                      Last commit                                                           Date
ext_server/               Refine debug logging for llm                                          2024-01-22 12:26:49 -08:00
generate/                 Merge pull request #2116 from dhiltgen/cc_50_80                       2024-01-27 10:28:38 -08:00
llama.cpp @ cd4fddb29f    update submodule to cd4fddb29f81d6a1f6d51a0c016bc6b486d68def          2024-01-25 13:54:11 -08:00
patches/                  Fix clearing kv cache between requests with the same prompt (#2186)  2024-01-25 13:46:20 -08:00
dyn_ext_server.c          Switch to local dlopen symbols                                        2024-01-19 11:37:02 -08:00
dyn_ext_server.go         Fix clearing kv cache between requests with the same prompt (#2186)  2024-01-25 13:46:20 -08:00
dyn_ext_server.h          Always dynamically load the llm server library                        2024-01-11 08:42:47 -08:00
ggml.go                   add max context length check                                          2024-01-12 14:54:07 -08:00
gguf.go                   refactor tensor read                                                  2024-01-24 10:48:31 -08:00
llama.go                  remove unused fields and functions                                    2024-01-09 09:37:40 -08:00
llm.go                    Load all layers on arm64 macOS if model is small enough (#2149)      2024-01-22 17:40:06 -08:00
payload_common.go         use gzip for runner embedding (#2067)                                 2024-01-19 13:23:03 -05:00
payload_darwin_amd64.go   Add multiple CPU variants for Intel Mac                               2024-01-17 15:08:54 -08:00
payload_darwin_arm64.go   Add multiple CPU variants for Intel Mac                               2024-01-17 15:08:54 -08:00
payload_linux.go          Add multiple CPU variants for Intel Mac                               2024-01-17 15:08:54 -08:00
payload_test.go           Fix up the CPU fallback selection                                     2024-01-11 15:27:06 -08:00
payload_windows.go        Add multiple CPU variants for Intel Mac                               2024-01-17 15:08:54 -08:00
utils.go                  partial decode ggml bin for more info                                 2023-08-10 09:23:10 -07:00
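
Several of the commit messages above are terse pointers at concrete techniques; the sketches below unpack a few of them. They are illustrative readings of the commit messages, not ollama's actual code. First, dyn_ext_server.c's "Switch to local dlopen symbols" and dyn_ext_server.h's "Always dynamically load the llm server library": the runner library is opened at runtime with dlopen, and RTLD_LOCAL keeps each variant's symbols out of the global namespace so several builds can coexist in one process. A minimal cgo sketch, with a hypothetical loadServerLibrary helper:

```go
package dyn

/*
#cgo linux LDFLAGS: -ldl
#include <dlfcn.h>
#include <stdlib.h>
*/
import "C"

import (
	"fmt"
	"unsafe"
)

// loadServerLibrary opens a runner shared library at runtime. RTLD_LOCAL
// keeps its symbols out of the global namespace, so differently-built
// variants of the same library do not clash with each other.
func loadServerLibrary(path string) (unsafe.Pointer, error) {
	cpath := C.CString(path)
	defer C.free(unsafe.Pointer(cpath))

	handle := C.dlopen(cpath, C.RTLD_NOW|C.RTLD_LOCAL)
	if handle == nil {
		return nil, fmt.Errorf("dlopen %s: %s", path, C.GoString(C.dlerror()))
	}
	return handle, nil
}
```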
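
gguf.go's "refactor tensor read" and utils.go's "partial decode ggml bin for more info" both revolve around reading just enough of a model file to learn about it without touching the tensor data. A sketch of that partial decode against the fixed GGUF header (magic, version, tensor count, metadata-KV count); the struct mirrors the published GGUF layout, everything else here is illustrative:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"os"
)

// ggufHeader is the fixed-size, little-endian prefix of a GGUF file;
// everything after it (metadata KV pairs, tensor infos) is variable-length
// and deliberately skipped here.
type ggufHeader struct {
	Magic       [4]byte // "GGUF"
	Version     uint32
	TensorCount uint64
	KVCount     uint64
}

func main() {
	f, err := os.Open(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var hdr ggufHeader
	if err := binary.Read(f, binary.LittleEndian, &hdr); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if string(hdr.Magic[:]) != "GGUF" {
		fmt.Fprintln(os.Stderr, "not a GGUF file")
		os.Exit(1)
	}
	fmt.Printf("GGUF v%d: %d tensors, %d metadata keys\n",
		hdr.Version, hdr.TensorCount, hdr.KVCount)
}
```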
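
llm.go's "Load all layers on arm64 macOS if model is small enough (#2149)" is a placement policy: offload the entire model to the GPU when it fits in available memory, otherwise offload only what does. A hedged sketch of that kind of decision; the 1 GiB headroom and the even per-layer split are assumptions, not ollama's accounting:

```go
package sketch

// numGPULayers picks how many layers to offload, given the model size and
// the memory the GPU reports as free. Illustrative only.
func numGPULayers(modelBytes, freeBytes uint64, totalLayers int) int {
	const headroom = 1 << 30 // ~1 GiB reserved for KV cache and scratch (assumed margin)
	if totalLayers <= 0 || freeBytes <= headroom {
		return 0
	}
	usable := freeBytes - headroom
	if modelBytes <= usable {
		return totalLayers // small enough: load every layer on the GPU
	}
	perLayer := modelBytes / uint64(totalLayers) // assume an even split per layer
	return int(usable / perLayer)
}
```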
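
Finally, payload_common.go's "use gzip for runner embedding (#2067)" describes shipping the runner binaries inside the Go executable as gzip-compressed blobs and extracting them at startup. A minimal sketch assuming a go:embed'ed runners/ directory; the file layout and the extractRunner name are hypothetical:

```go
package payload

import (
	"compress/gzip"
	"embed"
	"io"
	"os"
	"path/filepath"
)

// The runner binaries are compiled in as gzip blobs (assumed layout).
//
//go:embed runners/*.gz
var runners embed.FS

// extractRunner decompresses one embedded runner into destDir so it can be
// loaded or executed at runtime, instead of shipping it as a separate file.
func extractRunner(name, destDir string) (string, error) {
	f, err := runners.Open("runners/" + name + ".gz")
	if err != nil {
		return "", err
	}
	defer f.Close()

	zr, err := gzip.NewReader(f)
	if err != nil {
		return "", err
	}
	defer zr.Close()

	dest := filepath.Join(destDir, name)
	out, err := os.OpenFile(dest, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o755)
	if err != nil {
		return "", err
	}
	defer out.Close()

	if _, err := io.Copy(out, zr); err != nil {
		return "", err
	}
	return dest, nil
}
```

Compressing the runners trades a little startup time for a much smaller distributed binary, which is the point #2067's commit message gestures at.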