ollama/llm
Latest commit abec7f06e5 by Daniel Hiltgen (2024-01-18 14:27:24 -08:00)
Merge pull request #2056 from dhiltgen/slog: Mechanical switch from log to slog
ext_server               Add multiple CPU variants for Intel Mac           2024-01-17 15:08:54 -08:00
generate                 Merge pull request #1987 from xyproto/archlinux   2024-01-18 13:32:10 -08:00
llama.cpp@584d674be6     Bump llama.cpp to b1842 and add new cuda lib dep  2024-01-16 12:53:52 -08:00
dyn_ext_server.c         Add multiple CPU variants for Intel Mac           2024-01-17 15:08:54 -08:00
dyn_ext_server.go        Mechanical switch from log to slog                2024-01-18 14:12:57 -08:00
dyn_ext_server.h         Always dynamically load the llm server library    2024-01-11 08:42:47 -08:00
ggml.go                  add max context length check                      2024-01-12 14:54:07 -08:00
gguf.go                  add max context length check                      2024-01-12 14:54:07 -08:00
llama.go                 remove unused fields and functions                2024-01-09 09:37:40 -08:00
llm.go                   Mechanical switch from log to slog                2024-01-18 14:12:57 -08:00
payload_common.go        Mechanical switch from log to slog                2024-01-18 14:12:57 -08:00
payload_darwin_amd64.go  Add multiple CPU variants for Intel Mac           2024-01-17 15:08:54 -08:00
payload_darwin_arm64.go  Add multiple CPU variants for Intel Mac           2024-01-17 15:08:54 -08:00
payload_linux.go         Add multiple CPU variants for Intel Mac           2024-01-17 15:08:54 -08:00
payload_test.go          Fix up the CPU fallback selection                 2024-01-11 15:27:06 -08:00
payload_windows.go       Add multiple CPU variants for Intel Mac           2024-01-17 15:08:54 -08:00
utils.go                 partial decode ggml bin for more info             2023-08-10 09:23:10 -07:00