ollama/llm (latest commit: 2024-03-24 11:35:54 -07:00)
ext_server              | llama: remove server static assets (#3174)                                                  | 2024-03-15 19:24:12 -07:00
generate                | Merge pull request #3028 from ollama/ci_release                                             | 2024-03-15 16:40:54 -07:00
llama.cpp@ceca1aef07    | update llama.cpp submodule to ceca1ae (#3064)                                               | 2024-03-11 12:57:48 -07:00
patches                 | fix: clip memory leak                                                                       | 2024-03-14 13:12:42 -07:00
dyn_ext_server.c        | Revamp ROCm support                                                                         | 2024-03-07 10:36:50 -08:00
dyn_ext_server.go       | dyn global                                                                                  | 2024-03-18 09:45:45 +01:00
dyn_ext_server.h        | Always dynamically load the llm server library                                              | 2024-01-11 08:42:47 -08:00
ggla.go                 | refactor readseeker                                                                         | 2024-03-12 12:54:18 -07:00
ggml.go                 | refactor readseeker                                                                         | 2024-03-12 12:54:18 -07:00
gguf.go                 | Merge pull request #3083 from ollama/mxyng/refactor-readseeker                              | 2024-03-16 12:08:56 -07:00
llama.go                | use llm.ImageData                                                                           | 2024-01-31 19:13:48 -08:00
llm.go                  | disable gpu for certain model architectures and fix divide-by-zero on memory estimation     | 2024-03-09 12:51:38 -08:00
payload_common.go       | llm: prevent race appending to slice (#3320)                                                | 2024-03-24 11:35:54 -07:00
payload_darwin_amd64.go | update llama.cpp submodule to 77d1ac7 (#3030)                                               | 2024-03-09 15:55:34 -08:00
payload_darwin_arm64.go | Add multiple CPU variants for Intel Mac                                                     | 2024-01-17 15:08:54 -08:00
payload_linux.go        | Revamp ROCm support                                                                         | 2024-03-07 10:36:50 -08:00
payload_test.go         | Fix up the CPU fallback selection                                                           | 2024-01-11 15:27:06 -08:00
payload_windows.go      | Add multiple CPU variants for Intel Mac                                                     | 2024-01-17 15:08:54 -08:00
utils.go                | partial decode ggml bin for more info                                                       | 2023-08-10 09:23:10 -07:00