ollama/llm
Latest commit: 3269535a4c "Refine handling of shim presence" by Daniel Hiltgen, 2023-12-19 09:05:46 -08:00
This allows the CPU-only builds to work on systems with Radeon cards.
File                 Last commit message                            Date
llama.cpp            Refine build to support CPU only               2023-12-19 09:05:46 -08:00
ext_server.go        Refine build to support CPU only               2023-12-19 09:05:46 -08:00
ggml.go              deprecate ggml                                 2023-12-19 09:05:46 -08:00
gguf.go              remove per-model types                         2023-12-11 09:40:21 -08:00
llama.go             Adapted rocm support to cgo based llama.cpp    2023-12-19 09:05:46 -08:00
llm.go               Refine handling of shim presence               2023-12-19 09:05:46 -08:00
rocm_shim.c          Build linux using ubuntu 20.04                 2023-12-19 09:05:46 -08:00
rocm_shim.h          Adapted rocm support to cgo based llama.cpp    2023-12-19 09:05:46 -08:00
shim_darwin.go       Adapted rocm support to cgo based llama.cpp    2023-12-19 09:05:46 -08:00
shim_ext_server.go   Refine handling of shim presence               2023-12-19 09:05:46 -08:00
utils.go             partial decode ggml bin for more info          2023-08-10 09:23:10 -07:00