ollama/llm (last commit: 2023-11-20 19:54:04 -05:00)
File          Last commit message                                                                                   Date
llama.cpp     enable cpu instructions on intel macs                                                                 2023-11-19 23:20:26 -05:00
falcon.go     starcoder                                                                                             2023-10-02 19:56:51 -07:00
ggml.go       ggufv3                                                                                                2023-10-23 09:35:49 -07:00
gguf.go       instead of static number of parameters for each model family, get the real number from the tensors (#1022)  2023-11-08 17:55:46 -08:00
llama.go      only set main_gpu if value > 0 is provided                                                            2023-11-20 19:54:04 -05:00
llm.go        recent llama.cpp update added kernels for fp32, q5_0, and q5_1                                        2023-11-20 13:44:31 -08:00
starcoder.go  starcoder                                                                                             2023-10-02 19:56:51 -07:00
utils.go      partial decode ggml bin for more info                                                                 2023-08-10 09:23:10 -07:00