ollama/llm
File            Last commit date             Last commit message
llama.cpp       2023-11-19 23:20:26 -05:00   enable cpu instructions on intel macs
falcon.go       2023-10-02 19:56:51 -07:00   starcoder
ggml.go         2023-10-23 09:35:49 -07:00   ggufv3
gguf.go         2023-11-08 17:55:46 -08:00   instead of static number of parameters for each model family, get the real number from the tensors (#1022)
llama.go        2023-11-20 10:52:52 -05:00   main-gpu argument is not getting passed to llamacpp, fixed. (#1192)
llm.go          2023-11-09 16:44:02 -08:00   JSON mode: add "format" as an api parameter (#1051)
starcoder.go    2023-10-02 19:56:51 -07:00   starcoder
utils.go        2023-08-10 09:23:10 -07:00   partial decode ggml bin for more info
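The llm.go entry references the JSON-mode "format" API parameter added in #1051. As a minimal sketch (not taken from this directory's code), the snippet below calls Ollama's /api/generate endpoint with "format": "json"; it assumes a local server on the default port 11434 and that a model named "llama2" has already been pulled.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Request body for /api/generate. "format": "json" asks the server for
	// JSON-mode output; model name and prompt here are placeholders.
	body, _ := json.Marshal(map[string]any{
		"model":  "llama2",
		"prompt": "List three colors as a JSON array.",
		"format": "json",
		"stream": false,
	})

	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The non-streaming response carries the generated text in "response".
	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Response)
}
```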