ollama/llm (last commit: 2023-09-06 19:56:50 -04:00)
File           Last commit message                               Date
llama.cpp      set minimum CMAKE_OSX_DEPLOYMENT_TARGET to 11.0   2023-09-06 19:56:50 -04:00
ggml.go        add 34b model type                                2023-08-24 10:35:44 -07:00
ggml_llama.go  use osPath in gpu check                           2023-09-05 21:52:21 -04:00
llm.go         subprocess llama.cpp server (#401)                2023-08-30 16:35:03 -04:00
utils.go       partial decode ggml bin for more info             2023-08-10 09:23:10 -07:00