ollama/llm
Latest commit 42998d797d by Bruce MacDonald
subprocess llama.cpp server (#401)
* remove c code
* pack llama.cpp
* use request context for llama_cpp
* let llama_cpp decide the number of threads to use
* stop llama runner when app stops
* remove sample count and duration metrics
* use go generate to get libraries
* tmp dir for running llm
2023-08-30 16:35:03 -04:00
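The commit bullets above describe the new approach: fetch and build the llama.cpp libraries via go generate, run the server binary as a subprocess from a temporary directory, and tie its lifetime to the request context so the runner stops when the app does. Below is a minimal Go sketch of that subprocess pattern; the function name startRunner, the binary path, and the command-line flags are hypothetical placeholders, not the actual ollama API.

	// Hypothetical sketch: launch a llama.cpp server binary as a subprocess
	// from a temp dir and stop it when the context is cancelled.
	package main

	import (
		"context"
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// startRunner starts the server from a temporary working directory.
	// Cancelling ctx kills the subprocess, so the runner cannot outlive
	// the request or the application.
	func startRunner(ctx context.Context, serverBin, modelPath string, port int) (*exec.Cmd, string, error) {
		workDir, err := os.MkdirTemp("", "llama-runner-")
		if err != nil {
			return nil, "", fmt.Errorf("create temp dir: %w", err)
		}

		// exec.CommandContext ties the subprocess lifetime to ctx.
		// The flags here are placeholders for whatever the server expects.
		cmd := exec.CommandContext(ctx, serverBin,
			"--model", modelPath,
			"--port", fmt.Sprintf("%d", port),
		)
		cmd.Dir = workDir
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr

		if err := cmd.Start(); err != nil {
			os.RemoveAll(workDir)
			return nil, "", fmt.Errorf("start llama.cpp server: %w", err)
		}
		return cmd, workDir, nil
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		// Placeholder paths: the built server binary and a ggml model file.
		cmd, workDir, err := startRunner(ctx, "./llama.cpp/server", "/path/to/model.bin", 11434)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer os.RemoveAll(workDir)

		// Wait returns once the server exits or ctx is cancelled.
		_ = cmd.Wait()
	}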
Name            Last commit message                                      Last commit date
llama.cpp       subprocess llama.cpp server (#401)                       2023-08-30 16:35:03 -04:00
ggml.go         add 34b model type                                       2023-08-24 10:35:44 -07:00
ggml_llama.go   subprocess llama.cpp server (#401)                       2023-08-30 16:35:03 -04:00
llama_test.go   treat stop as stop sequences, not exact tokens (#442)    2023-08-30 11:53:42 -04:00
llm.go          subprocess llama.cpp server (#401)                       2023-08-30 16:35:03 -04:00
utils.go        partial decode ggml bin for more info                    2023-08-10 09:23:10 -07:00