ollama/server
Bruce MacDonald 42998d797d
subprocess llama.cpp server (#401)
* remove c code
* pack llama.cpp
* use request context for llama_cpp
* let llama_cpp decide the number of threads to use
* stop llama runner when app stops
* remove sample count and duration metrics
* use go generate to get libraries
* tmp dir for running llm
2023-08-30 16:35:03 -04:00
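The commit message above describes replacing embedded C code with a llama.cpp server run as a child process. Below is a minimal Go sketch of that approach; the binary name `llama-server`, its flags, and the `startLlamaServer` helper are assumptions for illustration, not ollama's actual code. `exec.CommandContext` ties the runner's lifetime to a context, so cancelling the request context (or the app's root context on shutdown) stops the subprocess; the binary runs out of a temporary directory, and no thread-count flag is passed so llama.cpp can decide the number of threads itself.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"time"
)

// startLlamaServer launches the llama.cpp server binary as a child process.
// The returned cancel function stops the runner, e.g. when the app stops.
func startLlamaServer(ctx context.Context, modelPath string) (context.CancelFunc, error) {
	// Run the server out of a temp dir ("tmp dir for running llm").
	dir, err := os.MkdirTemp("", "llama-runner")
	if err != nil {
		return nil, err
	}

	// In the real project the native libraries are fetched at build time
	// via `go generate`; here we simply assume a (hypothetical) binary
	// named "llama-server" is on PATH.
	bin := "llama-server"

	ctx, cancel := context.WithCancel(ctx)
	// CommandContext kills the child when ctx is cancelled, tying the
	// runner's lifetime to the request/app context.
	cmd := exec.CommandContext(ctx, bin,
		"--model", modelPath,
		// No --threads flag: let llama.cpp pick its own thread count.
	)
	cmd.Dir = dir
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Start(); err != nil {
		cancel()
		os.RemoveAll(dir)
		return nil, err
	}

	// Reap the process and clean up the temp dir once it exits.
	go func() {
		_ = cmd.Wait()
		os.RemoveAll(dir)
	}()
	return cancel, nil
}

func main() {
	cancel, err := startLlamaServer(context.Background(), filepath.Join("models", "model.gguf"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to start runner:", err)
		os.Exit(1)
	}
	defer cancel() // stop the llama runner when the app stops
	time.Sleep(2 * time.Second)
}
```

The "use go generate to get libraries" bullet refers to `go generate`, which runs `//go:generate` directives found in Go source files (invoked with `go generate ./...`). Hypothetical directives for building llama.cpp might look like:

```go
//go:generate cmake -S llama.cpp -B build
//go:generate cmake --build build
```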
auth.go loosen http status code checks 2023-08-28 18:34:53 -04:00
download.go loosen http status code checks 2023-08-28 18:34:53 -04:00
images.go subprocess llama.cpp server (#401) 2023-08-30 16:35:03 -04:00
modelpath.go use url.URL 2023-08-22 10:49:07 -07:00
modelpath_test.go fix FROM instruction erroring when referring to a file 2023-08-22 09:39:42 -07:00
routes.go subprocess llama.cpp server (#401) 2023-08-30 16:35:03 -04:00
upload.go remove unused parameter 2023-08-28 18:35:18 -04:00
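The modelpath.go entry above mentions "use url.URL". A minimal sketch of what a model path helper built on `net/url` could look like, assuming references of the form `[namespace/]model[:tag]`; the `ModelPath` type, the defaults, and the `/v2` manifest layout are assumptions for illustration, not ollama's actual scheme:

```go
package main

import (
	"fmt"
	"net/url"
	"path"
	"strings"
)

type ModelPath struct {
	Registry  string
	Namespace string
	Model     string
	Tag       string
}

// ParseModelPath fills in defaults for any part the reference omits.
func ParseModelPath(name string) ModelPath {
	mp := ModelPath{Registry: "registry.ollama.ai", Namespace: "library", Tag: "latest"}

	if model, tag, ok := strings.Cut(name, ":"); ok {
		name, mp.Tag = model, tag
	}
	if ns, model, ok := strings.Cut(name, "/"); ok {
		mp.Namespace, mp.Model = ns, model
	} else {
		mp.Model = name
	}
	return mp
}

// ManifestURL builds the registry endpoint with a url.URL value rather than
// string concatenation, which is the kind of change "use url.URL" suggests.
func (mp ModelPath) ManifestURL() *url.URL {
	return &url.URL{
		Scheme: "https",
		Host:   mp.Registry,
		Path:   path.Join("/v2", mp.Namespace, mp.Model, "manifests", mp.Tag),
	}
}

func main() {
	mp := ParseModelPath("llama2:7b")
	fmt.Println(mp.ManifestURL()) // https://registry.ollama.ai/v2/library/llama2/manifests/7b
}
```

Constructing the endpoint as a `url.URL` value instead of concatenating strings gets escaping and parsing right by construction, which is the usual motivation for such a change.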