Commit 42998d797d
* remove c code
* pack llama.cpp
* use request context for llama_cpp
* let llama_cpp decide the number of threads to use
* stop llama runner when app stops
* remove sample count and duration metrics
* use go generate to get libraries
* tmp dir for running llm
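The list above bundles several build and runtime changes. Below is a minimal Go sketch of two of them: a go:generate directive that builds the llama.cpp libraries before the Go build, and a prediction call that honors the request context and stages work in a temporary directory. All names here (the generate script path, the Predict function and its parameters) are illustrative assumptions, not code taken from this commit.

// Package llm sketches how the llama.cpp runner might be wired up.
//
// The go:generate line would fetch and build the llama.cpp libraries ahead
// of the Go build; the script name is an assumption.
//
//go:generate sh ./llama/generate.sh

package llm

import (
	"context"
	"os"
)

// Predict illustrates passing the request context down to the runner so a
// cancelled request stops generation, and running out of a temporary
// directory. Function and parameter names are hypothetical.
func Predict(ctx context.Context, prompt string) (string, error) {
	// Temporary working directory for the running model, removed afterwards.
	dir, err := os.MkdirTemp("", "llm")
	if err != nil {
		return "", err
	}
	defer os.RemoveAll(dir)

	// Bail out early if the caller has already gone away.
	select {
	case <-ctx.Done():
		return "", ctx.Err()
	default:
	}

	// ... call into the llama.cpp runner here, forwarding ctx so the runner
	// can also be stopped when the app shuts down ...
	return "", nil
}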
7 lines · 46 B · Text
.DS_Store
.vscode
.env
.venv
.swp
dist
ollama