42998d797d
* remove c code
* pack llama.cpp
* use request context for llama_cpp
* let llama_cpp decide the number of threads to use
* stop llama runner when app stops
* remove sample count and duration metrics
* use go generate to get libraries
* tmp dir for running llm
* app.css
* app.tsx
* declarations.d.ts
* index.html
* index.ts
* install.ts
* ollama.svg
* preload.ts
* renderer.tsx
* telemetry.ts