# Desktop
This app builds upon Ollama to provide a desktop experience for running models.
## Developing
First, build the `ollama` binary:

```shell
cd ..
go build .
```
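If the build fails because the bundled llama.cpp libraries haven't been produced yet, running the repo's `go generate` step first may help; treating `go generate ./...` as the prerequisite is an assumption based on how the libraries are fetched, so verify against the repository's own build docs:

```shell
# Assumed prerequisite: fetch/build the bundled llama.cpp libraries
# before compiling the ollama binary (confirm in the repo's docs).
go generate ./...
go build .
```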
Then run the desktop app with `npm start`:

```shell
cd app
npm install
npm start
```
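To produce a distributable bundle rather than a dev build, the app's Electron Forge setup (`forge.config.ts`) conventionally exposes `package` and `make` scripts; the exact script names here are an assumption, so confirm them in `package.json`:

```shell
# Assumed standard Electron Forge scripts (check package.json):
npm run package   # bundle the app into out/ without an installer
npm run make      # build platform-specific distributables (e.g. dmg, exe, deb)
```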