
# Desktop

This app builds upon Ollama to provide a desktop experience for running models.

## Developing

First, build the `ollama` binary:

```shell
cd ..
go build .
```

Then run the desktop app with `npm start`:

```shell
cd app
npm install
npm start
```
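The two steps above can be combined into a single sketch. This is an illustrative script, not part of the repository: it assumes it is run from the ollama repository root (where `go.mod` lives) with Go and Node.js installed, and only prints a note otherwise.

```shell
#!/bin/sh
# Dev setup sketch: build the ollama binary, then start the desktop app.
# Assumes the ollama repository root as the working directory.
if [ -f go.mod ] && [ -d app ]; then
  go build . &&      # build the ollama binary into the repo root
  cd app &&
  npm install &&     # install the desktop app's dependencies
  npm start          # launch the app in development mode
else
  echo "run these steps from the ollama repository root" >&2
  status=1
fi
```

Outside the repository root the guard is skipped and `status` is set, so the script does nothing destructive wherever it is run.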