update README.md

Jeffrey Morgan 2023-06-27 13:51:20 -04:00
parent 11614b6d84
commit 20cdd9fee6
3 changed files with 11 additions and 17 deletions

@@ -74,10 +74,10 @@ ollama.search("llama-7b")
## Future CLI
-In the future, there will be an easy CLI for running models
+In the future, there will be an `ollama` CLI for running models on servers, in containers or for local development environments.
```
-ollama run huggingface.co/thebloke/llama-7b-ggml
+ollama generate huggingface.co/thebloke/llama-7b-ggml
> Downloading [================> ] 66.67% (2/3) 30.2MB/s
```
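
Until that CLI exists, the prototype touched by this commit is driven through the Python script directly. A minimal sketch of the current equivalent, using only commands that appear in the other files changed here (the development docs run `ollama.py` from the repository root):

```
# No `ollama` CLI yet: put a model file in models/ and start the
# development server with the Python script at the repository root.
python3 ollama.py serve
```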

@@ -1,16 +1,18 @@
# Desktop
-The Ollama desktop experience
+The Ollama desktop app
## Running
In the background run the `ollama.py` [development](../docs/development.md) server:
```
python ../ollama.py serve --port 5001
```
Then run the desktop app:
```
npm install
npm start
```
## Packaging
```
npm run package
```
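
Putting the desktop steps together, one possible end-to-end sequence for local development follows; backgrounding the server with a trailing `&` is an assumption of this sketch, not something the README specifies:

```
# From the desktop/ directory: start the development server in the background,
# then install dependencies and launch the desktop app.
python ../ollama.py serve --port 5001 &
npm install
npm start
```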

@@ -14,14 +14,6 @@ Put your model in `models/` and run:
python3 ollama.py serve
```
-To run the app:
-```
-cd desktop
-npm install
-npm start
-```
## Building
If using Apple silicon, you need a Python version that supports arm64: