reorganize README.md files

This commit is contained in:
Jeffrey Morgan 2023-06-28 09:57:36 -04:00
parent 9934ad77c0
commit e1388938d4
2 changed files with 64 additions and 28 deletions

@@ -1,21 +1,44 @@
# Ollama
The easiest way to run AI models.
Run AI models locally.
## Download
_Note: this project is a work in progress. The features below are still in development_
- [macOS](https://ollama.ai/download/darwin_arm64) (Apple Silicon)
- macOS (Intel coming soon)
- Windows (Coming soon)
- Linux (Coming soon)
**Features**
## Python SDK
- Run models locally on macOS (Windows, Linux and other platforms coming soon)
- Ollama uses the fastest loader available for your platform and model (e.g. llama.cpp, Core ML and other loaders coming soon)
- Import models from local files
- Find and download models on Hugging Face and other sources (coming soon)
- Support for running and switching between multiple models at a time (coming soon)
- Native desktop experience (coming soon)
- Built-in memory (coming soon)
## Install
```
pip install ollama
```
### Python SDK quickstart
## Quickstart
```
% ollama run huggingface.co/TheBloke/orca_mini_3B-GGML
Pulling huggingface.co/TheBloke/orca_mini_3B-GGML...
Downloading [================> ] 66.67% (2/3) 30.2MB/s
...
...
...
> Hello
Hello, how may I help you?
```
## Python SDK
### Example
```python
import ollama
@@ -30,14 +53,6 @@ Generate a completion
ollama.generate("./llama-7b-ggml.bin", "hi")
```
### `ollama.load(model)`
Load a model for generation
```python
ollama.load("model")
```
### `ollama.models()`
List available local models
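A minimal usage sketch for this call, assuming `ollama.models()` returns an iterable of local model names:

```python
import ollama

# Print each locally available model (the return type is an assumption)
for model in ollama.models():
    print(model)
```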
@@ -58,6 +73,22 @@ Add a model by importing from a file
ollama.add("./path/to/model")
```
### `ollama.load(model)`
Manually load a model for generation
```python
ollama.load("model")
```
### `ollama.unload(model)`
Unload a model
```python
ollama.unload("model")
```
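Taken together, a hedged sketch of the load, generate, and unload flow these calls describe (the model path is hypothetical):

```python
import ollama

model = "./llama-7b-ggml.bin"  # hypothetical local model file

ollama.load(model)            # keep the model resident between prompts
ollama.generate(model, "hi")  # generate against the already-loaded model
ollama.unload(model)          # release the model when finished
```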
## Coming Soon
### `ollama.pull(model)`
@@ -76,15 +107,6 @@ Search for compatible models that Ollama can run
ollama.search("llama-7b")
```
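A speculative sketch of how these upcoming calls might fit together, assuming `ollama.search` returns a list of model identifiers:

```python
import ollama

# Both calls are listed as coming soon; their return values are assumptions
matches = ollama.search("llama-7b")
if matches:
    ollama.pull(matches[0])
```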
## Future CLI
In the future, there will be an `ollama` CLI for running models on servers, in containers or for local development environments.
```
ollama generate huggingface.co/thebloke/llama-7b-ggml "hi"
> Downloading [================> ] 66.67% (2/3) 30.2MB/s
```
## Documentation
- [Development](docs/development.md)

@@ -1,18 +1,32 @@
# Desktop
The Ollama desktop experience
The Ollama desktop experience. This is an experimental, easy-to-use app for running models with [`ollama`](https://github.com/jmorganca/ollama).
## Download
- [macOS](https://ollama.ai/download/darwin_arm64) (Apple Silicon)
- macOS (Intel coming soon)
- Windows (Coming soon)
- Linux (Coming soon)
## Running
In the background run the `ollama.py` [development](../docs/development.md) server:
In the background, run the `ollama.py` server:
```
python ../ollama.py serve --port 7734
```
Then run the desktop app:
Then run the desktop app with `npm start`:
```
npm install
npm start
```
## Coming soon
- Browse the latest available models on Hugging Face and other sources
- Keep track of previous conversations with models
- Switch between models
- Connect to remote Ollama servers to run models