small README.md tweaks

Jeffrey Morgan 2023-06-27 12:44:12 -04:00
parent ad4ffdf754
commit 5d99629c12

@@ -1,6 +1,6 @@
 # Ollama
-- Run models, fast
+- Run models easily
 - Download, manage and import models
 ## Install
@@ -23,7 +23,7 @@ ollama.generate(model_name, "hi")
 ### `ollama.load`
-Load a model from a path or a docker image
+Load a model for generation
 ```python
 ollama.load("model name")
@@ -39,7 +39,7 @@ ollama.generate(model, "hi")
 ### `ollama.models`
-List models
+List available local models
 ```
 models = ollama.models()
@@ -53,7 +53,7 @@ Serve the ollama http server
 ### `ollama.pull`
-Examples:
+Download a model
 ```python
 ollama.pull("huggingface.co/thebloke/llama-7b-ggml")
@@ -61,7 +61,7 @@ ollama.pull("huggingface.co/thebloke/llama-7b-ggml")
 ### `ollama.import`
-Import an existing model into the model store
+Import a model from a file
 ```python
 ollama.import("./path/to/model")
@ -77,6 +77,9 @@ ollama.search("llama-7b")
## Future CLI ## Future CLI
In the future, there will be an easy CLI for testing out models
``` ```
ollama run huggingface.co/thebloke/llama-7b-ggml ollama run huggingface.co/thebloke/llama-7b-ggml
> Downloading [================> ] 66.67% (2/3) 30.2MB/s
``` ```
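For context, a minimal sketch of how the Python API documented in this README revision fits together, using only the calls that appear in the diff (`ollama.pull`, `ollama.models`, `ollama.load`, `ollama.generate`). The model name passed to `load`/`generate` and the return values are assumptions based on the snippets above, not the package's confirmed behavior at this commit.

```python
# Sketch only: combines the calls shown in this README revision.
# Exact signatures and return values of the early `ollama` Python
# package are assumed from the diff snippets, not verified.
import ollama

# Download a model (path taken from the README example).
ollama.pull("huggingface.co/thebloke/llama-7b-ggml")

# List available local models.
models = ollama.models()
print(models)

# Load a model for generation, then generate a completion.
model_name = "llama-7b-ggml"  # assumed name; the README uses a placeholder
ollama.load(model_name)
print(ollama.generate(model_name, "hi"))
```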