correct spelling for Core ML
parent e1388938d4
commit 27a7ce6008
1 changed file with 2 additions and 2 deletions
@@ -7,7 +7,7 @@ _Note: this project is a work in progress. The features below are still in devel
 **Features**

 - Run models locally on macOS (Windows, Linux and other platforms coming soon)
-- Ollama uses the fastest loader available for your platform and model (e.g. llama.cpp, core ml and other loaders coming soon)
+- Ollama uses the fastest loader available for your platform and model (e.g. llama.cpp, Core ML and other loaders coming soon)
 - Import models from local files
 - Find and download models on Hugging Face and other sources (coming soon)
 - Support for running and switching between multiple models at a time (coming soon)
@@ -42,7 +42,7 @@ Hello, how may I help you?

 ```python
 import ollama
-ollama.generate("./llama-7b-ggml.bin", "hi")
+ollama.generate("orca-mini-3b", "hi")
 ```

 ### `ollama.generate(model, message)`
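The README edit in the first hunk is a mechanical capitalization fix. As a minimal sketch (the input line is copied from the diff; the replacement call is just illustrative, not how the commit was made), the same correction can be expressed as a string substitution:

```python
# Sketch of the correction this commit applies to the README:
# the miscapitalized "core ml" becomes the proper "Core ML" spelling.
line = ("- Ollama uses the fastest loader available for your platform "
        "and model (e.g. llama.cpp, core ml and other loaders coming soon)")
fixed = line.replace("core ml", "Core ML")
print(fixed)
```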