From 27a7ce600881babe17c053622a5fbbf334972cf4 Mon Sep 17 00:00:00 2001
From: Jeffrey Morgan
Date: Wed, 28 Jun 2023 10:19:07 -0400
Subject: [PATCH] correct spelling for Core ML

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 2389ab47..97a7432e 100644
--- a/README.md
+++ b/README.md
@@ -7,7 +7,7 @@ _Note: this project is a work in progress. The features below are still in devel
 **Features**
 
 - Run models locally on macOS (Windows, Linux and other platforms coming soon)
-- Ollama uses the fastest loader available for your platform and model (e.g. llama.cpp, core ml and other loaders coming soon)
+- Ollama uses the fastest loader available for your platform and model (e.g. llama.cpp, Core ML and other loaders coming soon)
 - Import models from local files
 - Find and download models on Hugging Face and other sources (coming soon)
 - Support for running and switching between multiple models at a time (coming soon)
@@ -42,7 +42,7 @@ Hello, how may I help you?
 
 ```python
 import ollama
-ollama.generate("./llama-7b-ggml.bin", "hi")
+ollama.generate("orca-mini-3b", "hi")
 ```
 
 ### `ollama.generate(model, message)`