From 23a37dc46615f5fb1416d3f9eb49a11c09399493 Mon Sep 17 00:00:00 2001
From: Jeffrey Morgan
Date: Thu, 20 Jul 2023 12:21:29 -0700
Subject: [PATCH] clean up `README.md`

---
 README.md | 43 ++++++++++++++++++++++---------------------
 1 file changed, 22 insertions(+), 21 deletions(-)

diff --git a/README.md b/README.md
index bf1aa25f..ec24fc2b 100644
--- a/README.md
+++ b/README.md
@@ -11,25 +11,7 @@
 > Note: Ollama is in early preview. Please report any issues you find.
 
-Create, run, and share portable large language models (LLMs). Ollama bundles a model’s weights, configuration, prompts, and more into self-contained packages that can run on any machine.
-
-### Portable Large Language Models (LLMs)
-
-Package models as a series of layers in a portable, easy to manage format.
-
-#### The idea behind Ollama
-
-- Universal model format that can run anywhere: desktop, cloud servers & other devices.
-- Encapsulate everything a model needs to operate – weights, configuration, and data – into a single package.
-- Build custom models from base models like Meta's [Llama 2](https://ai.meta.com/llama/)
-- Share large models without having to transmit large amounts of data.
-
-
-  logo
-
-
-This format is inspired by the [image spec](https://github.com/opencontainers/image-spec) originally introduced by Docker for Linux containers. Ollama extends this format to package large language models.
+Run, create, and share large language models (LLMs).
 
 ## Download
 
@@ -47,7 +29,7 @@ ollama run llama2
 
 ## Model library
 
-Ollama includes a library of open-source, pre-trained models. More models are coming soon. You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.
+`ollama` includes a library of open-source models:
 
 | Model                    | Parameters | Size  | Download                    |
 | ------------------------ | ---------- | ----- | --------------------------- |
@@ -58,6 +40,8 @@ Ollama includes a library of open-source, pre-trained models. More models are co
 | Nous-Hermes              | 13B        | 7.3GB | `ollama pull nous-hermes`   |
 | Wizard Vicuna Uncensored | 13B        | 7.3GB | `ollama pull wizard-vicuna` |
 
+> Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.
+
 ## Examples
 
 ### Run a model
@@ -68,7 +52,7 @@ ollama run llama2
 Hello! How can I help you today?
 ```
 
-### Create a custom character model
+### Create a custom model
 
 Pull a base model:
 
@@ -107,6 +91,23 @@ For more examples, see the [examples](./examples) directory.
 ollama pull orca
 ```
 
+### Listing local models
+
+```
+ollama list
+```
+
+## Model packages
+
+### Overview
+
+Ollama bundles model weights, configuration, and data into a single package, defined by a [Modelfile](./docs/modelfile.md).
+
+
+  logo
+
+
 ## Building
 
 ```
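
As background for review: the "Model packages" section this patch adds points readers to a [Modelfile](./docs/modelfile.md). A minimal sketch of what such a file might contain, assuming the `FROM` and `SYSTEM` instructions described in Ollama's Modelfile docs (the system-prompt text below is illustrative, not taken from this patch):

```
# Hypothetical Modelfile: builds a custom model on top of the llama2 base
FROM llama2

# Optional system prompt baked into the package (illustrative text)
SYSTEM """
You are a concise assistant. Answer in one or two sentences.
"""
```

The package would then be built and run with something like `ollama create mymodel -f Modelfile` followed by `ollama run mymodel`; treat the exact command names and flags as assumptions against the CLI at this commit.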