From 58daeb962af4f7f5fef4f4a0da2522990c3d31bc Mon Sep 17 00:00:00 2001
From: Jeffrey Morgan
Date: Tue, 1 Aug 2023 11:25:01 -0400
Subject: [PATCH] add `llama2-uncensored` to model list

---
 README.md | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/README.md b/README.md
index c4e93a09..1f470ce8 100644
--- a/README.md
+++ b/README.md
@@ -31,14 +31,15 @@ ollama run llama2
 
 `ollama` includes a library of open-source models:
 
-| Model                    | Parameters | Size  | Download                    |
-| ------------------------ | ---------- | ----- | --------------------------- |
-| Llama2                   | 7B         | 3.8GB | `ollama pull llama2`        |
-| Llama2 13B               | 13B        | 7.3GB | `ollama pull llama2:13b`    |
-| Orca Mini                | 3B         | 1.9GB | `ollama pull orca`          |
-| Vicuna                   | 7B         | 3.8GB | `ollama pull vicuna`        |
-| Nous-Hermes              | 13B        | 7.3GB | `ollama pull nous-hermes`   |
-| Wizard Vicuna Uncensored | 13B       | 7.3GB | `ollama pull wizard-vicuna` |
+| Model                    | Parameters | Size  | Download                        |
+| ------------------------ | ---------- | ----- | ------------------------------- |
+| Llama2                   | 7B         | 3.8GB | `ollama pull llama2`            |
+| Llama2 Uncensored        | 7B         | 3.8GB | `ollama pull llama2-uncensored` |
+| Llama2 13B               | 13B        | 7.3GB | `ollama pull llama2:13b`        |
+| Orca Mini                | 3B         | 1.9GB | `ollama pull orca`              |
+| Vicuna                   | 7B         | 3.8GB | `ollama pull vicuna`            |
+| Nous-Hermes              | 13B        | 7.3GB | `ollama pull nous-hermes`       |
+| Wizard Vicuna Uncensored | 13B        | 7.3GB | `ollama pull wizard-vicuna`     |
 
 > Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.
 
@@ -152,4 +153,4 @@ curl -X POST http://localhost:11434/api/create -d '{"name": "my-model", "path": 
 
 - [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot) - interact with Ollama as a chatbot on Discord.
 
-- [Raycast Ollama](https://github.com/MassimilianoPasquini97/raycast_ollama) - Raycast extension to use Ollama for local llama inference on Raycast. 
+- [Raycast Ollama](https://github.com/MassimilianoPasquini97/raycast_ollama) - Raycast extension to use Ollama for local llama inference on Raycast.