add llama2-uncensored to model list
parent
528bafa585
commit
58daeb962a
1 changed file with 10 additions and 9 deletions
README.md (19 lines changed)
@@ -31,14 +31,15 @@ ollama run llama2

 `ollama` includes a library of open-source models:

-| Model                    | Parameters | Size  | Download                    |
-| ------------------------ | ---------- | ----- | --------------------------- |
-| Llama2                   | 7B         | 3.8GB | `ollama pull llama2`        |
-| Llama2 13B               | 13B        | 7.3GB | `ollama pull llama2:13b`    |
-| Orca Mini                | 3B         | 1.9GB | `ollama pull orca`          |
-| Vicuna                   | 7B         | 3.8GB | `ollama pull vicuna`        |
-| Nous-Hermes              | 13B        | 7.3GB | `ollama pull nous-hermes`   |
-| Wizard Vicuna Uncensored | 13B        | 7.3GB | `ollama pull wizard-vicuna` |
+| Model                    | Parameters | Size  | Download                        |
+| ------------------------ | ---------- | ----- | ------------------------------- |
+| Llama2                   | 7B         | 3.8GB | `ollama pull llama2`            |
+| Llama2 13B               | 13B        | 7.3GB | `ollama pull llama2:13b`        |
+| Llama2 Uncensored        | 7B         | 3.8GB | `ollama pull llama2-uncensored` |
+| Orca Mini                | 3B         | 1.9GB | `ollama pull orca`              |
+| Vicuna                   | 7B         | 3.8GB | `ollama pull vicuna`            |
+| Nous-Hermes              | 13B        | 7.3GB | `ollama pull nous-hermes`       |
+| Wizard Vicuna Uncensored | 13B        | 7.3GB | `ollama pull wizard-vicuna`     |

 > Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.
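With this change in place, the new entry works like any other model in the table: pull it by the name in the Download column, then start it with `ollama run`. A minimal usage sketch, assuming `ollama run` takes the same model name as `ollama pull` (the pattern the hunk context shows for `llama2`):

```
# Download the newly listed 7B model (3.8GB per the table above)
ollama pull llama2-uncensored

# Start an interactive session with it, mirroring `ollama run llama2`
ollama run llama2-uncensored
```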
@@ -152,4 +153,4 @@ curl -X POST http://localhost:11434/api/create -d '{"name": "my-model", "path":

 - [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot) - interact with Ollama as a chatbot on Discord.
 - [Raycast Ollama](https://github.com/MassimilianoPasquini97/raycast_ollama) - Raycast extension to use Ollama for local llama inference on Raycast.
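The second hunk's context line quotes the local REST API's `/api/create` endpoint. A minimal sketch of that call, assuming the truncated `path` field points at a Modelfile on disk; the filename below is a hypothetical placeholder, not something taken from this diff:

```
# Create a model named "my-model" via the local API.
# The endpoint and the "name"/"path" fields come from the hunk header above;
# /path/to/Modelfile is an assumed placeholder, not part of this commit.
curl -X POST http://localhost:11434/api/create \
  -d '{"name": "my-model", "path": "/path/to/Modelfile"}'
```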