add llama3 to readme
parent 8645076a71
commit 554ffdcce3
1 changed file with 14 additions and 13 deletions

README.md | 27
@@ -35,10 +35,10 @@ The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `olla
 
 ## Quickstart
 
-To run and chat with [Llama 2](https://ollama.com/library/llama2):
+To run and chat with [Llama 3](https://ollama.com/library/llama3):
 
 ```
-ollama run llama2
+ollama run llama3
 ```
 
 ## Model library
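Note: `ollama run` pulls the model automatically on first use, so the quickstart needs no separate download step. A minimal sketch of confirming the download afterwards (assuming `ollama` is on the PATH):

```
# run once to pull the model and start chatting; then list locally available models
ollama run llama3
ollama list
```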
@@ -49,7 +49,8 @@ Here are some example models that can be downloaded:
 
 | Model              | Parameters | Size  | Download                       |
 | ------------------ | ---------- | ----- | ------------------------------ |
-| Llama 2            | 7B         | 3.8GB | `ollama run llama2`            |
+| Llama 3            | 8B         | 4.7GB | `ollama run llama3`            |
+| Llama 3            | 70B        | 40GB  | `ollama run llama3:70b`        |
 | Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
 | Dolphin Phi        | 2.7B       | 1.6GB | `ollama run dolphin-phi`       |
 | Phi-2              | 2.7B       | 1.7GB | `ollama run phi`               |
@@ -97,16 +98,16 @@ See the [guide](docs/import.md) on importing models for more information.
 
 ### Customize a prompt
 
-Models from the Ollama library can be customized with a prompt. For example, to customize the `llama2` model:
+Models from the Ollama library can be customized with a prompt. For example, to customize the `llama3` model:
 
 ```
-ollama pull llama2
+ollama pull llama3
 ```
 
 Create a `Modelfile`:
 
 ```
-FROM llama2
+FROM llama3
 
 # set the temperature to 1 [higher is more creative, lower is more coherent]
 PARAMETER temperature 1
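For completeness, a `Modelfile` like this is built into a runnable model with `ollama create`; the `ollama create mymodel -f ./Modelfile` line appears as context in the next hunk, and `mymodel` is just a placeholder name. A minimal sketch:

```
# build a named model from the Modelfile, then chat with it
ollama create mymodel -f ./Modelfile
ollama run mymodel
```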
@@ -141,7 +142,7 @@ ollama create mymodel -f ./Modelfile
 ### Pull a model
 
 ```
-ollama pull llama2
+ollama pull llama3
 ```
 
 > This command can also be used to update a local model. Only the diff will be pulled.
@@ -149,13 +150,13 @@ ollama pull llama2
 ### Remove a model
 
 ```
-ollama rm llama2
+ollama rm llama3
 ```
 
 ### Copy a model
 
 ```
-ollama cp llama2 my-llama2
+ollama cp llama3 my-llama2
 ```
 
 ### Multiline input
@@ -179,7 +180,7 @@ The image features a yellow smiley face, which is likely the central focus of th
 ### Pass in prompt as arguments
 
 ```
-$ ollama run llama2 "Summarize this file: $(cat README.md)"
+$ ollama run llama3 "Summarize this file: $(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
 ```
 
@@ -226,7 +227,7 @@ Next, start the server:
 Finally, in a separate shell, run a model:
 
 ```
-./ollama run llama2
+./ollama run llama3
 ```
 
 ## REST API
@@ -237,7 +238,7 @@ Ollama has a REST API for running and managing models.
 ```
 curl http://localhost:11434/api/generate -d '{
-  "model": "llama2",
+  "model": "llama3",
   "prompt":"Why is the sky blue?"
 }'
 ```
 
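A side note on this endpoint: `/api/generate` streams the response as a series of JSON objects by default. A minimal sketch of requesting a single complete response instead via the documented `stream` parameter (assuming a local server on the default port 11434):

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```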
@@ -246,7 +247,7 @@ curl http://localhost:11434/api/generate -d '{
 
 ```
 curl http://localhost:11434/api/chat -d '{
-  "model": "mistral",
+  "model": "llama3",
   "messages": [
     { "role": "user", "content": "why is the sky blue?" }
   ]
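The chat endpoint is stateless on the server side: each request resends the full `messages` history. A minimal sketch of a follow-up turn (the assistant content is a placeholder standing in for the model's earlier reply, not real output):

```
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" },
    { "role": "assistant", "content": "(previous answer)" },
    { "role": "user", "content": "and why are sunsets red?" }
  ]
}'
```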