add some missing code directives in docs (#664)
parent 0a4f21c0a7
commit 4fc10acce9

4 changed files with 24 additions and 25 deletions
@@ -10,25 +10,25 @@ Install required tools:
- go version 1.20 or higher
- gcc version 11.4.0 or higher

-```
+```bash
brew install go cmake gcc
```

Get the required libraries:

-```
+```bash
go generate ./...
```

Then build ollama:

-```
+```bash
go build .
```

Now you can run `ollama`:

-```
+```bash
./ollama
```

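The build steps above assume go 1.20+ and gcc 11.4.0+ are already installed; a minimal sketch for confirming the toolchain before running `go generate` (the version checks are read off manually from the output):

```bash
# Sketch: confirm the tools the development docs call for are present.
go version                  # expect go1.20 or newer
gcc --version | head -n1    # expect 11.4.0 or newer
cmake --version | head -n1
```
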
@@ -2,13 +2,13 @@

## How can I expose the Ollama server?

-```
+```bash
OLLAMA_HOST=0.0.0.0:11435 ollama serve
```

By default, Ollama allows cross origin requests from `127.0.0.1` and `0.0.0.0`. To support more origins, you can use the `OLLAMA_ORIGINS` environment variable:

-```
+```bash
OLLAMA_ORIGINS=http://192.168.1.1:*,https://example.com ollama serve
```

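Once the server is exposed on `0.0.0.0:11435`, other machines can reach the HTTP API; a hedged sketch of a request from another host (the IP address is a placeholder, and it assumes the `llama2` model has been pulled):

```bash
# Sketch: call the generate endpoint of an Ollama server exposed on the network.
curl http://192.168.1.100:11435/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```
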
@@ -16,4 +16,3 @@ OLLAMA_ORIGINS=http://192.168.1.1:*,https://example.com ollama serve

* macOS: Raw model data is stored under `~/.ollama/models`.
* Linux: Raw model data is stored under `/usr/share/ollama/.ollama/models`
-
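Given those storage locations, a quick way to see how much disk space pulled models occupy (a sketch; the Linux path typically needs elevated permissions):

```bash
# Sketch: report model storage usage on either platform.
du -sh ~/.ollama/models 2>/dev/null \
  || sudo du -sh /usr/share/ollama/.ollama/models
```
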
@@ -2,7 +2,7 @@

> Note: A one line installer for Ollama is available by running:
>
-> ```
+> ```bash
> curl https://ollama.ai/install.sh | sh
> ```

@@ -10,7 +10,7 @@

Ollama is distributed as a self-contained binary. Download it to a directory in your PATH:

-```
+```bash
sudo curl -L https://ollama.ai/download/ollama-linux-amd64 -o /usr/bin/ollama
sudo chmod +x /usr/bin/ollama
```

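After downloading, a hedged sanity check that the binary landed on the PATH and runs (assuming this build supports a `--version` flag):

```bash
# Sketch: confirm the ollama binary is installed and executable.
which ollama       # expect /usr/bin/ollama
ollama --version   # print the installed version
```
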
@@ -19,13 +19,13 @@ sudo chmod +x /usr/bin/ollama

Start Ollama by running `ollama serve`:

-```
+```bash
ollama serve
```

Once Ollama is running, run a model in another terminal session:

-```
+```bash
ollama run llama2
```

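To confirm the server is up before running a model, the root endpoint can be probed (a sketch assuming the default bind address of `127.0.0.1:11434`):

```bash
# Sketch: the root endpoint returns a short status string while the server runs.
curl http://127.0.0.1:11434
# expected response: Ollama is running
```
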
@@ -35,7 +35,7 @@ ollama run llama2

Verify that the drivers are installed by running the following command, which should print details about your GPU:

-```
+```bash
nvidia-smi
```

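A scripted variant of the same check, handy in provisioning scripts (a sketch; it exits non-zero when the drivers are missing):

```bash
# Sketch: fail fast if the NVIDIA drivers are not installed.
if ! command -v nvidia-smi >/dev/null 2>&1; then
  echo "nvidia-smi not found: install the NVIDIA drivers first" >&2
  exit 1
fi
nvidia-smi --query-gpu=name,memory.total --format=csv
```
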
@@ -43,7 +43,7 @@ nvidia-smi

Create a user for Ollama:

-```
+```bash
sudo useradd -r -s /bin/false -m -d /usr/share/ollama ollama
```

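The unit file referenced by the next hunk (`WantedBy=default.target`) is not part of this diff; a minimal sketch of what such a unit could look like, built from the `ollama` user and `/usr/bin/ollama` path above (the unit in the actual docs may differ):

```bash
# Sketch: install a minimal systemd unit for the ollama user created above.
sudo tee /etc/systemd/system/ollama.service >/dev/null <<'EOF'
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always

[Install]
WantedBy=default.target
EOF
```
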
@@ -68,7 +68,7 @@ WantedBy=default.target

Then start the service:

-```
+```bash
sudo systemctl daemon-reload
sudo systemctl enable ollama
```

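`systemctl enable` only registers the unit for boot; to start the service immediately and verify it (standard systemd usage):

```bash
sudo systemctl start ollama   # start now rather than waiting for a reboot
systemctl status ollama       # confirm the service is active
```
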
@@ -77,7 +77,7 @@ sudo systemctl enable ollama

To view logs of Ollama running as a startup service, run:

-```
+```bash
journalctl -u ollama
```

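While debugging, the same logs can be followed live (standard `journalctl` flags):

```bash
journalctl -u ollama -f   # stream new log lines as they arrive
```
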
@@ -44,7 +44,7 @@ INSTRUCTION arguments

An example of a model file creating a mario blueprint:

-```
+```modelfile
FROM llama2
# sets the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

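To turn a Modelfile like the one above into a runnable model (the `mario` model name is illustrative; `ollama create` and `ollama run` are the standard commands):

```bash
# Sketch: build a model from the Modelfile in the current directory, then chat with it.
ollama create mario -f ./Modelfile
ollama run mario
```
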
@@ -70,13 +70,13 @@ More examples are available in the [examples directory](../examples).

The FROM instruction defines the base model to use when creating a model.

-```
+```modelfile
FROM <model name>:<tag>
```

#### Build from llama2

-```
+```modelfile
FROM llama2
```

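When no tag is given, `latest` is implied; a hedged sketch pinning an explicit tag instead (the `13b` tag is an assumption about what is published for llama2):

```bash
# Sketch: pin a specific base model tag rather than relying on the default `latest`.
cat > Modelfile <<'EOF'
FROM llama2:13b
EOF
ollama create my-llama2-13b -f ./Modelfile
```
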
@@ -85,7 +85,7 @@ A list of available base models:

#### Build from a bin file

-```
+```modelfile
FROM ./ollama-model.bin
```

@@ -95,7 +95,7 @@ This bin file location should be specified as an absolute path or relative to th

The EMBED instruction is used to add embeddings of files to a model. This is useful for adding custom data that the model can reference when generating an answer. Note that currently only text files are supported, formatted with each line as one embedding.

-```
+```modelfile
FROM <model name>:<tag>
EMBED <file path>.txt
EMBED <different file path>.txt

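Since each line of an embedded text file becomes one embedding, the data file wants one self-contained fact per line; a sketch of preparing such a file (file name and contents are illustrative):

```bash
# Sketch: one fact per line, because each line is embedded separately.
cat > facts.txt <<'EOF'
The warehouse opens at 6am and closes at 8pm.
Returns are accepted within 30 days with a receipt.
EOF
```
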
@@ -106,7 +106,7 @@ EMBED <path to directory>/*.txt

The `PARAMETER` instruction defines a parameter that can be set when the model is run.

-```
+```modelfile
PARAMETER <parameter> <parametervalue>
```

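A hedged sketch of a Modelfile setting two commonly documented parameters, `temperature` and `stop` (the values are illustrative):

```bash
# Sketch: a Modelfile with a lower temperature and a custom stop sequence.
cat > Modelfile <<'EOF'
FROM llama2
PARAMETER temperature 0.8
PARAMETER stop "### User:"
EOF
```
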
@@ -142,7 +142,7 @@ PARAMETER <parameter> <parametervalue>
| `{{ .Prompt }}` | The incoming prompt, this is not specified in the model file and will be set based on input. |
| `{{ .First }}` | A boolean value used to render specific template information for the first generation of a session. |

-```
+```modelfile
TEMPLATE """
{{- if .First }}
### System:

@@ -162,7 +162,7 @@ SYSTEM """<system message>"""

The `SYSTEM` instruction specifies the system prompt to be used in the template, if applicable.

-```
+```modelfile
SYSTEM """<system message>"""
```

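A short sketch with a concrete system message in place of the `<system message>` placeholder (the prompt text is illustrative):

```bash
# Sketch: a Modelfile whose SYSTEM text fills the template's system slot.
cat > Modelfile <<'EOF'
FROM llama2
SYSTEM """You are a concise assistant. Answer in at most two sentences."""
EOF
```
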
@@ -170,7 +170,7 @@ SYSTEM """<system message>"""

The `ADAPTER` instruction specifies the LoRA adapter to apply to the base model. The value of this instruction should be an absolute path or a path relative to the Modelfile and the file must be in a GGML file format. The adapter should be tuned from the base model otherwise the behaviour is undefined.

-```
+```modelfile
ADAPTER ./ollama-lora.bin
```

@@ -178,7 +178,7 @@ ADAPTER ./ollama-lora.bin

The `LICENSE` instruction allows you to specify the legal license under which the model used with this Modelfile is shared or distributed.

-```
+```modelfile
LICENSE """
<license text>
"""