# Installing Ollama on Linux
> Note: A one-line installer for Ollama is available by running:
>
> ```
> curl https://ollama.ai/install.sh | sh
> ```
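
If you'd rather not pipe a script straight into your shell, one option is to download the installer first, review it, and then run it (the local file name is arbitrary):

```
curl -fsSL https://ollama.ai/install.sh -o install.sh
less install.sh
sh install.sh
```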
## Download the `ollama` binary
Ollama is distributed as a self-contained binary. Download it to a directory in your `PATH`:
```
sudo curl -L https://ollama.ai/download/ollama-linux-amd64 -o /usr/bin/ollama
sudo chmod +x /usr/bin/ollama
```
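
To confirm the binary is installed and resolvable from your `PATH`, a quick check (the version flag is an assumption here and may vary between releases):

```
command -v ollama
ollama --version
```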
## Start Ollama
Start Ollama by running `ollama serve`:
```
ollama serve
```
Once Ollama is running, run a model in another terminal session:
```
ollama run llama2
```
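
`ollama serve` also exposes a REST API, by default on `127.0.0.1:11434`. As a quick sanity check, a generation request can be sent with `curl`; the model and prompt here are just examples:

```
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```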
## Install CUDA drivers (optional – for Nvidia GPUs)
[Download and install](https://developer.nvidia.com/cuda-downloads) CUDA.
Verify that the drivers are installed by running the following command, which should print details about your GPU:
```
nvidia-smi
```
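
For a more compact check, `nvidia-smi` can also report just the GPU name and driver version using its standard query flags:

```
nvidia-smi --query-gpu=name,driver_version --format=csv
```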
## Add Ollama as a startup service (optional)
Create a user for Ollama:
```
sudo useradd -r -s /bin/false -m -d /usr/share/ollama ollama
```
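
A quick optional check that the account and its home directory were created:

```
id ollama
ls -ld /usr/share/ollama
```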
Create a service file in `/etc/systemd/system/ollama.service`:
```ini
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="HOME=/usr/share/ollama"

[Install]
WantedBy=default.target
```
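
By default the server listens only on localhost. If it should be reachable from other machines, an extra `Environment` line under `[Service]` can change the bind address; this sketch assumes your release reads the `OLLAMA_HOST` variable:

```ini
# Added under [Service]; assumes this release honors OLLAMA_HOST.
Environment="OLLAMA_HOST=0.0.0.0"
```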
Then enable and start the service:
```
sudo systemctl daemon-reload
sudo systemctl enable ollama
sudo systemctl start ollama
```
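
To confirm the service came up, check its status and, if needed, follow its logs:

```
systemctl status ollama
journalctl -u ollama -f
```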
## Run a model
```
ollama run llama2
```