# Installing Ollama on Linux
> Note: A one-line installer for Ollama is available by running:
>
> ```bash
> curl https://ollama.ai/install.sh | sh
> ```
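If the script completes without errors, a quick sanity check is to ask the binary for its version (assuming `ollama` landed on your `PATH`):

```bash
# Prints the installed version; any output here confirms the binary
# is on PATH and executable.
ollama --version
```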
## Download the `ollama` binary
Ollama is distributed as a self-contained binary. Download it to a directory in your `PATH`:
```bash
sudo curl -L https://ollama.ai/download/ollama-linux-amd64 -o /usr/bin/ollama
sudo chmod +x /usr/bin/ollama
```
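`/usr/bin` is one common choice; as a sketch of an alternative, `/usr/local/bin` is on the default `PATH` of most distributions and is conventionally reserved for locally installed software, so the same commands work there:

```bash
# Same download, installed under /usr/local/bin instead.
sudo curl -L https://ollama.ai/download/ollama-linux-amd64 -o /usr/local/bin/ollama
sudo chmod +x /usr/local/bin/ollama
```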
2023-09-25 04:38:23 +00:00
## Start Ollama
Start Ollama by running `ollama serve`:
```bash
ollama serve
```
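`ollama serve` stays in the foreground and occupies the terminal. To confirm the server is up, query it from another terminal; by default it listens on `127.0.0.1:11434` (assuming the listen address hasn't been overridden, e.g. via the `OLLAMA_HOST` environment variable):

```bash
# The root endpoint replies with "Ollama is running" when the
# server is reachable on the default address.
curl http://127.0.0.1:11434
```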
Once Ollama is running, run a model in another terminal session:
```bash
ollama run llama2
```
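Models can also be driven over Ollama's local REST API instead of the CLI. A minimal generation request against the default address looks like this (using the `/api/generate` endpoint with its `model` and `prompt` fields; see the API reference for your version):

```bash
# Send a single prompt to llama2 over the HTTP API; the response
# is streamed back as a series of JSON objects.
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```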
## Install CUDA drivers (optional, for Nvidia GPUs)
[Download and install](https://developer.nvidia.com/cuda-downloads) CUDA.
Verify that the drivers are installed by running the following command, which should print details about your GPU:
```bash
nvidia-smi
```
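For a more targeted check, `nvidia-smi` also accepts query flags that print only selected fields, for example:

```bash
# Report just the GPU model, driver version, and total memory as CSV.
nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv
```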
## Add Ollama as a startup service (optional)
Create a user for Ollama:
```bash
sudo useradd -r -s /bin/false -m -d /usr/share/ollama ollama
```
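The flags create a system account (`-r`) with no login shell (`-s /bin/false`) and a home directory at `/usr/share/ollama` (`-m -d`). You can verify the account exists before wiring up the service:

```bash
# Show the uid/gid of the new account and its /etc/passwd entry.
id ollama
grep '^ollama:' /etc/passwd
```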
Create a service file in `/etc/systemd/system/ollama.service`:
```ini
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="HOME=/usr/share/ollama"

[Install]
WantedBy=default.target
```
Then reload systemd, enable the service so it starts on boot, and start it:
```bash
sudo systemctl daemon-reload
sudo systemctl enable ollama
sudo systemctl start ollama
```
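To confirm the service came up cleanly:

```bash
# Shows whether the unit is active, its main PID, and recent log lines.
sudo systemctl status ollama
```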
### Viewing logs
To view logs of Ollama running as a startup service, run:
```bash
journalctl -u ollama
```
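To follow the log output live instead (useful while pulling or running a model), add the `-f` flag:

```bash
# -u filters to the ollama unit; -f follows new entries as they arrive.
journalctl -u ollama -f
```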