Update faq.md

This commit is contained in:
parent 2297ad39da
commit 7ed3e94105

1 changed file with 2 additions and 2 deletions
@@ -97,7 +97,7 @@ Refer to the section [above](#how-do-i-configure-ollama-server) for how to set e

 ## How can I use Ollama with a proxy server?

-Ollama runs an HTTP server and can be exposed using a proxy server such as Nginx. To do so, configure the proxy to forward requests and optionally set required headers if not exposing Ollama on the network (see the previous question). For example, with Nginx:
+Ollama runs an HTTP server and can be exposed using a proxy server such as Nginx. To do so, configure the proxy to forward requests and optionally set required headers (if not exposing Ollama on the network). For example, with Nginx:

 ```
 server {
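The hunk cuts off just as the Nginx example opens. For context, a minimal sketch of what such a config typically looks like follows; it is not the file's actual contents, and it assumes Ollama is listening on its default `localhost:11434`, with `example.com` standing in for your own domain:

```
server {
    listen 80;
    server_name example.com;  # assumption: replace with your domain or IP

    location / {
        # forward all requests to the local Ollama server
        proxy_pass http://localhost:11434;
        # set the Host header so the proxied request looks local to Ollama
        proxy_set_header Host localhost:11434;
    }
}
```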
@@ -225,4 +225,4 @@ will be utilized by setting the environment variable `CUDA_VISIBLE_DEVICES` for
 NVIDIA cards, or `HIP_VISIBLE_DEVICES` for Radeon GPUs to a comma delimited list
 of GPU IDs. You can see the list of devices with GPU tools such as `nvidia-smi` or
 `rocminfo`. You can set to an invalid GPU ID (e.g., "-1") to bypass the GPU and
-fallback to CPU.
+fallback to CPU.
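As a quick illustration of the hunk above (the environment variable names come from the diff itself; launching the server directly with `ollama serve`, rather than through a service manager, is an assumption):

```
# Expose only GPUs 0 and 2 to Ollama; IDs match the output of `nvidia-smi -L`
CUDA_VISIBLE_DEVICES=0,2 ollama serve

# For Radeon GPUs, the equivalent variable is HIP_VISIBLE_DEVICES
HIP_VISIBLE_DEVICES=0 ollama serve

# An invalid ID such as "-1" bypasses the GPUs and falls back to the CPU
CUDA_VISIBLE_DEVICES=-1 ollama serve
```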