diff --git a/docs/faq.md b/docs/faq.md
index dffe90fd..2d7bca5e 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -97,7 +97,7 @@ Refer to the section [above](#how-do-i-configure-ollama-server) for how to set e
 
 ## How can I use Ollama with a proxy server?
 
-Ollama runs an HTTP server and can be exposed using a proxy server such as Nginx. To do so, configure the proxy to forward requests and optionally set required headers if not exposing Ollama on the network (see the previous question). For example, with Nginx:
+Ollama runs an HTTP server and can be exposed using a proxy server such as Nginx. To do so, configure the proxy to forward requests and optionally set required headers (if not exposing Ollama on the network). For example, with Nginx:
 
 ```
 server {
@@ -225,4 +225,4 @@ will be utilized by setting the environment variable `CUDA_VISIBLE_DEVICES` for
 NVIDIA cards, or `HIP_VISIBLE_DEVICES` for Radeon GPUs to a comma delimited list
 of GPU IDs. You can see the list of devices with GPU tools such as `nvidia-smi` or
 `rocminfo`. You can set to an invalid GPU ID (e.g., "-1") to bypass the GPU and
-fallback to CPU.
\ No newline at end of file
+fallback to CPU.
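
The Nginx example referenced by the first hunk is cut off at the hunk boundary. A minimal sketch of such a `server` block, assuming Ollama is listening on its default port 11434 and using `example.com` as a stand-in domain:

```
server {
    listen 80;
    server_name example.com;  # placeholder; replace with your domain or IP

    location / {
        # Forward all requests to the local Ollama server (default port 11434)
        proxy_pass http://localhost:11434;
        # Set the Host header so Ollama accepts requests when it is not
        # exposed directly on the network
        proxy_set_header Host localhost:11434;
    }
}
```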
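
The second hunk describes selecting GPUs with `CUDA_VISIBLE_DEVICES` (NVIDIA) or `HIP_VISIBLE_DEVICES` (Radeon). A short shell sketch of that usage, assuming a machine with two NVIDIA GPUs whose IDs were listed by `nvidia-smi`:

```
# Restrict Ollama to GPUs 0 and 1 (use HIP_VISIBLE_DEVICES on Radeon)
CUDA_VISIBLE_DEVICES=0,1 ollama serve

# Pass an invalid ID to bypass the GPU entirely and fall back to CPU
CUDA_VISIBLE_DEVICES=-1 ollama serve
```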