# Ollama Windows Preview

Welcome to the Ollama Windows preview.

No more WSL required!

Ollama now runs as a native Windows application, including NVIDIA and AMD Radeon GPU support. After installing Ollama Windows Preview, Ollama will run in the background and the `ollama` command line will be available in `cmd`, `powershell`, or your favorite terminal application. As usual, the Ollama API will be served on `http://localhost:11434`.
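
A quick way to confirm the server is up is to request the root endpoint, which responds with a short status string (a minimal sketch; the exact response text may vary between versions):

```powershell
# Confirm the Ollama server is listening on the default port
(Invoke-WebRequest -Uri http://localhost:11434).Content
```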

As this is a preview release, you should expect a few bugs here and there. If you run into a problem, you can reach out on Discord or file an issue. Logs will often be helpful in diagnosing the problem (see Troubleshooting below).

## System Requirements

* Windows 10 or newer, Home or Pro
* NVIDIA 452.39 or newer drivers if you have an NVIDIA card (a quick way to check is shown below)
* [AMD Radeon drivers](https://www.amd.com/en/support) if you have a Radeon card
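
If you're not sure which NVIDIA driver version you have installed, one way to check is with `nvidia-smi`, which ships with the driver (a minimal sketch; assumes `nvidia-smi` is on your PATH):

```powershell
# Print just the installed NVIDIA driver version
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```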

## API Access

Here's a quick example showing API access from `powershell`:

```powershell
(Invoke-WebRequest -Method POST -Body '{"model":"llama2", "prompt":"Why is the sky blue?", "stream": false}' -Uri http://localhost:11434/api/generate).Content | ConvertFrom-Json
```
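
The same pattern works for the other endpoints. For example, the `/api/tags` endpoint lists the models you have pulled locally; a minimal sketch that extracts each model's name and size:

```powershell
# List locally pulled models with their names and sizes (in bytes)
(Invoke-WebRequest -Uri http://localhost:11434/api/tags).Content |
    ConvertFrom-Json |
    Select-Object -ExpandProperty models |
    Select-Object name, size
```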

## Troubleshooting

While we're in preview, `OLLAMA_DEBUG` is always enabled, which adds a "view logs" menu item to the app and increases logging for the GUI app and server. A shortcut for tailing the server log from the terminal is shown after the list below.

Ollama on Windows stores files in a few different locations. You can view them in an explorer window by pressing `<Win>+R` and typing in:

* `explorer %LOCALAPPDATA%\Ollama` contains logs and downloaded updates
    * `app.log` contains logs from the GUI application
    * `server.log` contains the server logs
    * `upgrade.log` contains log output for upgrades
* `explorer %LOCALAPPDATA%\Programs\Ollama` contains the binaries (the installer adds this to your user PATH)
* `explorer %HOMEPATH%\.ollama` contains models and configuration
* `explorer %TEMP%` contains temporary executable files in one or more `ollama*` directories
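
If you prefer to stay in the terminal, you can follow the server log live with `Get-Content` (a minimal sketch using the log location listed above):

```powershell
# Tail the server log, starting from the last 50 lines, and keep watching for new entries
Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 50 -Wait
```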