ollama/docs
Daniel Hiltgen 283948c83b Adjust windows ROCm discovery
The v5 hip library returns unsupported GPUs which won't enumerate at
inference time in the runner, so this makes sure we align discovery. The
gfx906 cards are no longer supported, so we shouldn't compile with that
GPU type as it won't enumerate at runtime.
2024-07-20 15:17:50 -07:00
| Name | Last commit message | Last commit date |
|---|---|---|
| tutorials | add embed model command and fix question invoke (#4766) | 2024-06-03 22:20:48 -07:00 |
| api.md | Update api.md | 2024-06-29 16:22:49 -07:00 |
| development.md | update llama.cpp submodule to d7fd29f (#5475) | 2024-07-05 13:25:58 -04:00 |
| docker.md | Doc container usage and workaround for nvidia errors | 2024-05-09 09:26:45 -07:00 |
| faq.md | Bump ROCm on windows to 6.1.2 | 2024-07-10 11:01:22 -07:00 |
| gpu.md | Adjust windows ROCm discovery | 2024-07-20 15:17:50 -07:00 |
| import.md | Update import.md | 2024-06-17 19:44:14 -04:00 |
| linux.md | Add instructions to easily install specific versions on faq.md (#4084) | 2024-06-09 10:49:03 -07:00 |
| modelfile.md | Update 'llama2' -> 'llama3' in most places (#4116) | 2024-05-03 15:25:04 -04:00 |
| openai.md | OpenAI: Add Suffix to v1/completions (#5611) | 2024-07-16 20:50:14 -07:00 |
| README.md | Doc container usage and workaround for nvidia errors | 2024-05-09 09:26:45 -07:00 |
| troubleshooting.md | Document older win10 terminal problems | 2024-07-03 17:32:14 -07:00 |
| tutorials.md | Created tutorial for running Ollama on NVIDIA Jetson devices (#1098) | 2023-11-15 12:32:37 -05:00 |
| windows.md | Document older win10 terminal problems | 2024-07-03 17:32:14 -07:00 |