ollama/docs

Latest commit: 6606e4243c by Daniel Hiltgen (2024-11-12 09:12:50 -08:00)
docs: Capture docker cgroup workaround (#7519)
GPU support can break on some systems after a while. This captures a known workaround to solve the problem.
Name                 Last commit message                                                    Last commit date
images               Fix import image width (#6528)                                         2024-08-27 14:19:47 -07:00
tutorials            docs: update langchainpy.md with proper model name (#7527)             2024-11-08 09:36:17 -08:00
api.md               runner.go: Remove unused arguments                                     2024-11-06 13:32:18 -08:00
development.md       docs: OLLAMA_NEW_RUNNERS no longer exists                              2024-11-06 14:39:02 -08:00
docker.md            update default model to llama3.2 (#6959)                               2024-09-25 11:11:22 -07:00
faq.md               update default model to llama3.2 (#6959)                               2024-09-25 11:11:22 -07:00
gpu.md               Better support for AMD multi-GPU on linux (#7212)                      2024-10-26 14:04:14 -07:00
import.md            docs: add mentions of Llama 3.2 (#7517)                                2024-11-10 19:04:23 -08:00
linux.md             docs: improve linux install documentation (#6683)                      2024-09-06 22:05:37 -07:00
modelfile.md         docs: add mentions of Llama 3.2 (#7517)                                2024-11-10 19:04:23 -08:00
openai.md            fix #7247 - invalid image input (#7249)                                2024-10-23 10:31:04 -07:00
README.md            Doc container usage and workaround for nvidia errors                   2024-05-09 09:26:45 -07:00
template.md          update default model to llama3.2 (#6959)                               2024-09-25 11:11:22 -07:00
troubleshooting.md   docs: Capture docker cgroup workaround (#7519)                         2024-11-12 09:12:50 -08:00
tutorials.md         Created tutorial for running Ollama on NVIDIA Jetson devices (#1098)   2023-11-15 12:32:37 -05:00
windows.md           Move windows app out of preview (#7347)                                2024-10-30 09:24:59 -07:00