ollama/docs

Latest commit: 20f6c06569 by Daniel Hiltgen, "Make maximum pending request configurable" (2024-05-04 21:00:52 -07:00). This also bumps the default from 10 to 50 queued requests.
| Name | Last commit message | Date |
| --- | --- | --- |
| tutorials | Update 'llama2' -> 'llama3' in most places (#4116) | 2024-05-03 15:25:04 -04:00 |
| api.md | Update 'llama2' -> 'llama3' in most places (#4116) | 2024-05-03 15:25:04 -04:00 |
| development.md | chore: fix typo in docs/development.md (#4073) | 2024-05-01 15:39:11 -04:00 |
| faq.md | Make maximum pending request configurable | 2024-05-04 21:00:52 -07:00 |
| gpu.md | Add docs for GPU selection and nvidia uvm workaround | 2024-03-21 11:52:54 +01:00 |
| import.md | Update import.md | 2024-02-22 02:08:03 -05:00 |
| linux.md | Finish unwinding idempotent payload logic | 2024-03-09 08:34:39 -08:00 |
| modelfile.md | Update 'llama2' -> 'llama3' in most places (#4116) | 2024-05-03 15:25:04 -04:00 |
| openai.md | Update 'llama2' -> 'llama3' in most places (#4116) | 2024-05-03 15:25:04 -04:00 |
| README.md | Update README.md | 2024-03-13 21:12:17 -07:00 |
| troubleshooting.md | Safeguard for noexec | 2024-04-01 16:48:33 -07:00 |
| tutorials.md | Created tutorial for running Ollama on NVIDIA Jetson devices (#1098) | 2023-11-15 12:32:37 -05:00 |
| windows.md | Explain the 2 different windows download options | 2024-05-04 12:50:05 -07:00 |