# Running Ollama on NVIDIA Jetson Devices
Ollama runs well on [NVIDIA Jetson Devices](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/) and should run out of the box with the standard installation instructions.
The following has been tested on [JetPack 5.1.2](https://developer.nvidia.com/embedded/jetpack), but should also work on JetPack 6.0.
- Install Ollama via the standard Linux install script (ignore the 404 error): `curl https://ollama.com/install.sh | sh`
- Pull the model you want to use (e.g., `mistral`): `ollama pull mistral`
- Start an interactive session: `ollama run mistral`
And that's it!
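
Behind the scenes, `ollama run` talks to a local server listening on port 11434, so you can also script against it. The sketch below builds a request against the standard `/api/generate` endpoint; it only prints the command rather than sending it, and the exact prompt and model name are illustrative assumptions:

```shell
#!/bin/sh
# Hedged sketch: query the local Ollama server from a script.
# Assumes `ollama pull mistral` has already been run and the
# server is listening on its default port (11434).
MODEL="mistral"                      # any model you have pulled
PROMPT="Why is the sky blue?"        # illustrative prompt
URL="http://localhost:11434/api/generate"

# Build the curl invocation; uncomment the eval on a machine
# where the Ollama server is actually running.
REQ="curl ${URL} -d '{\"model\": \"${MODEL}\", \"prompt\": \"${PROMPT}\"}'"
echo "Would run: ${REQ}"
# eval "$REQ"
```

The server streams its reply as newline-delimited JSON objects, each carrying a `response` fragment, with a final object marked `"done": true`.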
# Running Ollama in Docker
When running GPU-accelerated applications in Docker on Jetson, it is highly recommended to use the [dusty-nv jetson-containers repo](https://github.com/dusty-nv/jetson-containers), which provides prebuilt images matched to your JetPack release.
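
As a rough, untested sketch of what such an invocation looks like: Jetson containers are launched with the NVIDIA container runtime so the GPU is visible inside the container. The image name and tag below are assumptions for illustration; in practice, use the repo's own helper scripts to pick an image that matches your JetPack version:

```shell
#!/bin/sh
# Hedged sketch: run a GPU-enabled container on Jetson.
# IMAGE is a hypothetical tag -- check dusty-nv/jetson-containers
# for the image that actually matches your JetPack release.
IMAGE="dustynv/ollama:example-tag"

# --runtime nvidia exposes the Jetson GPU inside the container.
DOCKER_CMD="docker run --runtime nvidia --rm -it --network host ${IMAGE}"
echo "Would run: ${DOCKER_CMD}"
# eval "$DOCKER_CMD"   # uncomment on a Jetson with Docker installed
```

The key piece is `--runtime nvidia`, which selects the NVIDIA container runtime so CUDA is available to the containerized application.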