Deploy Ollama to Kubernetes

Prerequisites

  - A running Kubernetes cluster, with kubectl configured to connect to it

Steps

  1. Create the Ollama namespace, daemon set, and service

    kubectl apply -f cpu.yaml
    
  2. Port forward the Ollama service to connect and use it locally

    kubectl -n ollama port-forward service/ollama 11434:80
    
  3. Pull and run a model, for example, orca-mini:3b

    ollama run orca-mini:3b
    

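The three steps above can be sketched as a single script. The `ollama` namespace and service names come from `cpu.yaml`; the daemon set name and the `curl` reachability check are assumptions about the manifest and the Ollama HTTP API:

```shell
# Deploy the namespace, daemon set, and service defined in cpu.yaml
kubectl apply -f cpu.yaml

# Wait for the Ollama pods to become Ready (assumes the daemon set is named "ollama")
kubectl -n ollama rollout status daemonset/ollama

# Forward local port 11434 to the service (kept in the background here)
kubectl -n ollama port-forward service/ollama 11434:80 &

# Verify the server is reachable, then pull and run a model
curl http://localhost:11434/
ollama run orca-mini:3b
```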
(Optional) Hardware Acceleration

Hardware acceleration in Kubernetes requires NVIDIA's k8s-device-plugin (https://github.com/NVIDIA/k8s-device-plugin). See that project for installation details.

Once the device plugin is configured, create a GPU-enabled Ollama deployment.

kubectl apply -f gpu.yaml
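After applying the manifest, checks like these can confirm that the cluster exposes GPUs and that the pod scheduled. `nvidia.com/gpu` is the resource name the NVIDIA device plugin advertises; the `ollama` namespace is an assumption based on the manifests above:

```shell
# Check that nodes advertise the NVIDIA GPU resource
kubectl describe nodes | grep -i nvidia.com/gpu

# Confirm the GPU-enabled Ollama pod scheduled and is running
kubectl -n ollama get pods -o wide
```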