
# Deploy Ollama to Kubernetes

## Prerequisites

- A running Kubernetes cluster
- `kubectl` configured to communicate with the cluster

## Steps

1. Create the Ollama namespace, deployment, and service:

   ```shell
   kubectl apply -f cpu.yaml
   ```

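For orientation, a manifest like `cpu.yaml` typically bundles the three resources named in step 1. The sketch below is an illustration of that shape only — the image tag, replica count, and labels are assumptions, and the actual `cpu.yaml` in this directory is authoritative:

```yaml
# Hypothetical sketch of a namespace + deployment + service bundle;
# see cpu.yaml in this directory for the real manifest.
apiVersion: v1
kind: Namespace
metadata:
  name: ollama
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
  namespace: ollama
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
        - name: ollama
          image: ollama/ollama:latest
          ports:
            - containerPort: 11434  # Ollama's default API port
---
apiVersion: v1
kind: Service
metadata:
  name: ollama
  namespace: ollama
spec:
  selector:
    app: ollama
  ports:
    - port: 80          # service port used by the port-forward below
      targetPort: 11434 # container port the service routes to
```

Note the service maps port 80 to the container's 11434, which is why the port-forward command later in this README uses `11434:80`.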
## (Optional) Hardware Acceleration

Hardware acceleration in Kubernetes requires NVIDIA's [k8s-device-plugin](https://github.com/NVIDIA/k8s-device-plugin), which is deployed in Kubernetes as a DaemonSet. See that repository for details.

Once the device plugin is configured, create a GPU-enabled Ollama deployment:

```shell
kubectl apply -f gpu.yaml
```
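The GPU variant differs from the CPU manifest mainly in requesting a GPU through the device plugin's extended resource name. A hedged sketch of the relevant container fragment (`gpu.yaml` itself is authoritative, and the image tag here is an assumption):

```yaml
# Illustrative fragment only: with NVIDIA's k8s-device-plugin installed,
# a container claims a GPU via the nvidia.com/gpu extended resource.
containers:
  - name: ollama
    image: ollama/ollama:latest
    ports:
      - containerPort: 11434
    resources:
      limits:
        nvidia.com/gpu: 1  # schedules the pod onto a node with a free GPU
```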

## Test

1. Port forward the Ollama service to connect and use it locally:

   ```shell
   kubectl -n ollama port-forward service/ollama 11434:80
   ```
2. Pull and run a model, for example `orca-mini:3b`:

   ```shell
   ollama run orca-mini:3b
   ```
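With the port-forward from step 1 still running, you can also exercise the deployment over plain HTTP using Ollama's REST API, without the CLI. This assumes the forward is active on `localhost:11434` and that the model has been pulled:

```shell
# List the models the server knows about (goes through the forwarded port).
curl http://localhost:11434/api/tags

# Ask the pulled model for a completion.
curl http://localhost:11434/api/generate -d '{
  "model": "orca-mini:3b",
  "prompt": "Why is the sky blue?"
}'
```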