
# Deploy Ollama to Kubernetes

## Prerequisites

- A running Kubernetes cluster and `kubectl` configured to access it
- The `ollama` client installed locally (used in the test steps below)

## Steps

1. Create the Ollama namespace, deployment, and service:

   ```bash
   kubectl apply -f cpu.yaml
   ```

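For orientation, the resources that `cpu.yaml` applies look roughly like the sketch below: a `Namespace`, a `Deployment` running the `ollama/ollama` image, and a `Service` exposing it on port 80. The image tag and label names here are illustrative; the `cpu.yaml` in this directory is authoritative.

```yaml
# Illustrative sketch of the kind of resources cpu.yaml creates;
# refer to cpu.yaml in this directory for the actual manifest.
apiVersion: v1
kind: Namespace
metadata:
  name: ollama
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
  namespace: ollama
spec:
  selector:
    matchLabels:
      name: ollama
  template:
    metadata:
      labels:
        name: ollama
    spec:
      containers:
        - name: ollama
          image: ollama/ollama   # assumed image; pin a tag in practice
          ports:
            - containerPort: 11434   # Ollama's default listen port
---
apiVersion: v1
kind: Service
metadata:
  name: ollama
  namespace: ollama
spec:
  selector:
    name: ollama
  ports:
    - port: 80            # the port targeted by the port-forward in Test below
      targetPort: 11434
```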
## (Optional) Hardware Acceleration

Hardware acceleration in Kubernetes requires NVIDIA's [`k8s-device-plugin`](https://github.com/NVIDIA/k8s-device-plugin), which is deployed in Kubernetes as a DaemonSet. Follow the link for more details.
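Once the plugin's DaemonSet is running, GPU nodes should advertise an `nvidia.com/gpu` resource in their capacity; a quick way to check:

```bash
# Verify that GPU nodes expose the nvidia.com/gpu resource
kubectl describe nodes | grep -i "nvidia.com/gpu"
```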

Once configured, create a GPU-enabled Ollama deployment:

```bash
kubectl apply -f gpu.yaml
```
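The essential difference from the CPU manifest is a GPU resource limit on the Ollama container, which makes the scheduler place the pod on a node with an available GPU. A representative snippet (the GPU count of 1 is illustrative; see `gpu.yaml` for the actual spec):

```yaml
# Inside the Deployment's container spec
resources:
  limits:
    nvidia.com/gpu: 1   # request one GPU from the device plugin
```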

## Test

1. Port forward the Ollama service to connect and use it locally:

   ```bash
   kubectl -n ollama port-forward service/ollama 11434:80
   ```

2. Pull and run a model, for example `orca-mini:3b`:

   ```bash
   ollama run orca-mini:3b
   ```
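With the port-forward from step 1 still active, you can also exercise Ollama's HTTP API directly; for example, requesting a completion from the same model (the prompt is arbitrary):

```bash
# Send a generate request through the forwarded port
curl http://localhost:11434/api/generate -d '{
  "model": "orca-mini:3b",
  "prompt": "Why is the sky blue?"
}'
```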