From ed969d2a066a4ac3b63de770b46f30b79435dab1 Mon Sep 17 00:00:00 2001
From: Jeffrey Morgan
Date: Sat, 12 Aug 2023 20:47:53 -0400
Subject: [PATCH] add `LiteLLM` to `README.md`

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 0828e06e..78be490d 100644
--- a/README.md
+++ b/README.md
@@ -156,6 +156,7 @@ curl -X POST http://localhost:11434/api/generate -d '{
 - [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with a question-answering [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa).
 - [Continue](https://github.com/continuedev/continue) - embeds Ollama inside Visual Studio Code. The extension lets you highlight code to add to the prompt, ask questions in the sidebar, and generate code inline.
+- [LiteLLM](https://github.com/BerriAI/litellm) - a lightweight Python package to simplify LLM API calls
 - [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot) - interact with Ollama as a chatbot on Discord.
 - [Raycast Ollama](https://github.com/MassimilianoPasquini97/raycast_ollama) - Raycast extension to use Ollama for local llama inference on Raycast.
 - [Simple HTML UI for Ollama](https://github.com/rtcfirefly/ollama-ui)