diff --git a/README.md b/README.md
index 43dadd43..5adf1044 100644
--- a/README.md
+++ b/README.md
@@ -147,7 +147,7 @@ curl -X POST http://localhost:11434/api/generate -d '{
 
 ## Tools using Ollama
 
-- [LangChain](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa) integration - Set up all local, JS-based retrival + QA over docs in 5 minutes.
+- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with a question-answering [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa).
 - [Continue](https://github.com/continuedev/continue) - embeds Ollama inside Visual Studio Code. The extension lets you highlight code to add to the prompt, ask questions in the sidebar, and generate code inline.
 - [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot) - interact with Ollama as a chatbot on Discord.
 - [Raycast Ollama](https://github.com/MassimilianoPasquini97/raycast_ollama) - Raycast extension to use Ollama for local llama inference on Raycast.
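
For context on the LangChain entry this patch adds, here is a minimal sketch of driving a local Ollama server through LangChain's Python `Ollama` LLM wrapper. It assumes Ollama is running on its default port (11434) and that a model such as `llama2` has already been pulled; the model name and prompt are illustrative and are not part of the diff.

```python
# Minimal sketch: LangChain's Ollama LLM integration (langchain-era API as
# linked in the patch). Assumes a local Ollama server at the default
# http://localhost:11434 and a pulled "llama2" model -- both are assumptions,
# not part of the patch itself.
from langchain.llms import Ollama

llm = Ollama(base_url="http://localhost:11434", model="llama2")

# Under the hood the wrapper calls the /api/generate endpoint shown in the
# hunk header and returns the generated completion as a string.
print(llm("Why is the sky blue?"))
```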