docs: remove tutorials, add cloud section to community integrations (#7784)
This commit is contained in:
parent b7bddeebc1
commit 27d9c749d5

5 changed files with 10 additions and 263 deletions

README.md (13 changed lines)

@@ -316,8 +316,8 @@ See the [API documentation](./docs/api.md) for all endpoints.
- [AiLama](https://github.com/zeyoyt/ailama) (A Discord User App that allows you to interact with Ollama anywhere in Discord)
- [Ollama with Google Mesop](https://github.com/rapidarchitect/ollama_mesop/) (Mesop Chat Client implementation with Ollama)
- [R2R](https://github.com/SciPhi-AI/R2R) (Open-source RAG engine)
- [Ollama-Kis](https://github.com/elearningshow/ollama-kis) (A simple easy to use GUI with sample custom LLM for Drivers Education)
- [OpenGPA](https://opengpa.org) (Open-source offline-first Enterprise Agentic Application)
- [Painting Droid](https://github.com/mateuszmigas/painting-droid) (Painting app with AI integrations)
- [Kerlig AI](https://www.kerlig.com/) (AI writing assistant for macOS)
- [AI Studio](https://github.com/MindWorkAI/AI-Studio)

@@ -350,9 +350,15 @@ See the [API documentation](./docs/api.md) for all endpoints.
- [OpenTalkGpt](https://github.com/adarshM84/OpenTalkGpt)
- [VT](https://github.com/vinhnx/vt.ai) (A minimal multimodal AI chat app, with dynamic conversation routing. Supports local models via Ollama)
- [Nosia](https://github.com/nosia-ai/nosia) (Easy to install and use RAG platform based on Ollama)
- [Witsy](https://github.com/nbonamy/witsy) (An AI Desktop application available for Mac/Windows/Linux)
- [Abbey](https://github.com/US-Artificial-Intelligence/abbey) (A configurable AI interface server with notebooks, document storage, and YouTube support)

### Cloud

- [Google Cloud](https://cloud.google.com/run/docs/tutorials/gpu-gemma2-with-ollama)
- [Fly.io](https://fly.io/docs/python/do-more/add-ollama/)
- [Koyeb](https://www.koyeb.com/deploy/ollama)

### Terminal

- [oterm](https://github.com/ggozad/oterm)

@@ -385,6 +391,7 @@ See the [API documentation](./docs/api.md) for all endpoints.
- [orbiton](https://github.com/xyproto/orbiton) Configuration-free text editor and IDE with support for tab completion with Ollama.

### Apple Vision Pro

- [Enchanted](https://github.com/AugustDev/enchanted)

### Database

@@ -1,83 +0,0 @@
# Running Ollama on Fly.io GPU Instances

Ollama runs with little to no configuration on [Fly.io GPU instances](https://fly.io/docs/gpus/gpu-quickstart/). If you don't have access to GPUs yet, you'll need to [apply for access](https://fly.io/gpu/) on the waitlist. Once you're accepted, you'll get an email with instructions on how to get started.

Create a new app with `fly apps create`:

```bash
fly apps create
```

Then create a `fly.toml` file in a new folder that looks like this:

```toml
app = "sparkling-violet-709"
primary_region = "ord"
vm.size = "a100-40gb" # see https://fly.io/docs/gpus/gpu-quickstart/ for more info

[build]
  image = "ollama/ollama"

[http_service]
  internal_port = 11434
  force_https = false
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 0
  processes = ["app"]

[mounts]
  source = "models"
  destination = "/root/.ollama"
  initial_size = "100gb"
```

Then create a [new private IPv6 address](https://fly.io/docs/reference/private-networking/#flycast-private-load-balancing) for your app:

```bash
fly ips allocate-v6 --private
```

Then deploy your app:

```bash
fly deploy
```

And finally you can access it interactively with a new Fly.io Machine:

```bash
fly machine run -e OLLAMA_HOST=http://your-app-name.flycast --shell ollama/ollama
```

```bash
$ ollama run openchat:7b-v3.5-fp16
>>> How do I bake chocolate chip cookies?
To bake chocolate chip cookies, follow these steps:

1. Preheat the oven to 375°F (190°C) and line a baking sheet with parchment paper or silicone baking mat.

2. In a large bowl, mix together 1 cup of unsalted butter (softened), 3/4 cup granulated sugar, and 3/4
cup packed brown sugar until light and fluffy.

3. Add 2 large eggs, one at a time, to the butter mixture, beating well after each addition. Stir in 1
teaspoon of pure vanilla extract.

4. In a separate bowl, whisk together 2 cups all-purpose flour, 1/2 teaspoon baking soda, and 1/2 teaspoon
salt. Gradually add the dry ingredients to the wet ingredients, stirring until just combined.

5. Fold in 2 cups of chocolate chips (or chunks) into the dough.

6. Drop rounded tablespoons of dough onto the prepared baking sheet, spacing them about 2 inches apart.

7. Bake for 10-12 minutes, or until the edges are golden brown. The centers should still be slightly soft.

8. Allow the cookies to cool on the baking sheet for a few minutes before transferring them to a wire rack
to cool completely.

Enjoy your homemade chocolate chip cookies!
```

When you set it up like this, it will automatically turn off when you're done using it. Then when you access it again, it will automatically turn back on. This is a great way to save money on GPU instances when you're not using them. If you want a persistent wake-on-use connection to your Ollama instance, you can set up a [connection to your Fly network using WireGuard](https://fly.io/docs/reference/private-networking/#discovering-apps-through-dns-on-a-wireguard-connection). Then you can access your Ollama instance at `http://your-app-name.flycast`.
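For example, once the WireGuard connection is up, a minimal sketch of pointing the local Ollama CLI at the Flycast address (the app name below is the placeholder from the `fly.toml` above; replace it with your own):

```bash
# Assumes an active WireGuard connection to your Fly private network
export OLLAMA_HOST=http://sparkling-violet-709.flycast   # replace with your app name
ollama run openchat:7b-v3.5-fp16
```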
And that's it!

@@ -1,77 +0,0 @@
# Using LangChain with Ollama using JavaScript

In this tutorial, we are going to use JavaScript with LangChain and Ollama to learn about something just a touch more recent. In August 2023, there was a series of wildfires on Maui. There is no way an LLM trained before that time can know about this, since their training data would not include anything as recent as that. So we can find the [Wikipedia article about the fires](https://en.wikipedia.org/wiki/2023_Hawaii_wildfires) and ask questions about the contents.

To get started, let's just use **LangChain** to ask a simple question to a model. To do this with JavaScript, we need to install **LangChain**:

```bash
npm install @langchain/community
```

Now we can start building out our JavaScript:

```javascript
import { Ollama } from "@langchain/community/llms/ollama";

const ollama = new Ollama({
  baseUrl: "http://localhost:11434",
  model: "llama3.2",
});

const answer = await ollama.invoke(`why is the sky blue?`);

console.log(answer);
```

That will get us the same thing as if we ran `ollama run llama3.2 "why is the sky blue"` in the terminal. But we want to load a document from the web to ask a question against. **Cheerio** is a great library for ingesting a webpage, and **LangChain** uses it in its **CheerioWebBaseLoader**. So let's install **Cheerio** and build that part of the app.

```bash
npm install cheerio
```

```javascript
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";

const loader = new CheerioWebBaseLoader("https://en.wikipedia.org/wiki/2023_Hawaii_wildfires");
const data = await loader.load();
```

That will load the document. Although this page is smaller than the Odyssey, it is certainly bigger than the context size for most LLMs. So we are going to need to split it into smaller pieces, and then select just the pieces relevant to our question. This is a great use for a vector datastore. In this example, we will use the **MemoryVectorStore** that is part of **LangChain**. But there is one more thing we need to get the content into the datastore: we have to run an embeddings process that converts the tokens in the text into a series of vectors. And for that, we are going to use **TensorFlow**. There is a lot going on in this step. First, install the **TensorFlow** components that we need.

```bash
npm install @tensorflow/tfjs-core@3.6.0 @tensorflow/tfjs-converter@3.6.0 @tensorflow-models/universal-sentence-encoder@1.3.3 @tensorflow/tfjs-node@4.10.0
```

If you just install those components without the version numbers, it will install the latest versions, but there are conflicts within **TensorFlow**, so you need to install these compatible versions.

```javascript
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import "@tensorflow/tfjs-node";
import { TensorFlowEmbeddings } from "langchain/embeddings/tensorflow";

// Split the text into 500-character chunks, overlapping each chunk by 20 characters
const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 20,
});
const splitDocs = await textSplitter.splitDocuments(data);

// Then use the TensorFlow embeddings to store these chunks in the datastore
const vectorStore = await MemoryVectorStore.fromDocuments(splitDocs, new TensorFlowEmbeddings());
```

To connect the datastore to a question asked of an LLM, we need to use the concept at the heart of **LangChain**: the chain. Chains are a way to connect a number of activities together to accomplish a particular task. There are a number of chain types available, but for this tutorial we are using the **RetrievalQAChain**.

```javascript
import { RetrievalQAChain } from "langchain/chains";

const retriever = vectorStore.asRetriever();
const chain = RetrievalQAChain.fromLLM(ollama, retriever);
const result = await chain.call({ query: "When was Hawaii's request for a major disaster declaration approved?" });
console.log(result.text);
```

So we created a retriever, which is a way to return the chunks that match a query from a datastore, and then connected the retriever and the model via a chain. Finally, we send a query to the chain, which results in an answer using our document as a source. The answer it returned was correct: August 10, 2023.

And that is a simple introduction to what you can do with **LangChain** and **Ollama**.

@@ -1,85 +0,0 @@
# Using LangChain with Ollama in Python

Let's imagine we are studying the classics, such as **the Odyssey** by **Homer**. We might have a question about Neleus and his family. If you ask llama2 for that info, you may get something like:

> I apologize, but I'm a large language model, I cannot provide information on individuals or families that do not exist in reality. Neleus is not a real person or character, and therefore does not have a family or any other personal details. My apologies for any confusion. Is there anything else I can help you with?

This sounds like a typical censored response, but even llama2-uncensored gives a mediocre answer:

> Neleus was a legendary king of Pylos and the father of Nestor, one of the Argonauts. His mother was Clymene, a sea nymph, while his father was Neptune, the god of the sea.

So let's figure out how we can use **LangChain** with Ollama to ask our question to the actual document, the Odyssey by Homer, using Python.

Let's start by asking a simple question that we can get an answer to from the **Llama3** model using **Ollama**. First, we need to install the **LangChain** package:

`pip install langchain_community`

Then we can create a model and ask the question:

```python
from langchain_community.llms import Ollama

ollama = Ollama(
    base_url='http://localhost:11434',
    model="llama3"
)
print(ollama.invoke("why is the sky blue"))
```

Notice that we are defining the model and the base URL for Ollama.

Now let's load a document to ask questions against. I'll load up the Odyssey by Homer, which you can find at Project Gutenberg. We will need **WebBaseLoader**, which is part of **LangChain** and loads text from any webpage. On my machine, I also needed to install **bs4** to get that to work, so run `pip install bs4`.

```python
from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://www.gutenberg.org/files/1727/1727-h/1727-h.htm")
data = loader.load()
```

This file is pretty big. Just the preface is 3000 tokens, which means the full document won't fit into the context for the model. So we need to split it up into smaller pieces.

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)
```

It's split up, but we have to find the relevant splits and then submit those to the model. We can do this by creating embeddings and storing them in a vector database. We can use Ollama directly to instantiate an embedding model, and we will use ChromaDB as the vector database in this example: `pip install chromadb`

We also need to pull an embedding model: `ollama pull nomic-embed-text`

```python
from langchain.embeddings import OllamaEmbeddings
from langchain.vectorstores import Chroma

oembed = OllamaEmbeddings(base_url="http://localhost:11434", model="nomic-embed-text")
vectorstore = Chroma.from_documents(documents=all_splits, embedding=oembed)
```

Now let's ask a question from the document. **Who was Neleus, and who is in his family?** Neleus is a character in the Odyssey, and the answer can be found in our text.

```python
question = "Who is Neleus and who is in Neleus' family?"
docs = vectorstore.similarity_search(question)
len(docs)
```

This will output the number of chunks that are similar to the search query.
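If you want to peek at what came back, here is a quick sketch (assuming the standard LangChain `Document` objects returned by `similarity_search`):

```python
# Print how many chunks matched, then preview the first matching chunk
print(len(docs))
print(docs[0].page_content[:300])
```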

The next thing is to send the question and the relevant parts of the docs to the model to see if we can get a good answer. But we are stitching two parts of the process together, and that is called a chain. This means we need to define a chain:

```python
from langchain.chains import RetrievalQA

qachain = RetrievalQA.from_chain_type(ollama, retriever=vectorstore.as_retriever())
res = qachain.invoke({"query": question})
print(res['result'])
```

The answer received from this chain was:

> Neleus is a character in Homer's "Odyssey" and is mentioned in the context of Penelope's suitors. Neleus is the father of Chloris, who is married to Neleus and bears him several children, including Nestor, Chromius, Periclymenus, and Pero. Amphinomus, the son of Nisus, is also mentioned as a suitor of Penelope and is known for his good natural disposition and agreeable conversation.

It's not a perfect answer, as it implies Neleus married his daughter when actually Chloris "was the youngest daughter to Amphion son of Iasus and king of Minyan Orchomenus, and was Queen in Pylos".

I updated the chunk_overlap for the text splitter to 20 and tried again and got a much better answer:

> Neleus is a character in Homer's epic poem "The Odyssey." He is the husband of Chloris, who is the youngest daughter of Amphion son of Iasus and king of Minyan Orchomenus. Neleus has several children with Chloris, including Nestor, Chromius, Periclymenus, and Pero.

And that is a much better answer.
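For reference, a minimal sketch of the one change behind that improvement: the same splitter as before, with the overlap raised to 20.

```python
# Same splitter as above, but overlapping adjacent chunks by 20 characters
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=20)
all_splits = text_splitter.split_documents(data)
```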

@@ -1,15 +0,0 @@
# Running Ollama on NVIDIA Jetson Devices

Ollama runs well on [NVIDIA Jetson Devices](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/) and should run out of the box with the standard installation instructions.

The following has been tested on [JetPack 5.1.2](https://developer.nvidia.com/embedded/jetpack), but should also work on JetPack 6.0.

- Install Ollama via the standard Linux command (ignore the 404 error): `curl https://ollama.com/install.sh | sh`
- Pull the model you want to use (e.g. mistral): `ollama pull mistral`
- Start an interactive session: `ollama run mistral`
And that's it!
# Running Ollama in Docker

When running GPU-accelerated applications in Docker, it is highly recommended to use the [dusty-nv jetson-containers repo](https://github.com/dusty-nv/jetson-containers).
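As a rough sketch of what that can look like (this assumes the repo's `install.sh` and `autotag` helpers and that an `ollama` container is published for your JetPack/L4T version; check the repo for the current, authoritative instructions):

```bash
# Install the jetson-containers tooling (assumed helper scripts from the repo)
git clone https://github.com/dusty-nv/jetson-containers
bash jetson-containers/install.sh

# autotag picks a container image matching your JetPack/L4T version (assumed helper)
jetson-containers run $(autotag ollama)
```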