From 92578798bb1abcedd6bc99479d804f32d9ee2f6c Mon Sep 17 00:00:00 2001
From: Jeffrey Morgan
Date: Wed, 11 Oct 2023 19:24:06 -0400
Subject: [PATCH] fix relative links in `README.md`

---
 README.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index caca32f6..d24d46fe 100644
--- a/README.md
+++ b/README.md
@@ -13,7 +13,7 @@ Get up and running with large language models locally.
 
 ### macOS
 
-[Download](https://ollama.ai/download/Ollama-darwin.zip)
+[Download](https://ollama.ai/download/Ollama-darwin.zip)
 
 ### Linux & WSL2
 
@@ -23,7 +23,7 @@ curl https://ollama.ai/install.sh | sh
 ```
 
 [Manual install instructions](https://github.com/jmorganca/ollama/blob/main/docs/linux.md)
 
-### Windows
+### Windows
 
 coming soon
 
@@ -37,7 +37,7 @@ ollama run llama2
 ```
 
 ## Model library
 
-Ollama supports a list of open-source models available on [ollama.ai/library](https://ollama.ai/library "ollama model library")
+Ollama supports a list of open-source models available on [ollama.ai/library](https://ollama.ai/library 'ollama model library')
 
 Here are some example open-source models that can be downloaded:
 
@@ -66,13 +66,13 @@ Ollama supports importing GGUF and GGML file formats in the Modelfile. This mean
 FROM ./vicuna-33b.Q4_0.gguf
 ```
 
-3. Create the model in Ollama
+2. Create the model in Ollama
 
 ```
 ollama create name -f path_to_modelfile
 ```
 
-5. Run the model
+3. Run the model
 
 ```
 ollama run name
 ```
 
@@ -109,7 +109,7 @@ ollama run mario
 Hello! It's your friend Mario.
 ```
 
-For more examples, see the [examples](./examples) directory. For more information on working with a Modelfile, see the [Modelfile](./docs/modelfile.md) documentation.
+For more examples, see the [examples](examples) directory. For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.
 
 ## CLI Reference
 
@@ -195,7 +195,7 @@ Finally, in a separate shell, run a model:
 
 ## REST API
 
-> See the [API documentation](./docs/api.md) for all endpoints.
+> See the [API documentation](docs/api.md) for all endpoints.
 
 Ollama has an API for running and managing models. For example to generate text from a model:
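
The patch's final context line ("For example to generate text from a model:") introduces a request the hunk cuts off. Not part of this patch; a minimal sketch of such a call, assuming a local Ollama server on its default port 11434 with the `llama2` model already pulled:

```shell
# Generate text from a model via Ollama's REST API.
# Assumes: ollama serve is running locally on the default port 11434,
# and `ollama pull llama2` has completed.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```

The response is streamed back as a series of JSON objects, one per generated token, so the command prints output incrementally until the model finishes.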