docs: format with prettier

parent 34a88cd776
commit 1f78e409b4

2 changed files with 8 additions and 11 deletions
@@ -61,6 +61,7 @@ Pull a base model:

```
ollama pull llama2
```

> To update a model to the latest version, run `ollama pull llama2` again. The model will be updated (if necessary).

Create a `Modelfile`:
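The hunk above ends at "Create a `Modelfile`:". For context, a minimal Modelfile might look like the sketch below; the contents are an assumed example based on typical Ollama usage, not part of this diff:

```
FROM llama2
SYSTEM "You are a helpful assistant."
```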
@@ -151,7 +152,7 @@ curl -X POST http://localhost:11434/api/create -d '{"name": "my-model", "path":

## Tools using Ollama

- [LangChain](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa) integration - Set up all local, JS-based retrieval + QA over docs in 5 minutes.
- [Continue](https://github.com/continuedev/continue) - embeds Ollama inside Visual Studio Code. The extension lets you highlight code to add to the prompt, ask questions in the sidebar, and generate code inline.
- [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot) - interact with Ollama as a chatbot on Discord.
- [Raycast Ollama](https://github.com/MassimilianoPasquini97/raycast_ollama) - Raycast extension to use Ollama for local llama inference on Raycast.
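The hunk header above references the `/api/create` endpoint. A minimal sketch of calling it might look like the following; it assumes a running Ollama server on the default port, and the Modelfile path is a hypothetical placeholder, not a value from this diff:

```shell
# Create a model named "my-model" from a Modelfile on disk.
# /path/to/Modelfile is a hypothetical example; point it at your own file.
curl -X POST http://localhost:11434/api/create \
  -d '{"name": "my-model", "path": "/path/to/Modelfile"}'
```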
@@ -30,19 +30,15 @@ Now you can run `ollama`:

To release a new version of Ollama you'll need to set some environment variables:

-* `GITHUB_TOKEN`: your GitHub token
-* `APPLE_IDENTITY`: the Apple signing identity (macOS only)
-* `APPLE_ID`: your Apple ID
-* `APPLE_PASSWORD`: your Apple ID app-specific password
-* `APPLE_TEAM_ID`: the Apple team ID for the signing identity
-* `TELEMETRY_WRITE_KEY`: segment write key for telemetry
+- `GITHUB_TOKEN`: your GitHub token
+- `APPLE_IDENTITY`: the Apple signing identity (macOS only)
+- `APPLE_ID`: your Apple ID
+- `APPLE_PASSWORD`: your Apple ID app-specific password
+- `APPLE_TEAM_ID`: the Apple team ID for the signing identity
+- `TELEMETRY_WRITE_KEY`: segment write key for telemetry

Then run the publish script with the target version:

```
VERSION=0.0.2 ./scripts/publish.sh
```
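Putting the release steps together, a sketch of the full flow might look like the following; all values are placeholders, and the script path is taken from the docs above:

```shell
# Set the release environment variables described in the docs.
# Every value here is a placeholder, not a real credential.
export GITHUB_TOKEN="<your GitHub token>"
export APPLE_IDENTITY="<Apple signing identity>"   # macOS only
export APPLE_ID="<your Apple ID>"
export APPLE_PASSWORD="<app-specific password>"
export APPLE_TEAM_ID="<Apple team ID>"
export TELEMETRY_WRITE_KEY="<segment write key>"

# The VERSION=... prefix sets the variable for this one command only.
VERSION=0.0.2 ./scripts/publish.sh
```

Note that the `VERSION=0.0.2` prefix scopes the variable to the single `publish.sh` invocation rather than exporting it to the whole shell session.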
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
Loading…
Reference in a new issue