Tweaks to README.md (#906)

* Mentioned Docker Hub in docs
* Consolidated brew installs to one line
James Braza 2023-10-27 00:10:23 -07:00 committed by GitHub
parent 2665f3c28e
commit e5d1ce4dde


@@ -29,7 +29,8 @@ curl https://ollama.ai/install.sh | sh
 ### Docker
-See the official [Docker image](https://hub.docker.com/r/ollama/ollama).
+The official [Ollama Docker image `ollama/ollama`](https://hub.docker.com/r/ollama/ollama)
+is available on Docker Hub.
 ## Quickstart
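As a quick illustration of the Docker Hub note above: only the image name `ollama/ollama` comes from this diff; the pull/run invocation below is an assumed, typical setup (volume path, port, and container name are placeholders), not something this commit specifies.

```shell
# Image name taken from the diff; everything else is an assumption.
image="ollama/ollama"

echo "docker pull $image"
# A common way to run it locally (unverified against this commit's README):
# docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama "$image"
```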
@@ -178,8 +179,7 @@ ollama list
 Install `cmake` and `go`:
 ```
-brew install cmake
-brew install go
+brew install cmake go
 ```
 Then generate dependencies and build:
@@ -203,9 +203,8 @@ Finally, in a separate shell, run a model:
 ## REST API
-See the [API documentation](docs/api.md) for all endpoints.
-Ollama has an API for running and managing models. For example to generate text from a model:
+Ollama has a REST API for running and managing models.
+For example, to generate text from a model:
 ```
 curl -X POST http://localhost:11434/api/generate -d '{
@@ -214,6 +213,8 @@ curl -X POST http://localhost:11434/api/generate -d '{
 }'
 ```
+See the [API documentation](./docs/api.md) for all endpoints.
 ## Community Integrations
 - [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
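The diff elides the JSON body of the curl example between the two hunks, and it stays elided here. Purely as an illustration of the request shape, a body for `POST /api/generate` might look like the sketch below; the model name and prompt are placeholders, not taken from this commit.

```shell
# Placeholder request body; field values are assumptions, not from the diff.
payload='{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'

# With a local Ollama server on the default port, the request would be:
# curl -X POST http://localhost:11434/api/generate -d "$payload"
echo "$payload"
```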