Michael Chiang
c149fc3143
Update README.md
2023-08-16 22:54:55 -04:00
Michael Chiang
afbc763dac
adding a link to models directly available on ollama (#366)
- adding a link to models directly available on ollama
- the ability to push your own models to the library will come in the future
2023-08-16 22:53:27 -04:00
Michael Yang
5dfe91be8b
reimplement chunked uploads
2023-08-16 14:50:24 -07:00
Michael Yang
9f944c00f1
push: retry on unauthorized
2023-08-16 11:35:33 -07:00
Michael Yang
56e87cecb1
images: remove body copies
2023-08-16 10:30:41 -07:00
Jeffrey Morgan
5ee6116420
set default OLLAMA_HOST to http://localhost:11434
2023-08-16 12:22:59 -04:00
Michael Yang
5d9a4cd251
Merge pull request #348 from jmorganca/cross-repo-mount
cross repo blob mount
2023-08-16 09:20:36 -07:00
Michael Yang
0ebec07569
Merge pull request #345 from jmorganca/exit-non-zero
set non-zero error code on error
2023-08-16 09:20:28 -07:00
Matt Williams
08265515b3
Merge pull request #303 from jmorganca/matt/dockerit
DockerIt example
2023-08-16 08:04:34 -07:00
Blake Mizerany
67e593e355
cmd: support OLLAMA_HOST environment variable (#262)
* cmd: support OLLAMA_HOST environment variable
This commit adds support for the OLLAMA_HOST environment
variable. This variable can be used to specify the host to which
the client should connect. This is useful when the client is
running somewhere other than the host where the server is running.
The new api.FromEnv function is used to configure clients from the
environment. Clients that want to handle the environment variable
consistently with the Ollama CLI can use this new function (a minimal
usage sketch follows this entry).
* Update api/client.go
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
* Update api/client.go
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
---------
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
2023-08-16 11:03:48 -04:00
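A minimal Go sketch of the client configuration described in the commit above, combined with the http://localhost:11434 default from the earlier "set default OLLAMA_HOST" commit. It assumes api.FromEnv returns a configured (*api.Client, error); the exact signature in api/client.go may differ.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/jmorganca/ollama/api"
)

func main() {
	// Point the client at a remote server. When OLLAMA_HOST is unset, the
	// client is assumed to fall back to the default http://localhost:11434.
	os.Setenv("OLLAMA_HOST", "http://192.168.1.50:11434")

	// api.FromEnv reads OLLAMA_HOST and configures a client consistent with
	// how the ollama CLI resolves its server address (assumed here to return
	// (*api.Client, error)).
	client, err := api.FromEnv()
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("configured client: %#v\n", client)
}
```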
Jeffrey Morgan
d15c7622b9
Update orca to orca-mini in README.md
2023-08-15 21:10:28 -04:00
Bruce MacDonald
1deb35ca64
use loaded llm for generating model file embeddings
2023-08-15 16:12:02 -03:00
Bruce MacDonald
e2de886831
do not regenerate embeddings
2023-08-15 16:10:22 -03:00
Bruce MacDonald
f0d7c2f5ea
retry download on network errors
2023-08-15 15:07:19 -03:00
Bruce MacDonald
12052a7624
always remove from in progress map on download
2023-08-15 13:20:32 -03:00
Bruce MacDonald
23e1da778d
Add context to api docs
2023-08-15 11:43:22 -03:00
Bruce MacDonald
326de48930
use loaded llm for embeddings
2023-08-15 10:50:54 -03:00
Bruce MacDonald
18f2cb0472
don't log fatal
2023-08-15 10:39:59 -03:00
Bruce MacDonald
53bc36d207
Update modelfile.md
2023-08-15 09:23:36 -03:00
Michael Yang
4dcf5c3e0b
Merge pull request #349 from jmorganca/close-files
close open files
2023-08-14 16:15:58 -07:00
Michael Yang
d1b2f532b9
Merge pull request #350 from jmorganca/update-llama-cpp
update llama.cpp
2023-08-14 16:15:51 -07:00
Michael Yang
e26085b921
close open files
2023-08-14 16:08:06 -07:00
Michael Yang
f7b613332c
update llama.cpp
2023-08-14 15:47:00 -07:00
Michael Yang
f594c8eb91
cross repo mount
2023-08-14 15:07:35 -07:00
Michael Yang
76b85bc0e9
set non-zero error code on error
2023-08-14 14:09:58 -07:00
Bruce MacDonald
af98a1773f
update python example
2023-08-14 16:38:44 -03:00
Bruce MacDonald
9ae9a89883
Update modelfile.md
2023-08-14 16:26:53 -03:00
Bruce MacDonald
648f0974c6
python example
2023-08-14 15:27:13 -03:00
Bruce MacDonald
fc5230dffa
Add context to api docs
2023-08-14 15:23:24 -03:00
Bruce MacDonald
2ab20095b3
log embedding eval timing
2023-08-14 12:15:55 -04:00
Bruce MacDonald
f020e1d519
always remove from in progress map on download
2023-08-14 13:09:20 -03:00
Bruce MacDonald
4b2d366c37
Update llama.go
2023-08-14 12:55:50 -03:00
Bruce MacDonald
56fd4e4ef2
log embedding eval timing
2023-08-14 12:51:31 -03:00
Bruce MacDonald
2c8b680b03
use file info for embeddings cache
2023-08-14 12:11:04 -03:00
Bruce MacDonald
99b6b60085
use model bin digest for embed digest
2023-08-14 11:57:12 -03:00
Bruce MacDonald
74f00474e1
Merge pull request #340 from gusanmaz/main
Update langchainpy.md
2023-08-14 09:38:42 -04:00
Bruce MacDonald
e9a9580bdd
do not regenerate embeddings
- re-use previously evaluated embeddings when possible
- change embeddings digest identifier to be based on model name and embedded file path (a small sketch of this keying scheme follows this entry)
2023-08-14 10:34:17 -03:00
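A hypothetical sketch of the digest scheme described above: the cache key for previously evaluated embeddings is derived from the model name and the embedded file's path (here also its size and modification time, in the spirit of the related "use file info for embeddings cache" commit). The helper name embeddingDigest is illustrative, not the project's actual function.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
)

// embeddingDigest derives a stable cache key from the model name and the
// embedded file, so embeddings are only regenerated when either changes.
// Illustrative helper only; not the project's real implementation.
func embeddingDigest(modelName, filePath string) (string, error) {
	info, err := os.Stat(filePath)
	if err != nil {
		return "", err
	}
	h := sha256.New()
	fmt.Fprintf(h, "%s:%s:%d:%d", modelName, filePath, info.Size(), info.ModTime().UnixNano())
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	digest, err := embeddingDigest("llama2", "docs/notes.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("embedding cache key:", digest)
}
```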
Güvenç Usanmaz
4c33a9ac67
Update langchainpy.md
Corrects the base_url value used when creating the Ollama object.
2023-08-14 12:12:56 +03:00
Jeffrey Morgan
22885aeaee
update llama.cpp to f64d44a
2023-08-12 22:47:15 -04:00
Jeffrey Morgan
ed969d2a06
add LiteLLM to README.md
2023-08-12 20:47:57 -04:00
Patrick Devine
d9cf18e28d
add maximum retries when pushing (#334)
2023-08-11 15:41:55 -07:00
Jeffrey Morgan
1556162c90
create .ollama directory if it doesn't exist
2023-08-11 15:35:55 -07:00
Jeffrey Morgan
148f0225c0
create .ollama directory if it doesn't exist
2023-08-11 15:33:11 -07:00
Matt Williams
4e07941b1e
Merge pull request #329 from jmorganca/matt/tutorials
Add tutorials for using Langchain with ollama
2023-08-11 15:19:39 -07:00
Matt Williams
202c29c21a
resolving bmacd comment
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-11 13:51:44 -07:00
Matt Williams
c1c871620a
Update docs/tutorials/langchainjs.md
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2023-08-11 13:48:46 -07:00
Matt Williams
a21a8bef56
Update docs/tutorials/langchainjs.md
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2023-08-11 13:48:35 -07:00
Matt Williams
522726228a
Update docs/tutorials.md
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2023-08-11 13:48:16 -07:00
Patrick Devine
9770e3b325
Generate private/public keypair for use w/ auth (#324)
2023-08-11 10:58:23 -07:00
Michael Yang
d617823355
Merge pull request #333 from jmorganca/off-by-one
ggml: fix off by one error
2023-08-11 10:51:06 -07:00