Michael Yang
1901044b07
use checksum reference
2023-11-15 15:16:23 -08:00
Michael Yang
d660eebf22
fix create from model tag
2023-11-15 15:16:23 -08:00
Michael Yang
cac11c9137
update api docs
2023-11-15 15:16:23 -08:00
Michael Yang
a07c935d34
ignore non blobs
2023-11-15 15:16:23 -08:00
Michael Yang
1552cee59f
client create modelfile
2023-11-15 15:16:23 -08:00
Michael Yang
3ca56b5ada
add create modelfile field
2023-11-15 15:16:23 -08:00
Michael Yang
b0d14ed51c
refactor create model
2023-11-15 15:16:23 -08:00
Matt Williams
f61f340279
FAQ: answer a few faq questions (#1128)
* faq: does ollama share my prompts
Signed-off-by: Matt Williams <m@technovangelist.com>
* faq: ollama and openai
Signed-off-by: Matt Williams <m@technovangelist.com>
* faq: vscode plugins
Signed-off-by: Matt Williams <m@technovangelist.com>
* faq: send a doc to Ollama
Signed-off-by: Matt Williams <m@technovangelist.com>
* extra spacing
Signed-off-by: Matt Williams <m@technovangelist.com>
* Update faq.md
* Update faq.md
---------
Signed-off-by: Matt Williams <m@technovangelist.com>
Co-authored-by: Michael <mchiang0610@users.noreply.github.com>
2023-11-15 18:05:13 -05:00
Michael Yang
686f85d6ca
Merge pull request #1132 from jmorganca/mxyng/human-bytes
replace go-humanize with format.HumanBytes
2023-11-15 09:46:21 -08:00
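For context, this swap replaces the third-party go-humanize dependency with a small in-repo helper. Below is a minimal sketch of what such a byte formatter looks like; it is illustrative only, and the actual format.HumanBytes may differ in units and rounding.

```go
package main

import "fmt"

// humanBytes is a sketch of a go-humanize replacement: it renders a byte
// count with a decimal (SI) unit suffix, e.g. 1404004800 -> "1.4 GB".
// Hypothetical; the repo's format.HumanBytes may behave differently.
func humanBytes(b int64) string {
	const unit = 1000
	if b < unit {
		return fmt.Sprintf("%d B", b)
	}
	div, exp := int64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "KMGTPE"[exp])
}

func main() {
	fmt.Println(humanBytes(1404004800)) // 1.4 GB
}
```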
bnodnarb
85951d25ef
Created tutorial for running Ollama on NVIDIA Jetson devices (#1098)
2023-11-15 12:32:37 -05:00
Michael Yang
01ea6002c4
replace go-humanize with format.HumanBytes
2023-11-14 14:57:41 -08:00
Jeffrey Morgan
423862042a
treat `ollama run model < file` as entire prompt, not prompt-per-line (#1126)
Previously, `ollama run` treated a non-terminal stdin (such as `ollama run model < file`) as containing one prompt per line. To run inference on a multi-line prompt, the only non-API workaround was to run `ollama run` interactively and wrap the prompt in `"""..."""`.
Now, `ollama run` treats a non-terminal stdin as containing a single prompt. For example, if `myprompt.txt` is a multi-line file, then `ollama run model < myprompt.txt` would treat `myprompt.txt`'s entire contents as the prompt.
Co-authored-by: Quinn Slack <quinn@slack.org>
2023-11-14 16:42:21 -05:00
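The change itself reduces to a small pattern, sketched below with illustrative code rather than the repo's actual CLI source: when stdin is not a terminal, read it in full and pass the whole contents along as one prompt.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	// If stdin is not a terminal (e.g. `ollama run model < file`), read it
	// in full and treat the whole contents as a single prompt, instead of
	// the old behavior of running one generation per line.
	info, err := os.Stdin.Stat()
	if err != nil {
		panic(err)
	}
	if info.Mode()&os.ModeCharDevice == 0 { // stdin is redirected, not a TTY
		data, err := io.ReadAll(os.Stdin)
		if err != nil {
			panic(err)
		}
		prompt := string(data)
		fmt.Printf("single prompt (%d bytes): %q\n", len(prompt), prompt)
	}
}
```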
Bruce MacDonald
df18486c35
Move /generate format to optional parameters (#1127)
This field is optional and should be under the `Advanced parameters` header
2023-11-14 16:12:30 -05:00
Jeffrey Morgan
4e612a2e92
use stdout fd for terminal size (#1125)
2023-11-14 16:09:09 -05:00
Jeffrey Morgan
6e0f686afa
`--format json` should work in interactive mode
2023-11-14 10:22:03 -05:00
Jeffrey Morgan
c1844bbee2
add json mode to cli (#1095)
2023-11-13 21:54:02 -05:00
Huy Le
cb745965ce
adding ollama.nvim for visibility (#1115)
2023-11-13 17:00:17 -05:00
Enrico Ros
8d29b6a2b6
New big-AGI integration (#1078)
* New big-AGI integration
Ollama works great in big-AGI, and this document explains how to link the two projects.
* Update README.md
2023-11-13 16:59:00 -05:00
Ilya Breitburg
724aa64bee
Add Dart library to README.md (#1106)
2023-11-13 14:50:42 -05:00
Michael Yang
d91c103e74
Merge pull request #1055 from dansreis/946-fix-incorrect-base-model-name
Fixed incorrect base model name
2023-11-13 08:42:55 -08:00
Kevin Hermawan
98ec7d81e3
Add OllamaKit to the community integrations (#1085)
2023-11-11 14:41:42 -08:00
Daniel Reis
7c438f2c53
Replaced method
2023-11-10 20:22:03 +00:00
Daniel Reis
6e46338d44
Reverting previous changes
2023-11-10 20:21:35 +00:00
Jeffrey Morgan
cdddd3df65
add `format` to example python client
2023-11-10 10:22:21 -08:00
Daniel Hiltgen
afa61bdf45
Merge pull request #1075 from jmorganca/dhiltgen/unexpected-eof
Resume chunk download on UnexpectedEOF errors
2023-11-10 08:48:27 -08:00
Daniel Hiltgen
cc54a416c6
Resume chunk download on UnexpectedEOF errors
If the chunk download is interrupted, resume from where we left off
2023-11-10 08:29:42 -08:00
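The core idea of the fix can be sketched as follows, with a hypothetical helper rather than the repo's actual downloader (which tracks chunks and retry limits): on a truncated body, re-request the blob with a Range header starting at the bytes already written.

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"net/http"
	"os"
)

// resumeDownload illustrates resuming on unexpected EOF: instead of
// restarting from zero, each retry asks the server for the remaining
// bytes via an HTTP Range header. Production code would also check for
// http.StatusPartialContent and cap the number of retries.
func resumeDownload(url string, f *os.File) error {
	for {
		offset, err := f.Seek(0, io.SeekEnd) // bytes already on disk
		if err != nil {
			return err
		}
		req, err := http.NewRequest(http.MethodGet, url, nil)
		if err != nil {
			return err
		}
		req.Header.Set("Range", fmt.Sprintf("bytes=%d-", offset))
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			return err
		}
		_, err = io.Copy(f, resp.Body)
		resp.Body.Close()
		if err == nil {
			return nil
		}
		if !errors.Is(err, io.ErrUnexpectedEOF) {
			return err // only the truncated-body case is retried
		}
		// loop: the next request resumes from the new offset
	}
}

func main() {
	f, _ := os.Create("blob.part")
	defer f.Close()
	_ = resumeDownload("https://example.com/blob", f)
}
```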
Matt Williams
c819d7f68a
Merge pull request #955 from jmorganca/mattw/example-bash-compare
docs: add examples using bash to compare models
2023-11-10 08:59:32 -06:00
Jeffrey Morgan
5cba29b9d6
JSON mode: add `"format": "json"` as an API parameter (#1051)
* add `"format": "json"` as an API parameter
---------
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2023-11-09 16:44:02 -08:00
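Per the API docs, the new parameter is set on /api/generate. A small Go client showing it follows; it assumes a local server on the default port 11434 and a pulled llama2 model, and the docs also recommend instructing the model in the prompt itself to answer in JSON.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Request JSON mode by setting "format": "json"; the prompt should
	// also tell the model to respond with JSON.
	body, _ := json.Marshal(map[string]any{
		"model":  "llama2", // assumes this model is pulled locally
		"prompt": "List 3 colors as a JSON array.",
		"format": "json",
		"stream": false,
	})
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // the "response" field contains valid JSON
}
```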
Daniel Reis
d17730356a
Removed inline parse model path
2023-11-09 22:44:26 +00:00
Daniel Reis
32d79a6eea
Using 'GetShortTagname' method instead
2023-11-09 22:40:37 +00:00
Bruce MacDonald
5b39503bcd
document specifying multiple stop params (#1061)
2023-11-09 13:16:26 -08:00
Bruce MacDonald
1ae84bc2a2
skip gpu if less than 2GB VRAM is available (#1059)
2023-11-09 13:16:16 -08:00
Bruce MacDonald
db8bf336fc
Update README.md
2023-11-09 12:53:24 -08:00
Nick Anderson
d77e094a90
Added gptel to list of integrations (#1062)
2023-11-09 12:52:36 -08:00
Matt Williams
dd3dc47ddb
Merge pull request #992 from aashish2057/aashish2057/langchainjs_doc_update
2023-11-09 05:08:31 -08:00
Michael Yang
c5e1bbabda
instead of static number of parameters for each model family, get the real number from the tensors (#1022)
* parse tensor info
* refactor decoder
* return actual parameter count
* explicit rounding
* s/Human/HumanNumber/
2023-11-08 17:55:46 -08:00
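Sketched with hypothetical types, the approach in #1022 is that a model's parameter count is the sum over its tensors of the product of each tensor's dimensions, so it can be read from the model file instead of hardcoded per family.

```go
package main

import "fmt"

// tensor is an illustrative stand-in for the decoder's tensor metadata.
type tensor struct {
	name string
	dims []uint64
}

// parameterCount sums the element counts of every tensor in the model,
// which is what "get the real number from the tensors" amounts to.
func parameterCount(tensors []tensor) uint64 {
	var total uint64
	for _, t := range tensors {
		n := uint64(1)
		for _, d := range t.dims {
			n *= d
		}
		total += n
	}
	return total
}

func main() {
	ts := []tensor{
		{name: "tok_embeddings.weight", dims: []uint64{32000, 4096}},
		{name: "output.weight", dims: []uint64{32000, 4096}},
	}
	fmt.Println(parameterCount(ts)) // 262144000
}
```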
Bruce MacDonald
a49d6acc1e
add a complete /generate options example (#1035)
2023-11-08 16:44:36 -08:00
Moritz Poldrack
6e9bcdb9b3
progressbar: make start and end seamless (#1042)
2023-11-08 16:42:40 -08:00
Matt Williams
13086363bd
Update as per bmacd
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-11-08 18:09:05 -06:00
Bruce MacDonald
ec2a31e9b3
support raw generation requests (#952)
- add the optional `raw` generate request parameter to bypass prompt formatting and response context
- add raw request to docs
2023-11-08 14:05:02 -08:00
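A usage example under the same assumptions as the JSON-mode snippet above (local server, pulled llama2 model): with `raw` set, no prompt template is applied and no conversation context is returned, so the caller supplies any model-specific formatting itself.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// "raw": true sends the prompt to the model verbatim; the [INST] tags
	// here are the caller's own formatting, not added by the server.
	body, _ := json.Marshal(map[string]any{
		"model":  "llama2", // assumes this model is pulled locally
		"prompt": "[INST] why is the sky blue? [/INST]",
		"raw":    true,
		"stream": false,
	})
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```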
Amith Koujalgi
ec84c02d54
Add Ollama4j Java library to the list of community libraries (#1044)
2023-11-08 11:04:32 -08:00
Kevin Hermawan
2a88b66bc9
Add Ollamac to community integrations (#1043)
2023-11-08 11:01:09 -08:00
Jeffrey Morgan
2d0faea96c
clean up README.md
2023-11-08 00:03:29 -08:00
Jeffrey Morgan
637142181a
clean up README.md
2023-11-07 23:52:31 -08:00
Matt Williams
bcbff421c9
Merge pull request #1023 from jmorganca/mattw/wherearemodelsfaq
2023-11-07 17:59:54 -08:00
thealhu
1359d6cf3b
Fix sudo variable in install.sh (#1034)
One remaining hardcoded sudo invocation was not replaced with the sudo variable.
2023-11-07 09:59:57 -08:00
Omar Magdy
6e2d0224d9
Added logseq ollama plugin (#1029)
2023-11-07 09:58:13 -08:00
Ikko Eltociear Ashimine
921406f721
Update client.py (#1026)
recieve -> receive
2023-11-07 09:55:47 -08:00
Michael Yang
c7047d7353
Merge pull request #959 from jmorganca/mxyng/example-k8s
2023-11-07 10:43:21 -06:00
Matt Williams
1d155caba3
docs: clarify where the models are stored in the faq
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-11-06 14:38:49 -08:00