Jeffrey Morgan
2eaa95b417
Update api.md
2023-11-21 15:32:05 -05:00
James Braza
f24741ff39
Documenting how to view Modelfiles ( #723 )
...
* Documented viewing Modelfiles in ollama.ai/library
* Moved Modelfile in ollama.ai down per request
2023-11-20 15:24:29 -05:00
Jeffrey Morgan
1657c6abc7
add note to specify JSON in the prompt when using JSON mode
2023-11-18 22:59:26 -05:00
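As context for the JSON-mode commits above, a request exercising the feature might look like the following sketch; the model name is illustrative, and note that the prompt itself asks for JSON, which is the requirement this commit documents.

```shell
# Minimal sketch of JSON mode (model name illustrative): the prompt must
# itself instruct the model to respond in JSON, per the note added above.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "List the primary colors. Respond using JSON.",
  "format": "json",
  "stream": false
}'
```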
Michael Yang
c82ead4d01
faq: fix heading and add more details
2023-11-17 09:02:17 -08:00
Michael Yang
90860b6a7e
update faq ( #1176 )
2023-11-17 11:42:58 -05:00
Jeffrey Morgan
81092147c4
remove unnecessary -X POST from example curl commands
2023-11-17 09:50:38 -05:00
Jeffrey Morgan
92656a74b7
Use llama2 as the model in api.md
2023-11-17 07:17:51 -05:00
Michael Yang
d8842b4d4b
update faq
2023-11-16 17:07:36 -08:00
Michael Yang
c13bde962d
Update docs/faq.md
...
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
2023-11-16 16:48:38 -08:00
Michael Yang
ee307937fd
update faq
2023-11-16 16:46:43 -08:00
Michael Yang
b5f158f046
add faq for proxies ( #1147 )
2023-11-16 11:43:37 -05:00
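A minimal sketch of the kind of proxy setup the FAQ entry covers, assuming the standard HTTPS_PROXY convention; the proxy URL is illustrative.

```shell
# Route outbound traffic such as model pulls through a proxy by setting
# HTTPS_PROXY in the environment the server runs in (URL illustrative).
HTTPS_PROXY=https://proxy.example.com ollama serve
```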
Michael Yang
77954bea0e
Merge pull request #898 from jmorganca/mxyng/build-context
...
create remote models
2023-11-15 16:41:12 -08:00
Michael Yang
54f92f01cb
update docs
2023-11-15 15:28:15 -08:00
Jeffrey Morgan
ecd71347ab
Update faq.md
2023-11-15 18:17:13 -05:00
Jeffrey Morgan
8ee4cbea0f
Remove table of contents in faq.md
2023-11-15 18:16:27 -05:00
Michael Yang
71d71d0988
update docs
2023-11-15 15:16:23 -08:00
Michael Yang
cac11c9137
update api docs
2023-11-15 15:16:23 -08:00
Matt Williams
f61f340279
FAQ: answer a few faq questions ( #1128 )
...
* faq: does ollama share my prompts
Signed-off-by: Matt Williams <m@technovangelist.com>
* faq: ollama and openai
Signed-off-by: Matt Williams <m@technovangelist.com>
* faq: vscode plugins
Signed-off-by: Matt Williams <m@technovangelist.com>
* faq: send a doc to Ollama
Signed-off-by: Matt Williams <m@technovangelist.com>
* extra spacing
Signed-off-by: Matt Williams <m@technovangelist.com>
* Update faq.md
* Update faq.md
---------
Signed-off-by: Matt Williams <m@technovangelist.com>
Co-authored-by: Michael <mchiang0610@users.noreply.github.com>
2023-11-15 18:05:13 -05:00
bnodnarb
85951d25ef
Created tutorial for running Ollama on NVIDIA Jetson devices ( #1098 )
2023-11-15 12:32:37 -05:00
Bruce MacDonald
df18486c35
Move /generate format to optional parameters ( #1127 )
...
This field is optional and should be under the `Advanced parameters` header
2023-11-14 16:12:30 -05:00
Jeffrey Morgan
5cba29b9d6
JSON mode: add `"format": "json"` as an api parameter ( #1051 )
...
* add `"format": "json"` as an API parameter
---------
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2023-11-09 16:44:02 -08:00
Bruce MacDonald
5b39503bcd
document specifying multiple stop params ( #1061 )
2023-11-09 13:16:26 -08:00
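A sketch of what specifying multiple stop parameters might look like; the model name and stop strings are illustrative.

```shell
# Generation halts at whichever stop sequence appears first.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "options": {
    "stop": ["user:", "assistant:"]
  }
}'
```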
Matt Williams
dd3dc47ddb
Merge pull request #992 from aashish2057/aashish2057/langchainjs_doc_update
2023-11-09 05:08:31 -08:00
Bruce MacDonald
a49d6acc1e
add a complete /generate options example ( #1035 )
2023-11-08 16:44:36 -08:00
Bruce MacDonald
ec2a31e9b3
support raw generation requests ( #952 )
...
- add the optional `raw` generate request parameter to bypass prompt formatting and response context
- add raw request to docs
2023-11-08 14:05:02 -08:00
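A sketch of a raw generation request as described in the commit body; the model name and hand-written prompt template are illustrative.

```shell
# With "raw": true the server applies no prompt template, so the caller
# supplies the fully formatted prompt and no context is returned.
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "[INST] Why is the sky blue? [/INST]",
  "raw": true,
  "stream": false
}'
```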
Matt Williams
1d155caba3
docs: clarify where the models are stored in the faq
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-11-06 14:38:49 -08:00
aashish2057
b13586cc72
update langchainjs doc
2023-11-03 18:45:19 -05:00
Bruce MacDonald
6109bebba6
reformat api docs for more examples ( #972 )
2023-11-03 10:57:00 -04:00
Matt Williams
f21bd6210d
docs: clarify and clean up API docs
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-10-31 13:11:33 -07:00
Dirk Loss
874bb31986
Fix conversion command for gptneox ( #948 )
2023-10-30 14:34:29 -04:00
Jeffrey Morgan
c0dcea1398
Update faq.md
2023-10-27 18:29:00 -07:00
Bruce MacDonald
5c3491f425
allow for a configurable ollama model storage directory ( #897 )
...
* allow for a configurable ollama models directory
- set OLLAMA_MODELS in the environment that ollama is running in to change where model files are stored
- update docs
Co-Authored-By: Jeffrey Morgan <jmorganca@gmail.com>
Co-Authored-By: Jay Nakrani <dhananjaynakrani@gmail.com>
Co-Authored-By: Akhil Acharya <akhilcacharya@gmail.com>
Co-Authored-By: Sasha Devol <sasha.devol@protonmail.com>
2023-10-27 10:19:59 -04:00
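A minimal sketch of the configurable storage directory added here; the path is illustrative, and per the commit body the variable must be set in the environment the server itself runs in.

```shell
# Store model files under a custom directory (path illustrative).
OLLAMA_MODELS=/data/ollama/models ollama serve
```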
Michael Yang
92119de9d8
update linux.md
2023-10-25 14:57:50 -07:00
Michael Yang
53b0ba8d43
Merge pull request #893 from jmorganca/mxyng/update-faq
...
update faq
2023-10-24 16:02:35 -07:00
Michael Yang
db342691f9
Update docs/faq.md
...
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2023-10-24 13:59:33 -07:00
Bruce MacDonald
cecf83141e
Linux uninstall instructions ( #894 )
2023-10-24 14:07:05 -04:00
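A hedged sketch of the uninstall flow these instructions describe for a systemd-based Linux install; the exact paths can differ by installation.

```shell
# Stop and disable the service, then remove the binary and model store.
sudo systemctl stop ollama
sudo systemctl disable ollama
sudo rm /etc/systemd/system/ollama.service
sudo rm "$(which ollama)"
sudo rm -r /usr/share/ollama
```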
Michael Yang
a5a2adf1ec
update faq
2023-10-24 10:54:16 -07:00
Jeffrey Morgan
914428351a
Update import.md
2023-10-23 17:44:53 -07:00
Jeffrey Morgan
9afea9e3b9
Update import.md
...
Separate GGUF and PyTorch guides
2023-10-23 17:42:17 -07:00
Jeffrey Morgan
6b213216d5
Update import.md
2023-10-19 12:17:36 -04:00
Alexander F. Rødseth
b7e137323a
Fix a typo ( #818 )
2023-10-17 09:00:15 -04:00
Bruce MacDonald
a0c3e989de
deprecate modelfile embed command ( #759 )
2023-10-16 11:07:37 -04:00
Jeffrey Morgan
f9b2f999ac
update readme with docker setup and link to import.md
2023-10-15 02:23:03 -04:00
Jeffrey Morgan
c416087339
import.md: formatting and spelling
2023-10-15 01:39:46 -04:00
Jeffrey Morgan
6002cebd2c
import.md: convert and quantize docs
2023-10-15 00:11:51 -04:00
Jeffrey Morgan
212bdc541c
import.md: model architectures spelling
2023-10-15 00:07:58 -04:00
Jeffrey Morgan
dca6686273
add steps for creating a Modelfile and more example commands to import.md
2023-10-15 00:05:50 -04:00
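A minimal sketch of the Modelfile-creation steps this commit documents, assuming a local GGUF weights file; the file and model names are illustrative.

```shell
# Point a Modelfile at local weights, create the model, then run it.
cat > Modelfile <<'EOF'
FROM ./model.gguf
EOF
ollama create example -f Modelfile
ollama run example "hello"
```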
Matt Williams
b2974a7095
applied Mike's comments
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-10-14 08:29:24 -07:00
Matt Williams
3c975f898f
update doc to refer to docker image
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-10-12 15:57:50 -07:00
Matt Williams
9245c8a1df
add how to quantize doc
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-10-12 15:34:57 -07:00
Bruce MacDonald
274d5a5fdf
optional parameter to not stream response ( #639 )
...
* update streaming request accept header
* add optional stream param to request bodies
2023-10-11 12:54:27 -04:00
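A sketch of the optional stream parameter added here; with it set to false the reply arrives as a single response object rather than a stream (model name illustrative).

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```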
Costa Alexoglou
f7f5169c94
Update api.md ( #741 )
...
Avoid triple backticks showing in the visual editor and being copied to the clipboard.
2023-10-09 16:01:46 -04:00
James Braza
6f2ce74231
Got rid of all caps to show it can be lowercase
2023-10-02 13:54:27 -07:00
James Braza
6edcc5c79f
Using code highlighting syntax around Modelfile
2023-10-02 13:46:05 -07:00
Jiayu Liu
4fc10acce9
add some missing code directives in docs ( #664 )
2023-10-01 11:51:01 -07:00
Jay Nakrani
1d0ebe67e8
Document response stream chunk delimiter. ( #632 )
2023-09-29 21:45:52 -07:00
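Assuming the newline delimiter this commit documents, a streamed response can be consumed line by line; the jq dependency and model name are illustrative.

```shell
# Each streamed chunk is a JSON object on its own line, so line-oriented
# tools can parse the stream incrementally.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Why is the sky blue?"}' |
while read -r chunk; do
  echo "$chunk" | jq -r '.response'
done
```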
Aaron Coffey
6ae33d8141
Update modelfile.md to reflect the usage of num_gpu. ( #629 )
2023-09-28 10:21:21 -04:00
Jeffrey Morgan
c5664c1fef
Update faq.md
2023-09-27 13:49:43 -07:00
Bruce MacDonald
ed20837f9a
Update modelfile.md
2023-09-27 10:38:10 -04:00
James Braza
1db2a61dd0
Added num_predict to the options table ( #614 )
2023-09-27 10:26:08 -04:00
Jeffrey Morgan
5306b0269d
Update linux.md
2023-09-25 16:10:32 -07:00
Jeffrey Morgan
0fb5268496
Update linux.md
2023-09-25 10:06:23 -07:00
Jeffrey Morgan
ee3032ad89
improvements to docs/linux.md
2023-09-24 21:50:07 -07:00
Jeffrey Morgan
5b7a27281d
improvements to docs/linux.md
2023-09-24 21:38:23 -07:00
Jeffrey Morgan
d2a784e33e
add docs/linux.md
2023-09-24 21:34:44 -07:00
Michael Yang
6c6a31a1e8
embed libraries using cmake
2023-09-20 14:41:57 -07:00
Bruce MacDonald
fc6ec356fc
remove libcuda.so
2023-09-20 20:36:14 +01:00
Bruce MacDonald
1255bc9b45
only package 11.8 runner
2023-09-20 20:00:41 +01:00
Bruce MacDonald
4e8be787c7
pack in cuda libs
2023-09-20 17:40:42 +01:00
Bruce MacDonald
2540c9181c
support for packaging in multiple cuda runners ( #509 )
...
* enable packaging multiple cuda versions
* use nvcc cuda version if available
---------
Co-authored-by: Michael Yang <mxyng@pm.me>
2023-09-14 15:08:13 -04:00
Matt Williams
fc8707686f
Update API docs ( #527 )
...
* Update API docs
Signed-off-by: Matt Williams <m@technovangelist.com>
* strange TOC was getting auto-generated
Signed-off-by: Matt Williams <m@technovangelist.com>
* Update docs/api.md
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
* Update docs/api.md
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
* Update docs/api.md
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
* Update api.md
---------
Signed-off-by: Matt Williams <m@technovangelist.com>
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
Co-authored-by: Michael Chiang <mchiang0610@users.noreply.github.com>
2023-09-14 08:51:26 -07:00
Bruce MacDonald
f221637053
first pass at linux gpu support ( #454 )
...
* linux gpu support
* handle multiple gpus
* add cuda docker image (#488 )
---------
Co-authored-by: Michael Yang <mxyng@pm.me>
2023-09-12 11:04:35 -04:00
Ackermann Yuriy
154f24af91
Added missing options params to the embeddings docs ( #472 )
2023-09-05 20:18:49 -04:00
Bruce MacDonald
42998d797d
subprocess llama.cpp server ( #401 )
...
* remove c code
* pack llama.cpp
* use request context for llama_cpp
* let llama_cpp decide the number of threads to use
* stop llama runner when app stops
* remove sample count and duration metrics
* use go generate to get libraries
* tmp dir for running llm
2023-08-30 16:35:03 -04:00
Quinn Slack
f4432e1dba
treat stop as stop sequences, not exact tokens ( #442 )
...
The `stop` option to the generate API is a list of sequences that should cause generation to stop. Although these are commonly called "stop tokens", they do not necessarily correspond to LLM tokens (per the LLM's tokenizer). For example, if the caller sends a generate request with `"stop":["\n"]`, then generation should stop on any token containing `\n` (and trim `\n` from the output), not just if the token exactly matches `\n`. If `stop` were interpreted strictly as LLM tokens, then it would require callers of the generate API to know the LLM's tokenizer and enumerate many tokens in the `stop` list.
Fixes https://github.com/jmorganca/ollama/issues/295.
2023-08-30 11:53:42 -04:00
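A worked illustration of the behavior described above (model name illustrative): with `"stop": ["\n"]`, generation halts on the first token containing a newline, and the newline is trimmed from the output, whether or not `\n` is a single token in the model's tokenizer.

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Complete this one line: roses are red,",
  "options": {
    "stop": ["\n"]
  }
}'
```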
Jeffrey Morgan
d3b838ce60
update orca to orca-mini
2023-08-27 13:26:30 -04:00
Michael Yang
041f9ad1a1
update README.md
2023-08-25 11:44:25 -07:00
Bruce MacDonald
519f4d98ef
add embed docs for modelfile
2023-08-17 13:37:42 -04:00
Bruce MacDonald
23e1da778d
Add context to api docs
2023-08-15 11:43:22 -03:00
Bruce MacDonald
53bc36d207
Update modelfile.md
2023-08-15 09:23:36 -03:00
Bruce MacDonald
af98a1773f
update python example
2023-08-14 16:38:44 -03:00
Bruce MacDonald
9ae9a89883
Update modelfile.md
2023-08-14 16:26:53 -03:00
Bruce MacDonald
648f0974c6
python example
2023-08-14 15:27:13 -03:00
Bruce MacDonald
fc5230dffa
Add context to api docs
2023-08-14 15:23:24 -03:00
Güvenç Usanmaz
4c33a9ac67
Update langchainpy.md
...
Corrected the base_url value used for Ollama object creation.
2023-08-14 12:12:56 +03:00
Matt Williams
202c29c21a
resolving bmacd comment
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-11 13:51:44 -07:00
Matt Williams
c1c871620a
Update docs/tutorials/langchainjs.md
...
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2023-08-11 13:48:46 -07:00
Matt Williams
a21a8bef56
Update docs/tutorials/langchainjs.md
...
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2023-08-11 13:48:35 -07:00
Matt Williams
522726228a
Update docs/tutorials.md
...
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2023-08-11 13:48:16 -07:00
Matt Williams
d3ee1329e9
Add tutorials for using Langchain with ollama
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-10 21:27:37 -07:00
Michael Yang
3a05d3def7
Merge pull request #326 from asarturas/document-num-gqa-parameter
...
Document num_gqa parameter
2023-08-10 18:18:38 -07:00
Arturas Smorgun
d9c2687fd0
document default num_gqa of 1, as it's applicable to most models
...
Co-authored-by: Michael Yang <mxyng@pm.me>
2023-08-11 01:29:40 +01:00
Michael Yang
6517bcc53c
Merge pull request #290 from jmorganca/add-adapter-layers
...
implement loading ggml lora adapters through the modelfile
2023-08-10 17:23:01 -07:00
Arturas Smorgun
c0e7a3b90e
Document num_gqa parameter
...
It is required to be adjusted for some models; see https://github.com/jmorganca/ollama/issues/320 for more context
2023-08-11 00:58:09 +01:00
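A sketch of overriding the parameter per request, following the linked issue; the value 8 for a 70B Llama 2 model is the commonly cited adjustment, but treat the model name and value as illustrative.

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama2:70b",
  "prompt": "Why is the sky blue?",
  "options": {
    "num_gqa": 8
  }
}'
```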
Jeffrey Morgan
be889b2f81
add docs for /api/embeddings
2023-08-10 15:56:59 -07:00
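A minimal sketch of calling the endpoint these docs cover; the model name and prompt are illustrative.

```shell
curl http://localhost:11434/api/embeddings -d '{
  "model": "llama2",
  "prompt": "Here is an article about llamas..."
}'
```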
Jeffrey Morgan
7e26a8df31
cmd: use environment variables for server options
2023-08-10 14:17:53 -07:00
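A sketch assuming OLLAMA_HOST is among the server options meant here; the bind address is illustrative.

```shell
# Bind the server to a non-default address via the environment.
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```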
Michael Yang
37c9a8eea9
add lora docs
2023-08-10 09:23:40 -07:00
Bruce MacDonald
43c40c500e
add embed docs for modelfile
2023-08-09 16:14:58 -04:00
Bruce MacDonald
c4861360ec
remove embed docs
2023-08-09 16:14:19 -04:00
Bruce MacDonald
7a5f3616fd
embed text document in modelfile
2023-08-09 10:26:19 -04:00