Matt Williams
13aace3d34
clarify some more
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-07 13:21:54 -07:00
Matt Williams
2b3bb41598
model name format added
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-07 13:17:16 -07:00
cmiller01
93492f1e18
correct precedence of serve params (args over env over default)
2023-08-07 19:55:20 +00:00
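A minimal sketch of the precedence that commit describes (command-line args over environment variables over defaults); the `-host` flag name, `OLLAMA_HOST`, and the default address are assumptions used for illustration, not necessarily the exact identifiers touched by the commit.
```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// resolveHost applies the precedence described above: an explicit
// command-line argument wins over an environment variable, which wins
// over the built-in default. Flag and variable names are illustrative.
func resolveHost(flagValue string, flagSet bool) string {
	if flagSet {
		return flagValue // 1. explicit argument
	}
	if env := os.Getenv("OLLAMA_HOST"); env != "" {
		return env // 2. environment variable
	}
	return "127.0.0.1:11434" // 3. default
}

func main() {
	host := flag.String("host", "", "host to bind (illustrative flag)")
	flag.Parse()

	// flag.Visit only sees flags that were actually set on the command line.
	set := false
	flag.Visit(func(f *flag.Flag) {
		if f.Name == "host" {
			set = true
		}
	})
	fmt.Println(resolveHost(*host, set))
}
```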
Michael Chiang
54ba3e2ceb
langchain JS integration (#302)
...
langchain JS integration
2023-08-07 12:21:36 -04:00
Matt Williams
4904cd8bcd
update simpler code samples
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-07 07:40:38 -07:00
Matt Williams
8a45359ec6
Update docs/api.md
...
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
2023-08-07 07:33:05 -07:00
cmiller01
fb593b7bfc
pass flags to serve to allow setting allowed-origins + host and port
...
* resolves: https://github.com/jmorganca/ollama/issues/300 and
https://github.com/jmorganca/ollama/issues/282
* example usage:
```
ollama serve --port 9999 --allowed-origins "http://foo.example.com,http://192.0.0.1"
```
2023-08-07 03:34:37 +00:00
Matt Williams
2544b8afa1
update as per Mike's comments
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-04 17:42:24 -07:00
Matt Williams
ac1b04f271
Update docs/api.md
...
Co-authored-by: Michael Yang <mxyng@pm.me>
2023-08-04 17:40:52 -07:00
Matt Williams
123fdeb919
Update docs/api.md
...
Co-authored-by: Michael Yang <mxyng@pm.me>
2023-08-04 17:38:52 -07:00
Matt Williams
5c82bf95d1
Update docs/api.md
...
Co-authored-by: Michael Yang <mxyng@pm.me>
2023-08-04 17:12:24 -07:00
Matt Williams
38a9b1618c
missed some quotes
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-04 16:09:07 -07:00
Matt Williams
c18be72a3b
complete 1st draft of api docs
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-04 16:08:11 -07:00
Matt Williams
a101fe51a7
clean up
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-04 12:56:41 -07:00
Bruce MacDonald
06fc48ad66
Update README.md (#285)
...
Ollama now supports Intel Macs
2023-08-04 15:45:55 -04:00
Matt Williams
d93e2f9210
fleshing out response
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-04 12:38:58 -07:00
Matt Williams
31edc829fc
continuing
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-04 12:30:23 -07:00
Matt Williams
b31104768c
filling out generate
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-04 12:27:47 -07:00
Matt Williams
b662d9fd8c
starting to build out some docs
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-04 11:55:00 -07:00
Matt Williams
da36196d79
Update the modelfile
...
needed to override the system prompt from orca and make it easier for a downstream user to define their system prompt
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-04 08:11:24 -07:00
Michael Yang
b9f4d67554
configurable rope frequency parameters
2023-08-03 22:11:58 -07:00
Matt Williams
42903973b7
Added an example to generate a list of 10 tweets
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-03 17:26:05 -07:00
Matt Williams
8f2df948ab
Create a sentiments example
...
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-08-03 16:38:31 -07:00
Jeffrey Morgan
e3fb1fd3f1
server: compare options correctly
2023-08-03 15:55:40 -04:00
Michael Yang
29b897f525
Merge pull request #253 from jmorganca/upload
...
use a pipe to push to registry with progress
2023-08-03 12:11:23 -07:00
Michael Yang
85aeb42869
Merge pull request #270 from jmorganca/update-llama-cpp
...
update llama.cpp
2023-08-03 12:09:00 -07:00
Michael Yang
c5bcf32823
update llama.cpp
2023-08-03 11:50:24 -07:00
Michael Yang
a71ff3f6a2
use a pipe to push to registry with progress
...
switch from a chunked upload to a monolithic upload through a pipe so that progress can be reported
2023-08-03 10:37:13 -07:00
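A minimal sketch of streaming an upload through a pipe while reporting progress, as the commit body describes; the `progressReader` type, file name, and registry URL are illustrative stand-ins, not Ollama's actual upload code.
```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// progressReader counts bytes as they pass through, standing in for
// whatever progress callback the real upload code reports to.
type progressReader struct {
	r     io.Reader
	total int64
}

func (p *progressReader) Read(b []byte) (int, error) {
	n, err := p.r.Read(b)
	p.total += int64(n)
	fmt.Printf("\ruploaded %d bytes", p.total)
	return n, err
}

func main() {
	f, err := os.Open("layer.bin") // illustrative file name
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// A goroutine feeds the writer end of the pipe while the HTTP client
	// reads the other end, so progress updates as the upload proceeds.
	pr, pw := io.Pipe()
	go func() {
		_, err := io.Copy(pw, &progressReader{r: f})
		pw.CloseWithError(err)
	}()

	req, err := http.NewRequest(http.MethodPut, "https://registry.example.com/v2/blobs/upload", pr)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}
```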
Michael Chiang
f0b365a478
Merge pull request #268 from jmorganca/mchiang0610-patch-2
...
Update README.md
2023-08-03 11:23:31 -04:00
Michael Chiang
df8048fecd
Update README.md
2023-08-03 11:22:57 -04:00
Michael Yang
da2459d519
Update README.md (#265)
2023-08-02 22:38:32 -04:00
Bruce MacDonald
bd6d741d87
tell users to check the server error logs
2023-08-02 17:08:11 -04:00
Bruce MacDonald
8b1e791820
allow specifying zero values in modelfile
2023-08-02 17:07:53 -04:00
Jeffrey Morgan
03cff3a225
server: reset digest at end of generate
2023-08-02 16:15:44 -04:00
Michael Yang
cc509a994e
Merge pull request #260 from jmorganca/embed-ggml-metal
...
override ggml-metal if the file is different
2023-08-02 13:01:46 -07:00
Michael Yang
0e79e52ddd
override ggml-metal if the file is different
2023-08-02 12:50:30 -07:00
Jeffrey Morgan
6fbb380076
hide dock icon if window closes
2023-08-02 11:05:34 -04:00
Bruce MacDonald
8f8b6288ac
check server is running before running command
2023-08-02 10:51:23 -04:00
Michael Yang
b98096389d
Merge pull request #255 from jmorganca/update-llama-cpp
...
Update llama.cpp
2023-08-01 17:18:33 -07:00
Michael Yang
74a5f7e698
no GPU for 70B model
2023-08-01 17:12:50 -07:00
Michael Yang
7a1c3e62dc
update llama.cpp
2023-08-01 16:54:01 -07:00
Jeffrey Morgan
da52f5bfdd
run npm install on build
2023-08-01 17:41:25 -04:00
Bruce MacDonald
50e87c6691
read from os executable
2023-08-01 16:01:55 -04:00
Gerd
e4a970ece1
Add model update to README.md (#252)
2023-08-01 15:06:33 -04:00
Jeffrey Morgan
4ca43a694c
remove newlines between list items in README.md
2023-08-01 15:05:39 -04:00
Bruce MacDonald
765994362c
use HEAD to check heartbeat
2023-08-01 14:50:38 -04:00
Bruce MacDonald
40a25bf8c3
address PR comments
2023-08-01 13:48:48 -04:00
Bruce MacDonald
1c5a8770ee
read runner parameter options from map
...
- read runner options from a map to see what was specified explicitly, so explicitly set zero values overwrite the defaults
2023-08-01 13:38:19 -04:00
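A minimal sketch of the approach that commit body describes: only keys present in a map of explicitly specified options overwrite the defaults, so a deliberate zero is not mistaken for "unset". The `Options` fields and map keys are illustrative, not the actual runner options.
```go
package main

import "fmt"

// Options stands in for the runner options struct; only keys present in
// the incoming map overwrite the defaults, so an explicit zero (e.g.
// temperature 0) is applied rather than being treated as unspecified.
type Options struct {
	Temperature float64
	NumCtx      int
}

func apply(opts *Options, specified map[string]interface{}) {
	if v, ok := specified["temperature"]; ok {
		opts.Temperature = v.(float64)
	}
	if v, ok := specified["num_ctx"]; ok {
		opts.NumCtx = v.(int)
	}
}

func main() {
	opts := Options{Temperature: 0.8, NumCtx: 2048} // defaults
	apply(&opts, map[string]interface{}{"temperature": 0.0})
	fmt.Printf("%+v\n", opts) // {Temperature:0 NumCtx:2048}
}
```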
Bruce MacDonald
daa0d1de7a
allow specifying zero values in modelfile
2023-08-01 13:37:50 -04:00
Jeffrey Morgan
58daeb962a
add llama2-uncensored to model list
2023-08-01 11:25:01 -04:00