Commit graph

1482 commits

Author | SHA1 | Message | Date
Andrei Betlen | 39e5feb138 | Fix quote issue | 2023-09-29 23:01:38 -04:00
Andrei Betlen | 3c6e98f945 | Use dev versioning for test pypi | 2023-09-29 22:57:49 -04:00
Andrei Betlen | 1cca20304b | Revert update to publish test pypi | 2023-09-29 22:48:17 -04:00
Andrei Betlen | 85e4d08a2e | Update publish to test pypi workflow | 2023-09-29 22:32:31 -04:00
Andrei Betlen | 43f8fc371a | Potential fix for pip install bug | 2023-09-29 22:24:22 -04:00
Andrei Betlen | 386c88b68e | Bump version | 2023-09-29 20:07:31 -04:00
Andrei Betlen | d9bce17794 | Update server params | 2023-09-29 19:59:12 -04:00
Andrei Betlen | 3720c739d4 | Update llama.cpp | 2023-09-29 19:58:21 -04:00
Andrei | 3bca7708fb | Configurable Chat Formats (#711) | 2023-09-29 19:52:04 -04:00
    * Add configurable default chat completion format.
    * Remove chat_template file to avoid circular import
    * Update llama_types
    * Add chat format
Josh XT | a945404b4a | Fix rope scaling defaults (#767) | 2023-09-29 16:03:57 -04:00
    * Fix rope scale with backwards compatibility
    * Fix defaults
    * Fix op
    * Remove backwards compatibility
    * Check single val
Andrei Betlen | a72efc77de | Update llama.cpp | 2023-09-28 23:25:14 -04:00
Andrei Betlen | 1a1c3dc418 | Update llama.cpp | 2023-09-28 22:42:03 -04:00
Andrei Betlen | 4177ae6d34 | Bump version | 2023-09-25 14:38:38 -04:00
Andrei Betlen | 1ed0f3ebe1 | Bump scikit-build-core version to one that includes fix for windows cmake. | 2023-09-25 14:20:09 -04:00
Andrei Betlen | f7b785a00f | Update CHANGELOG | 2023-09-25 13:58:23 -04:00
Andrei Betlen | cf8ae5a69c | Merge branch 'main' of github.com:abetlen/llama_cpp_python into main | 2023-09-25 13:57:00 -04:00
Andrei Betlen | 5da57734bc | Update llama.cpp | 2023-09-25 13:56:52 -04:00
Viacheslav/Slava Tradunsky | 3d5e5b1c04 | Adds openai-processing-ms response header (#748) | 2023-09-25 13:55:58 -04:00
Andrei Betlen | dbca136fea | Update llama_types and names to match openai api | 2023-09-20 15:38:26 -04:00
Andrei Betlen | 15000fca69 | Update llama.cpp | 2023-09-20 14:38:44 -04:00
Andrei Betlen | 0b2464c32b | Ignore version if set by pyenv | 2023-09-20 12:28:28 -04:00
Andrei Betlen | 3afbf2eb75 | Update CHANGELOG | 2023-09-18 16:20:56 -04:00
Andrei Betlen | 6e167a285e | Update CHANGELOG | 2023-09-18 16:11:34 -04:00
Andrei Betlen | 38e34c97f0 | Update llama.cpp | 2023-09-18 16:11:27 -04:00
Andrei Betlen | 8d75016549 | Install required runtime dlls to package directory on windows | 2023-09-16 14:57:49 -04:00
Andrei Betlen | acf18fcdf0 | Bump version | 2023-09-15 14:22:21 -04:00
Andrei Betlen | c7f45a7468 | Update llama.cpp | 2023-09-15 14:16:34 -04:00
Andrei Betlen | b047b3034e | Remove confusing helpstring from server cli args. Closes #719 | 2023-09-15 14:09:43 -04:00
Andrei Betlen | 24fec0b242 | Bump version | 2023-09-14 18:33:08 -04:00
Andrei Betlen | dbd3a6d1ed | Fix issue installing on m1 macs | 2023-09-14 18:25:44 -04:00
Andrei Betlen | 482ecd79c9 | Revert "Update llama.cpp" | 2023-09-14 17:03:18 -04:00
    This reverts commit f73e385c33.
Andrei Betlen | f73e385c33 | Update llama.cpp | 2023-09-14 16:37:33 -04:00
Andrei Betlen | ca4eb952a6 | Revert "Update llama.cpp" | 2023-09-14 15:28:50 -04:00
    This reverts commit aa2f8a5008.
Andrei Betlen | 7da8e0fbf1 | Merge branch 'main' of github.com:abetlen/llama_cpp_python into main | 2023-09-14 14:51:50 -04:00
Andrei Betlen | 8474665625 | Update base_path to fix issue resolving dll in windows isolation container. | 2023-09-14 14:51:43 -04:00
Jason Cox | 40b22909dc | Update examples from ggml to gguf and add hw-accel note for Web Server (#688) | 2023-09-14 14:48:21 -04:00
    * Examples from ggml to gguf
    * Use gguf file extension
      Update examples to use filenames with gguf extension (e.g. llama-model.gguf).
    Co-authored-by: Andrei <abetlen@gmail.com>
Andrei Betlen | aa2f8a5008 | Update llama.cpp | 2023-09-14 14:44:59 -04:00
Andrei Betlen | 2291798900 | Fix dockerfiles to install starlette-context | 2023-09-14 14:40:16 -04:00
Andrei Betlen | 65a2a20050 | Enable make fallback for scikit-build-core | 2023-09-14 11:43:55 -04:00
Andrei Betlen | 255d653ae3 | Add documentation and changelog links in pyproject | 2023-09-14 04:00:37 -04:00
Andrei Betlen | 95d54808a5 | Upgrade pip for editable installs | 2023-09-14 02:01:45 -04:00
Andrei Betlen | 507bcc7171 | Bump version | 2023-09-13 23:15:23 -04:00
Andrei Betlen | 3e2250a12e | Update CHANGELOG | 2023-09-13 23:14:22 -04:00
Andrei Betlen | 60119dbaeb | Update CHANGELOG | 2023-09-13 23:13:19 -04:00
Andrei Betlen | 0449d29b9f | Fix boolean env vars and cli arguments | 2023-09-13 23:09:57 -04:00
earonesty | 58a6e42cc0 | Update app.py (#705) | 2023-09-13 23:01:34 -04:00
Andrei Betlen | f4090a0bb2 | Add numa support, low level api users must now explicitly call llama_backend_init at the start of their programs. | 2023-09-13 23:00:43 -04:00
Andrei Betlen | c999325e8e | Fix boolean cli flags | 2023-09-13 22:56:10 -04:00
Andrei Betlen | 83764c5aee | Update CHANGELOG | 2023-09-13 21:58:53 -04:00
Andrei Betlen | 4daf77e546 | Format | 2023-09-13 21:23:23 -04:00