fab4bccc35 | 2023-09-30 16:04:46 -04:00 | Andrei Betlen | Bump version
d696251fbe | 2023-09-30 16:02:35 -04:00 | Andrei Betlen | Fix logits_all bug
6ee413d79e | 2023-09-30 13:23:09 -04:00 | Andrei Betlen | Bump version
42bb721d64 | 2023-09-30 13:20:22 -04:00 | Andrei Betlen | Fix bug in embedding
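
For context on 42bb721d64: embeddings go through the high-level API. A minimal sketch, assuming the `embedding=True` load-time flag and `create_embedding` method of this era (the model path is a placeholder):

    from llama_cpp import Llama

    # Embedding generation requires embedding=True at load time
    # ("./model.gguf" is a placeholder path).
    llm = Llama(model_path="./model.gguf", embedding=True)

    result = llm.create_embedding("Hello, world!")
    vector = result["data"][0]["embedding"]
    print(len(vector))  # dimensionality of the loaded model's embeddings
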
5d62d55a82 | 2023-09-30 00:07:06 -04:00 | Andrei Betlen | Bump version
386c88b68e | 2023-09-29 20:07:31 -04:00 | Andrei Betlen | Bump version
d9bce17794 | 2023-09-29 19:59:12 -04:00 | Andrei Betlen | Update server params
3720c739d4 | 2023-09-29 19:58:21 -04:00 | Andrei Betlen | Update llama.cpp
3bca7708fb | 2023-09-29 19:52:04 -04:00 | Andrei | Configurable Chat Formats (#711)
    * Add configurable default chat completion format.
    * Remove chat_template file to avoid circular import
    * Update llama_types
    * Add chat format
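
#711 above makes the prompt template a load-time option rather than a hard-coded format. A minimal sketch, assuming the `chat_format` keyword and `create_chat_completion` method (path and format name are illustrative):

    from llama_cpp import Llama

    # chat_format selects how messages are rendered into the prompt;
    # "llama-2" is one built-in format name.
    llm = Llama(model_path="./model.gguf", chat_format="llama-2")

    response = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "What is RoPE scaling?"},
        ],
    )
    print(response["choices"][0]["message"]["content"])
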
a945404b4a | 2023-09-29 16:03:57 -04:00 | Josh XT | Fix rope scaling defaults (#767)
    * Fix rope scale with backwards compatibility
    * Fix defaults
    * Fix op
    * Remove backwards compatibility
    * Check single val
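
#767 concerns the defaults of the RoPE scaling parameters rather than new API surface. A minimal usage sketch, assuming the `rope_freq_base` and `rope_freq_scale` keywords on `Llama`; the values are illustrative:

    from llama_cpp import Llama

    # rope_freq_scale < 1.0 stretches the context window via linear RoPE
    # scaling (0.5 roughly doubles the trained context); rope_freq_base
    # is the rotary base frequency (10000.0 is the conventional default).
    llm = Llama(
        model_path="./model.gguf",  # placeholder
        n_ctx=8192,
        rope_freq_base=10000.0,
        rope_freq_scale=0.5,
    )
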
1a1c3dc418 | 2023-09-28 22:42:03 -04:00 | Andrei Betlen | Update llama.cpp
4177ae6d34 | 2023-09-25 14:38:38 -04:00 | Andrei Betlen | Bump version
3d5e5b1c04 | 2023-09-25 13:55:58 -04:00 | Viacheslav/Slava Tradunsky | Adds openai-processing-ms response header (#748)
dbca136fea | 2023-09-20 15:38:26 -04:00 | Andrei Betlen | Update llama_types and names to match the OpenAI API
38e34c97f0 | 2023-09-18 16:11:27 -04:00 | Andrei Betlen | Update llama.cpp
8d75016549 | 2023-09-16 14:57:49 -04:00 | Andrei Betlen | Install required runtime DLLs to package directory on Windows
acf18fcdf0 | 2023-09-15 14:22:21 -04:00 | Andrei Betlen | Bump version
b047b3034e | 2023-09-15 14:09:43 -04:00 | Andrei Betlen | Remove confusing help string from server CLI args. Closes #719
24fec0b242 | 2023-09-14 18:33:08 -04:00 | Andrei Betlen | Bump version
8474665625 | 2023-09-14 14:51:43 -04:00 | Andrei Betlen | Update base_path to fix DLL resolution in a Windows isolation container
507bcc7171 | 2023-09-13 23:15:23 -04:00 | Andrei Betlen | Bump version
0449d29b9f | 2023-09-13 23:09:57 -04:00 | Andrei Betlen | Fix boolean env vars and CLI arguments
58a6e42cc0 | 2023-09-13 23:01:34 -04:00 | earonesty | Update app.py (#705)
f4090a0bb2 | 2023-09-13 23:00:43 -04:00 | Andrei Betlen | Add NUMA support; low-level API users must now explicitly call llama_backend_init at the start of their programs
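
f4090a0bb2 changes the low-level API contract: the backend is no longer initialized implicitly. A minimal sketch of the required call order, assuming the ctypes-level signature of this era takes a single NUMA flag and that a matching llama_backend_free exists:

    import llama_cpp

    # Low-level API users must now initialize the backend themselves,
    # once, before any other llama.cpp call (numa flag is assumed here).
    llama_cpp.llama_backend_init(numa=False)

    # ... low-level llama_cpp.llama_* calls go here ...

    # Mirror-image teardown before the program exits.
    llama_cpp.llama_backend_free()

High-level users are unaffected; the `Llama` class performs this initialization itself.
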
c999325e8e | 2023-09-13 22:56:10 -04:00 | Andrei Betlen | Fix boolean CLI flags
4daf77e546 | 2023-09-13 21:23:23 -04:00 | Andrei Betlen | Format
2920c4bf7e | 2023-09-13 21:23:13 -04:00 | Andrei Betlen | Update server params: added lora_base, lora_path, low_vram, and main_gpu; removed rms_norm_eps and n_gqa (deprecated in llama.cpp)
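
For context on 2920c4bf7e, the same knobs exist as keywords on the `Llama` constructor. A hedged sketch; the paths are placeholders and the comments describe the assumed semantics of that era:

    from llama_cpp import Llama

    llm = Llama(
        model_path="./model.gguf",    # placeholder
        lora_path="./adapter.bin",    # LoRA adapter to apply
        lora_base="./base-f16.gguf",  # optional f16 base model used when
                                      # applying a LoRA to a quantized model
        main_gpu=0,                   # GPU that hosts scratch/small tensors
        low_vram=True,                # trade speed for lower VRAM use
    )
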
6a20293fc2 | 2023-09-13 21:20:26 -04:00 | Andrei Betlen | Reorder init params to match llama.cpp order
c8f9b8a734 | 2023-09-13 21:19:47 -04:00 | Andrei Betlen | Explicitly make all init params other than model_path into keyword-only params
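
The mechanism in c8f9b8a734 is plain Python: a bare `*` in the parameter list makes everything after it keyword-only. An illustrative sketch of the pattern (abbreviated, not the actual signature):

    class Llama:
        def __init__(
            self,
            model_path: str,
            *,  # every parameter after the bare star is keyword-only
            n_ctx: int = 512,
            n_gpu_layers: int = 0,
            seed: int = 1337,
            **kwargs,  # swallows unknown params (see a68f9e2791 below)
        ):
            ...

    # Llama("model.gguf", 1024)        -> TypeError
    # Llama("model.gguf", n_ctx=1024)  -> OK
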
a68f9e2791 | 2023-09-13 21:19:02 -04:00 | Andrei Betlen | Add kwargs to init to catch extra params
9e345a47a2 | 2023-09-13 21:12:27 -04:00 | Andrei Betlen | Remove print
517f9ed80b | 2023-09-13 21:11:52 -04:00 | Andrei Betlen | Convert missed llama.cpp constants into standard Python types
c4c440ba2d | 2023-09-13 20:00:42 -04:00 | Andrei Betlen | Fix tensor_split CLI option
203ede4ba2 | 2023-09-13 18:07:08 -04:00 | Andrei Betlen | Bump version
759405c84b | 2023-09-13 18:06:12 -04:00 | Andrei Betlen | Fix issue with Literal and Optional CLI arguments not working. Closes #702
da9df78db0 | 2023-09-13 16:18:31 -04:00 | Devrim | Add X-Request-ID request header for mirroring custom IDs. (#703)
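
A minimal client-side sketch of the header behavior from #703, alongside the openai-processing-ms timing header from #748 above; it assumes a llama_cpp.server instance listening on localhost:8000:

    import requests

    # If the client supplies X-Request-ID, the server mirrors it back on
    # the response, which makes cross-log request tracing straightforward.
    resp = requests.post(
        "http://localhost:8000/v1/completions",
        json={"prompt": "Hello", "max_tokens": 16},
        headers={"X-Request-ID": "trace-1234"},
    )
    print(resp.headers.get("X-Request-ID"))          # -> "trace-1234"
    print(resp.headers.get("openai-processing-ms"))  # server-side latency
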
8e13520796 | 2023-09-13 01:47:58 -04:00 | Andrei Betlen | Bump version
2787663a25 | 2023-09-12 21:00:01 -04:00 | Andrei Betlen | Bump version
6e89775759 | 2023-09-12 18:57:01 -04:00 | Andrei Betlen | Bump version
bb4e67e7aa | 2023-09-12 18:56:36 -04:00 | Andrei Betlen | Using dynamic version
1910793f56 | 2023-09-12 16:43:32 -04:00 | Andrei Betlen | Merge branch 'main' into v0.2-wip
c7901f1141 | 2023-09-12 16:16:40 -04:00 | Andrei Betlen | Bump version
33ce931cce | 2023-09-09 21:21:04 +02:00 | janvdp | Merge upstream
d3f63211ef | 2023-09-09 12:12:32 -04:00 | Andrei Betlen | Update llama.cpp
da0fdafc32 | 2023-09-05 21:09:28 +02:00 | janvdp | Import version in __init__.py
6e8e64d09a | 2023-09-05 21:09:08 +02:00 | janvdp | Add version file
186626d58e | 2023-09-01 14:26:13 -04:00 | Andrei Betlen | Update llama.cpp
47de3ab104 | 2023-08-29 07:36:20 -04:00 | Andrei Betlen | Update llama.cpp
3f76e1de52 | 2023-08-29 07:21:59 -04:00 | Andrei Betlen | CJK PR minor cleanup
bae44ec8bf | 2023-08-29 06:58:10 -04:00 | Andrei | Merge pull request #309 from MeouSker77/fix-CJK
    Fix CJK and emoji stream output
e0dcbc28a1 | 2023-08-28 10:33:45 -04:00 | Andrei Betlen | Update llama.cpp
4887973c22 | 2023-08-27 12:59:20 -04:00 | Andrei Betlen | Update llama.cpp
3a29d65f45 | 2023-08-26 23:36:24 -04:00 | Andrei Betlen | Update llama.cpp
5de8009706 | 2023-08-25 17:49:14 -04:00 | Andrei Betlen | Add copilot-codex completions endpoint for drop-in copilot usage
ac47d55577 | 2023-08-25 15:45:22 -04:00 | Andrei Betlen | Merge branch 'main' into v0.2-wip
ef23d1e545 | 2023-08-25 14:35:53 -04:00 | Andrei Betlen | Update llama.cpp
48cf43b427 | 2023-08-25 13:43:16 -04:00 | Andrei Betlen | Use _with_model variants for tokenization
8ac59465b9 | 2023-08-25 04:56:48 -04:00 | Andrei Betlen | Strip leading space when de-tokenizing
c2d1deaa8a | 2023-08-24 18:01:42 -04:00 | Andrei Betlen | Update llama.cpp
db982a861f | 2023-08-24 01:01:12 -04:00 | Andrei Betlen | Fix
4ed632c4b3 | 2023-08-24 01:01:05 -04:00 | Andrei Betlen | Remove deprecated params
cf405f6764 | 2023-08-24 00:30:51 -04:00 | Andrei Betlen | Merge branch 'main' into v0.2-wip
bbbf0f4fc4 | 2023-08-24 00:17:00 -04:00 | Andrei Betlen | Update llama.cpp
e632c59fa0 | 2023-08-17 20:53:04 -04:00 | Andrei Betlen | Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
a240aa6b25 | 2023-08-17 21:00:44 +09:00 | c0sogi | Fix typos in llama_grammar
620cd2fd69 | 2023-08-14 22:41:47 -04:00 | Andrei Betlen | Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
5788f1f2b2 | 2023-08-14 22:41:37 -04:00 | Andrei Betlen | Remove unused import
6dfb98117e | 2023-08-14 22:40:41 -04:00 | Andrei | Merge pull request #600 from Vuizur/main
    Add py.typed to conform with PEP 561
b99e758045 | 2023-08-14 22:40:10 -04:00 | Andrei | Merge pull request #604 from aliencaocao/main-1
    Add doc string for n_gpu_layers argument and make -1 offload all layers
b345d60987 | 2023-08-14 22:33:30 -04:00 | Andrei Betlen | Update llama.cpp
c471871d0b | 2023-08-13 11:21:28 +08:00 | Billy Cao | Make n_gpu_layers=-1 offload all layers
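
A minimal usage sketch of the convention from #604 (the model path is a placeholder):

    from llama_cpp import Llama

    # n_gpu_layers=-1 offloads every layer to the GPU; 0 keeps the whole
    # model on the CPU, and any positive n offloads at most n layers.
    llm = Llama(model_path="./model.gguf", n_gpu_layers=-1)
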
d018c7b01d | 2023-08-12 18:41:47 +08:00 | Billy Cao | Add doc string for n_gpu_layers argument
17dd7fa8e0 | 2023-08-11 09:58:48 +02:00 | Hannes Krumbiegel | Add py.typed
88184ed217 | 2023-08-09 22:04:35 +08:00 | MeouSker77 | Fix CJK output again
66fb0345e8 | 2023-08-08 15:08:54 -04:00 | Andrei Betlen | Move grammar to function call argument
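
The grammar support introduced in the commits below was originally attached at construction time; after 66fb0345e8 it is supplied per call. A minimal sketch, assuming `LlamaGrammar.from_string` and a `grammar` keyword on completion calls (the model path is a placeholder):

    from llama_cpp import Llama, LlamaGrammar

    # A tiny GBNF grammar that only accepts the strings "yes" or "no".
    grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

    llm = Llama(model_path="./model.gguf")  # placeholder path

    # Passing the grammar per call (rather than at construction) lets one
    # model serve both constrained and unconstrained generations.
    out = llm("Is the sky blue? Answer: ", grammar=grammar, max_tokens=4)
    print(out["choices"][0]["text"])
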
1e844d3238 | 2023-08-08 15:07:28 -04:00 | Andrei Betlen | Fix
843b7ccd90 | 2023-08-08 14:43:02 -04:00 | Andrei Betlen | Merge branch 'main' into c0sogi/main
d015bdb4f8 | 2023-08-08 14:35:06 -04:00 | Andrei Betlen | Add mul_mat_q option
f6a7850e1a | 2023-08-08 14:30:58 -04:00 | Andrei Betlen | Update llama.cpp
0d7d2031a9 | 2023-08-07 17:02:33 +09:00 | c0sogi | Prevent memory access error by llama_grammar_free
b07713cb9f | 2023-08-07 15:16:25 +09:00 | c0sogi | Reset grammar for every generation
418aa83b01 | 2023-08-07 02:21:37 +09:00 | c0sogi | Added grammar-based sampling
ac188a21f3 | 2023-08-05 14:43:35 +09:00 | c0sogi | Added low-level grammar API
ce57920e60 | 2023-07-28 14:45:18 -04:00 | Andrei Betlen | Suppress llama.cpp output when loading model
a9b9f0397c | 2023-07-28 01:53:08 -04:00 | Andrei Betlen | Format
abc538fcd5 | 2023-07-28 01:43:00 -04:00 | Andrei Betlen | fix: annoying bug where attribute exceptions were drowning out file-not-found exceptions
426dbfe3f4 | 2023-07-25 18:29:59 +10:00 | Shouyi Wang | Change tensor_split from array to pointer
078902a6fe | 2023-07-24 15:55:26 -04:00 | Andrei Betlen | Add llama_grammar_accept_token
bf901773b0 | 2023-07-24 15:42:31 -04:00 | Andrei Betlen | Add llama_sample_grammar
1b6997d69f | 2023-07-24 15:42:07 -04:00 | Andrei Betlen | Convert constants to Python types and allow Python types in low-level API
343480364f | 2023-07-24 15:26:08 -04:00 | Andrei Betlen | Merge branch 'main' into v0.2-wip
11dd2bf382 | 2023-07-24 14:09:24 -04:00 | Andrei Betlen | Add temporary rms_norm_eps parameter
8cd64d4ac3 | 2023-07-24 13:52:12 -04:00 | Andrei Betlen | Add rms_eps_norm
0f09f10e8c | 2023-07-24 19:38:24 +02:00 | bretello | Add support for Llama 2 70B
77c9f496b0 | 2023-07-24 13:19:54 -04:00 | Andrei Betlen | Merge branch 'main' into v0.2-wip
401309d11c | 2023-07-24 13:11:10 -04:00 | Andrei Betlen | Revert "Merge pull request #521 from bretello/main"
    This reverts commit 07f0f3a386, reversing changes made to d8a3ddbb1c.
07f0f3a386 | 2023-07-24 13:09:28 -04:00 | Andrei | Merge pull request #521 from bretello/main
    Raise exception when `llama_load_model_from_file` fails
d8a3ddbb1c | 2023-07-24 13:08:06 -04:00 | Andrei Betlen | Update llama.cpp
985d559971 | 2023-07-24 13:04:34 -04:00 | Andrei Betlen | Update llama.cpp
8be7d67f7e | 2023-07-24 14:42:37 +02:00 | bretello | Raise exception when llama_load_model_from_file fails
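
#521 (merged in 07f0f3a386 and reverted shortly after in 401309d11c above) aimed to surface model-load failures as Python exceptions rather than a later null-pointer crash. A hedged sketch of what that means for callers; the exact exception type is an assumption:

    from llama_cpp import Llama

    try:
        llm = Llama(model_path="./does-not-exist.gguf")  # placeholder
    except Exception as exc:  # the exception type here is an assumption
        print(f"Model load failed: {exc}")
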