Author | Commit | Date | Message

Andrei Betlen | d696251fbe | 2023-09-30 16:02:35 -04:00 | Fix logits_all bug
Andrei Betlen | 42bb721d64 | 2023-09-30 13:20:22 -04:00 | Fix bug in embedding
Andrei | 3bca7708fb | 2023-09-29 19:52:04 -04:00 | Configurable Chat Formats (#711)
    * Add configurable default chat completion format.
    * Remove chat_template file to avoid circular import
    * Update llama_types
    * Add chat format
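
Note: a minimal usage sketch of the new option, assuming "llama-2" is one of the built-in format names (model path illustrative):

    from llama_cpp import Llama

    # chat_format selects the prompt template used by create_chat_completion.
    llm = Llama(model_path="./models/llama-2-7b-chat.gguf", chat_format="llama-2")

    response = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Name the planets in the solar system."},
        ],
    )
    print(response["choices"][0]["message"]["content"])
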
Josh XT | a945404b4a | 2023-09-29 16:03:57 -04:00 | Fix rope scaling defaults (#767)
    * Fix rope scale with backwards compatibility
    * Fix defaults
    * Fix op
    * Remove backwards compatibility
    * Check single val
Andrei Betlen | 1a1c3dc418 | 2023-09-28 22:42:03 -04:00 | Update llama.cpp
Andrei Betlen | 38e34c97f0 | 2023-09-18 16:11:27 -04:00 | Update llama.cpp
Andrei Betlen | f4090a0bb2 | 2023-09-13 23:00:43 -04:00 | Add NUMA support; low-level API users must now explicitly call llama_backend_init at the start of their programs
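
Note: what the new requirement looks like for low-level API consumers; a minimal sketch, and the exact binding signature may differ by version:

    import llama_cpp

    # Low-level users must initialize the backend themselves before loading
    # any model; the flag enables llama.cpp's NUMA optimizations.
    llama_cpp.llama_backend_init(numa=False)

    # ... use the low-level llama_cpp.llama_* functions here ...

    # Release backend resources at shutdown.
    llama_cpp.llama_backend_free()
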
Andrei Betlen | 6a20293fc2 | 2023-09-13 21:20:26 -04:00 | Reorder init params to match llama.cpp order
Andrei Betlen | c8f9b8a734 | 2023-09-13 21:19:47 -04:00 | Explicitly make all init params other than model_path keyword-only
Andrei Betlen | a68f9e2791 | 2023-09-13 21:19:02 -04:00 | Add kwargs to init to catch extra params
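
Note: the combined effect of the three __init__ changes above, sketched; every parameter name other than model_path is illustrative:

    class Llama:
        def __init__(
            self,
            model_path: str,  # the only positional parameter
            *,                # everything below is keyword-only
            n_ctx: int = 512,
            n_gpu_layers: int = 0,
            seed: int = 1337,
            **kwargs,         # catches extra/renamed params from older call sites
        ):
            ...
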
Andrei Betlen | 9e345a47a2 | 2023-09-13 21:12:27 -04:00 | Remove print
Andrei Betlen | 517f9ed80b | 2023-09-13 21:11:52 -04:00 | Convert missed llama.cpp constants into standard Python types
Andrei Betlen | c4c440ba2d | 2023-09-13 20:00:42 -04:00 | Fix tensor_split CLI option
Andrei Betlen | 1910793f56 | 2023-09-12 16:43:32 -04:00 | Merge branch 'main' into v0.2-wip
Andrei Betlen | 3f76e1de52 | 2023-08-29 07:21:59 -04:00 | CJK PR minor cleanup
Andrei | bae44ec8bf | 2023-08-29 06:58:10 -04:00 | Merge pull request #309 from MeouSker77/fix-CJK
    Fix CJK and emoji stream output
Andrei Betlen | 4887973c22 | 2023-08-27 12:59:20 -04:00 | Update llama.cpp
Andrei Betlen | 3a29d65f45 | 2023-08-26 23:36:24 -04:00 | Update llama.cpp
Andrei Betlen | ac47d55577 | 2023-08-25 15:45:22 -04:00 | Merge branch 'main' into v0.2-wip
Andrei Betlen | 48cf43b427 | 2023-08-25 13:43:16 -04:00 | Use _with_model variants for tokenization
Andrei Betlen | 8ac59465b9 | 2023-08-25 04:56:48 -04:00 | Strip leading space when de-tokenizing
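
Note: why the leading-space strip matters; a sketch assuming a SentencePiece-style vocabulary (model path illustrative):

    from llama_cpp import Llama

    llm = Llama(model_path="./models/model.bin", vocab_only=True)

    tokens = llm.tokenize(b"Hello world")
    # SentencePiece encodes a leading space into the first piece; stripping
    # it on detokenize lets the round trip return the original bytes.
    assert llm.detokenize(tokens) == b"Hello world"
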
Andrei Betlen | 4ed632c4b3 | 2023-08-24 01:01:05 -04:00 | Remove deprecated params
Andrei Betlen | cf405f6764 | 2023-08-24 00:30:51 -04:00 | Merge branch 'main' into v0.2-wip
Andrei Betlen | bbbf0f4fc4 | 2023-08-24 00:17:00 -04:00 | Update llama.cpp
Andrei Betlen | 620cd2fd69 | 2023-08-14 22:41:47 -04:00 | Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
Andrei Betlen | 5788f1f2b2 | 2023-08-14 22:41:37 -04:00 | Remove unused import
Billy Cao | c471871d0b | 2023-08-13 11:21:28 +08:00 | Make n_gpu_layers=-1 offload all layers
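
Note: usage sketch (model path illustrative):

    from llama_cpp import Llama

    # -1 now means "offload every layer", so callers no longer need to know
    # a model's exact layer count to fully offload it.
    llm = Llama(model_path="./models/model.bin", n_gpu_layers=-1)
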
Billy Cao | d018c7b01d | 2023-08-12 18:41:47 +08:00 | Add docstring for n_gpu_layers argument
MeouSker77 | 88184ed217 | 2023-08-09 22:04:35 +08:00 | Fix CJK output again
Andrei Betlen | 66fb0345e8 | 2023-08-08 15:08:54 -04:00 | Move grammar to function call argument
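
Note: a sketch of the grammar argument now accepted per call (grammar text and model path illustrative):

    from llama_cpp import Llama, LlamaGrammar

    llm = Llama(model_path="./models/model.bin")

    # The grammar travels with the request rather than the Llama instance,
    # so each call can constrain output differently.
    yes_no = LlamaGrammar.from_string('root ::= "yes" | "no"')
    out = llm.create_completion("Is water wet? ", grammar=yes_no, max_tokens=4)
    print(out["choices"][0]["text"])
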
Andrei Betlen | 1e844d3238 | 2023-08-08 15:07:28 -04:00 | fix
Andrei Betlen | 843b7ccd90 | 2023-08-08 14:43:02 -04:00 | Merge branch 'main' into c0sogi/main
Andrei Betlen | d015bdb4f8 | 2023-08-08 14:35:06 -04:00 | Add mul_mat_q option
c0sogi | b07713cb9f | 2023-08-07 15:16:25 +09:00 | Reset grammar for every generation
c0sogi | 418aa83b01 | 2023-08-07 02:21:37 +09:00 | Added grammar-based sampling
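
Note: a sketch of what grammar-based sampling does, with an illustrative GBNF rule:

    from llama_cpp import LlamaGrammar

    # During sampling, any token that would make the output underivable
    # from `root` is masked out, so generation can only emit digit strings.
    grammar = LlamaGrammar.from_string('root ::= [0-9]+')
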
Andrei Betlen | ce57920e60 | 2023-07-28 14:45:18 -04:00 | Suppress llama.cpp output when loading model
Andrei Betlen | a9b9f0397c | 2023-07-28 01:53:08 -04:00 | Format
Andrei Betlen | abc538fcd5 | 2023-07-28 01:43:00 -04:00 | fix: annoying bug where attribute exceptions were drowning out file-not-found exceptions
Shouyi Wang | 426dbfe3f4 | 2023-07-25 18:29:59 +10:00 | Change tensor_split from array to pointer
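
Note: a sketch of what the pointer change means at the ctypes layer; names here are illustrative, not the binding's actual code:

    import ctypes

    tensor_split = [0.6, 0.4]  # fraction of the model assigned to each GPU

    # The struct field now holds a float*, so the Python list is marshalled
    # into a ctypes array passed by pointer; the array must outlive the
    # params struct that references it.
    c_tensor_split = (ctypes.c_float * len(tensor_split))(*tensor_split)
    ptr = ctypes.cast(c_tensor_split, ctypes.POINTER(ctypes.c_float))
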
Andrei Betlen | 343480364f | 2023-07-24 15:26:08 -04:00 | Merge branch 'main' into v0.2-wip
Andrei Betlen | 11dd2bf382 | 2023-07-24 14:09:24 -04:00 | Add temporary rms_norm_eps parameter
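
Note: usage sketch; the parameter was a stopgap until the epsilon could be read from model files, and the value shown is the one commonly cited for LLaMA 2 70B:

    from llama_cpp import Llama

    # LLaMA 2 70B used a different RMSNorm epsilon than earlier models, so
    # it had to be supplied explicitly at load time (path illustrative).
    llm = Llama(model_path="./models/llama-2-70b.bin", rms_norm_eps=1e-5)
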
Andrei Betlen | 8cd64d4ac3 | 2023-07-24 13:52:12 -04:00 | Add rms_eps_norm
bretello | 0f09f10e8c | 2023-07-24 19:38:24 +02:00 | Add support for LLaMA 2 70B
Andrei Betlen | 0538ba1dab | 2023-07-20 19:06:26 -04:00 | Merge branch 'main' into v0.2-wip
Andrei | 365d9a4367 | 2023-07-20 17:41:42 -04:00 | Merge pull request #481 from c0sogi/main
    Added `RouteErrorHandler` for server
Carlos Tejada | 0756a2d3fb | 2023-07-19 22:47:14 -04:00 | Now the last token is sent when stream=True
Andrei Betlen | b43917c144 | 2023-07-19 03:48:20 -04:00 | Add functions parameters
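
Note: a sketch of the OpenAI-style function metadata now accepted by chat completion; the schema shape follows the OpenAI API, and the model path is illustrative:

    from llama_cpp import Llama

    llm = Llama(model_path="./models/model.bin")

    response = llm.create_chat_completion(
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        functions=[
            {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ],
        function_call="auto",
    )
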
Andrei Betlen | 19ba9d3845 | 2023-07-18 19:27:41 -04:00 | Use numpy arrays for logits_processors and stopping_criteria. Closes #491
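
Note: a logits processor is now a callable over numpy arrays rather than Python lists; a sketch, assuming LogitsProcessorList is the expected wrapper (token id and model path illustrative):

    import numpy as np
    from llama_cpp import Llama, LogitsProcessorList

    llm = Llama(model_path="./models/model.bin")

    def ban_token(token_id: int):
        # scores arrive as one float32 logit per vocab entry, so they can
        # be edited in place without a per-token list round-trip.
        def processor(input_ids: np.ndarray, scores: np.ndarray) -> np.ndarray:
            scores[token_id] = -np.inf
            return scores
        return processor

    out = llm.create_completion(
        "Hello", logits_processor=LogitsProcessorList([ban_token(0)])
    )
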
c0sogi | 1551ba10bd | 2023-07-16 14:57:39 +09:00 | Added RouteErrorHandler for server
Andrei Betlen | 8ab098e49d | 2023-07-15 15:35:08 -04:00 | Re-order Llama class params