Andrei
ab028cb878
Migrate inference to the llama_batch and llama_decode API (#795)
...
* Add low-level batching notebook
* fix: tokenization of special characters (#850)
It should behave like llama.cpp, where most out-of-the-box usages
treat special characters accordingly
* Update CHANGELOG
* Cleanup
* Fix runner label
* Update notebook
* Use llama_decode and batch api
* Support logits_all parameter
---------
Co-authored-by: Antoine Lizee <antoine.lizee@gmail.com>
2023-11-02 20:13:57 -04:00
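The llama_batch API packs tokens from one or more sequences into a single llama_decode call. A minimal pure-Python sketch of the batch layout, where the field names only loosely mirror llama_batch (this is an illustration, not the real ctypes struct), including the logits_all behavior mentioned above:

```python
from dataclasses import dataclass, field

@dataclass
class Batch:
    # Parallel arrays, one entry per token in the batch.
    tokens: list = field(default_factory=list)   # token ids
    pos: list = field(default_factory=list)      # position within its sequence
    seq_id: list = field(default_factory=list)   # owning sequence id
    logits: list = field(default_factory=list)   # compute logits for this token?

def add_tokens(batch, tokens, seq, logits_all=False):
    """Append one sequence's tokens; request logits only for the last
    token unless logits_all is set."""
    for i, tok in enumerate(tokens):
        batch.tokens.append(tok)
        batch.pos.append(i)
        batch.seq_id.append(seq)
        batch.logits.append(logits_all or i == len(tokens) - 1)

batch = Batch()
add_tokens(batch, [101, 102, 103], seq=0)
add_tokens(batch, [201, 202], seq=1)
print(len(batch.tokens))  # 5
print(batch.logits)       # [False, False, True, False, True]
```

Packing several sequences into one decode call is what enables parallel generation; the real structs live in llama.cpp's C API.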
Andrei Betlen
fa83cc5f9c
Update llama.cpp
...
Fix build examples
Exclude examples directory
Revert cmake changes
Try actions/checkout@v4
Try to update submodules
Revert
2023-11-02 14:28:15 -04:00
Antoine Lizee
4d4e0f11e2
fix: tokenization of special characters (#850)
...
It should behave like llama.cpp, where most out-of-the-box usages
treat special characters accordingly
2023-11-02 14:28:14 -04:00
cebtenzzre
eefd76fe81
llama: fix exception in Llama.__del__ (#846)
2023-11-01 18:53:57 -04:00
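The classic pitfall this kind of fix addresses: if __init__ raises, __del__ still runs on the half-built object and its attribute lookups fail. A generic sketch of the guard pattern (the class and attribute names here are illustrative, not the library's):

```python
class Model:
    def __init__(self, path):
        if path is None:
            raise ValueError("model path required")  # __init__ can fail early
        self.handle = object()  # stand-in for a loaded native model

    def __del__(self):
        # If __init__ raised, self.handle was never set; getattr avoids an
        # AttributeError surfacing during garbage collection.
        handle = getattr(self, "handle", None)
        if handle is not None:
            pass  # free the native model here

try:
    Model(None)
except ValueError:
    pass  # __del__ on the half-built object no longer raises
```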
Marko Tasic
9c8f4dca5f
fixed Llama._create_completion suffix check; it can be either None or a str instance (#854)
2023-11-01 18:52:50 -04:00
Andrei Betlen
53861c9e53
Update llama.cpp
2023-10-24 03:13:32 -04:00
gmcgoldr
09a8406c83
Fix streaming not returning finish reason (#798)
...
When streaming, the yield that contains the finish reason can be skipped. This change ensures that yield isn't skipped.
2023-10-19 02:55:56 -04:00
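The guarantee described above can be shown with a toy streaming generator (the chunk shape and names are illustrative, not the library's actual types): a final chunk carrying finish_reason is always emitted, never skipped.

```python
def stream_chunks(tokens, stop_token=None):
    """Yield one chunk per generated token, then a final chunk that
    always carries the finish_reason."""
    finish_reason = "length"
    out = []
    for tok in tokens:
        if tok == stop_token:
            finish_reason = "stop"
            break
        out.append({"text": tok, "finish_reason": None})
    for chunk in out:
        yield chunk
    # Emitted unconditionally, so clients always learn why generation ended.
    yield {"text": "", "finish_reason": finish_reason}

chunks = list(stream_chunks(["Hello", " world", "<stop>"], stop_token="<stop>"))
print(chunks[-1]["finish_reason"])  # stop
```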
Andrei Betlen
ff580031d2
Update llama.cpp
2023-10-19 02:55:08 -04:00
Pierre Alexandre SCHEMBRI
10304d75fc
Make use of suppress_stdout_stderr when freeing model (#803)
2023-10-15 13:52:43 -04:00
Eric Liu
b50166500e
Add validation for tensor_split size exceeding LLAMA_MAX_DEVICES (#820)
...
* Add validation for tensor_split size exceeding LLAMA_MAX_DEVICES
* reword
2023-10-15 13:51:51 -04:00
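A sketch of what such a validation might look like (the constant's value and the helper are assumptions for illustration; in the real bindings LLAMA_MAX_DEVICES comes from the compiled library):

```python
LLAMA_MAX_DEVICES = 16  # assumed value; really determined at compile time

def validate_tensor_split(tensor_split):
    """Reject a tensor_split longer than the number of devices llama.cpp
    supports, instead of silently writing past the fixed-size C array."""
    if tensor_split is not None and len(tensor_split) > LLAMA_MAX_DEVICES:
        raise ValueError(
            f"tensor_split has {len(tensor_split)} entries, "
            f"but LLAMA_MAX_DEVICES is {LLAMA_MAX_DEVICES}"
        )
    return tensor_split

validate_tensor_split([0.5, 0.5])  # fine: two devices
```

Failing fast with a Python ValueError is far friendlier than the out-of-bounds write that could otherwise occur on the C side.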
Andrei Betlen
d696251fbe
Fix logits_all bug
2023-09-30 16:02:35 -04:00
Andrei Betlen
42bb721d64
Fix bug in embedding
2023-09-30 13:20:22 -04:00
Andrei
3bca7708fb
Configurable Chat Formats (#711)
...
* Add configurable default chat completion format.
* Remove chat_template file to avoid circular import
* Update llama_types
* Add chat format
2023-09-29 19:52:04 -04:00
Josh XT
a945404b4a
Fix rope scaling defaults (#767)
...
* Fix rope scale with backwards compatibility
* Fix defaults
* Fix op
* Remove backwards compatibility
* Check single val
2023-09-29 16:03:57 -04:00
Andrei Betlen
1a1c3dc418
Update llama.cpp
2023-09-28 22:42:03 -04:00
Andrei Betlen
38e34c97f0
Update llama.cpp
2023-09-18 16:11:27 -04:00
Andrei Betlen
f4090a0bb2
Add NUMA support; low-level API users must now explicitly call llama_backend_init at the start of their programs.
2023-09-13 23:00:43 -04:00
Andrei Betlen
6a20293fc2
Reorder init params to match llama.cpp order
2023-09-13 21:20:26 -04:00
Andrei Betlen
c8f9b8a734
Explicitly make all init params other than model_path into keyword only params
2023-09-13 21:19:47 -04:00
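The two commits above (keyword-only init params, plus **kwargs to catch extras) combine into a signature pattern like the following sketch; the parameter names are illustrative, not the class's full API:

```python
class Llama:
    # The bare `*` makes everything after model_path keyword-only, so
    # callers can't break silently when parameter order changes between
    # releases; **kwargs swallows stray params instead of raising TypeError.
    def __init__(self, model_path, *, n_ctx=512, n_gpu_layers=0, seed=-1, **kwargs):
        self.model_path = model_path
        self.n_ctx = n_ctx
        self.n_gpu_layers = n_gpu_layers
        self.seed = seed
        self.extra = kwargs

llm = Llama("model.gguf", n_ctx=2048, not_a_real_param=True)
print(llm.extra)  # {'not_a_real_param': True}
```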
Andrei Betlen
a68f9e2791
Add kwargs to init to catch extra params
2023-09-13 21:19:02 -04:00
Andrei Betlen
9e345a47a2
remove print
2023-09-13 21:12:27 -04:00
Andrei Betlen
517f9ed80b
Convert missed llama.cpp constants into standard python types
2023-09-13 21:11:52 -04:00
Andrei Betlen
c4c440ba2d
Fix tensor_split cli option
2023-09-13 20:00:42 -04:00
Andrei Betlen
1910793f56
Merge branch 'main' into v0.2-wip
2023-09-12 16:43:32 -04:00
Andrei Betlen
3f76e1de52
cjk pr minor cleanup
2023-08-29 07:21:59 -04:00
Andrei
bae44ec8bf
Merge pull request #309 from MeouSker77/fix-CJK
...
Fix CJK and emoji stream output
2023-08-29 06:58:10 -04:00
Andrei Betlen
4887973c22
Update llama.cpp
2023-08-27 12:59:20 -04:00
Andrei Betlen
3a29d65f45
Update llama.cpp
2023-08-26 23:36:24 -04:00
Andrei Betlen
ac47d55577
Merge branch 'main' into v0.2-wip
2023-08-25 15:45:22 -04:00
Andrei Betlen
48cf43b427
Use _with_model variants for tokenization
2023-08-25 13:43:16 -04:00
Andrei Betlen
8ac59465b9
Strip leading space when de-tokenizing.
2023-08-25 04:56:48 -04:00
Andrei Betlen
4ed632c4b3
Remove deprecated params
2023-08-24 01:01:05 -04:00
Andrei Betlen
cf405f6764
Merge branch 'main' into v0.2-wip
2023-08-24 00:30:51 -04:00
Andrei Betlen
bbbf0f4fc4
Update llama.cpp
2023-08-24 00:17:00 -04:00
Andrei Betlen
620cd2fd69
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-08-14 22:41:47 -04:00
Andrei Betlen
5788f1f2b2
Remove unused import
2023-08-14 22:41:37 -04:00
Billy Cao
c471871d0b
make n_gpu_layers=-1 offload all layers
2023-08-13 11:21:28 +08:00
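One way the -1 sentinel can be resolved (an illustrative helper; the real binding may simply pass a sufficiently large value through to llama.cpp):

```python
def resolve_n_gpu_layers(n_gpu_layers, model_layer_count):
    """Interpret n_gpu_layers=-1 as 'offload every layer', clamping any
    requested count to the number of layers the model actually has."""
    if n_gpu_layers < 0:
        return model_layer_count
    return min(n_gpu_layers, model_layer_count)

print(resolve_n_gpu_layers(-1, 32))  # 32: everything on the GPU
print(resolve_n_gpu_layers(10, 32))  # 10: partial offload
```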
Billy Cao
d018c7b01d
Add doc string for n_gpu_layers argument
2023-08-12 18:41:47 +08:00
MeouSker77
88184ed217
fix CJK output again
2023-08-09 22:04:35 +08:00
Andrei Betlen
66fb0345e8
Move grammar to function call argument
2023-08-08 15:08:54 -04:00
Andrei Betlen
1e844d3238
fix
2023-08-08 15:07:28 -04:00
Andrei Betlen
843b7ccd90
Merge branch 'main' into c0sogi/main
2023-08-08 14:43:02 -04:00
Andrei Betlen
d015bdb4f8
Add mul_mat_q option
2023-08-08 14:35:06 -04:00
c0sogi
b07713cb9f
reset grammar for every generation
2023-08-07 15:16:25 +09:00
c0sogi
418aa83b01
Added grammar based sampling
2023-08-07 02:21:37 +09:00
Andrei Betlen
ce57920e60
Suppress llama.cpp output when loading model.
2023-07-28 14:45:18 -04:00
Andrei Betlen
a9b9f0397c
Format
2023-07-28 01:53:08 -04:00
Andrei Betlen
abc538fcd5
fix: annoying bug where attribute exceptions were drowning out file-not-found exceptions
2023-07-28 01:43:00 -04:00
Shouyi Wang
426dbfe3f4
Change tensor_split from array to pointer
2023-07-25 18:29:59 +10:00
Andrei Betlen
343480364f
Merge branch 'main' into v0.2-wip
2023-07-24 15:26:08 -04:00
Andrei Betlen
11dd2bf382
Add temporary rms_norm_eps parameter
2023-07-24 14:09:24 -04:00
Andrei Betlen
8cd64d4ac3
Add rms_eps_norm
2023-07-24 13:52:12 -04:00
bretello
0f09f10e8c
add support for llama2 70b
2023-07-24 19:38:24 +02:00
Andrei Betlen
0538ba1dab
Merge branch 'main' into v0.2-wip
2023-07-20 19:06:26 -04:00
Andrei
365d9a4367
Merge pull request #481 from c0sogi/main
...
Added `RouteErrorHandler` for server
2023-07-20 17:41:42 -04:00
Carlos Tejada
0756a2d3fb
Now the last token is sent when stream=True
2023-07-19 22:47:14 -04:00
Andrei Betlen
b43917c144
Add functions parameters
2023-07-19 03:48:20 -04:00
Andrei Betlen
19ba9d3845
Use numpy arrays for logits_processors and stopping_criteria. Closes #491
2023-07-18 19:27:41 -04:00
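Logits processors and stopping criteria in this style are callables over arrays. A sketch of the two shapes, using numpy as the commit describes (the temperature value and the 8-token limit are arbitrary for illustration):

```python
import numpy as np

def temperature_processor(input_ids: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """A logits processor: take the prompt-so-far and the raw scores,
    return adjusted scores (here, scaled by an assumed temperature of 0.5)."""
    return scores / 0.5

def length_stopping_criteria(input_ids: np.ndarray, scores: np.ndarray) -> bool:
    """A stopping criterion: halt once 8 tokens have been generated."""
    return input_ids.size >= 8

ids = np.arange(8)
scores = np.array([1.0, 2.0, 3.0])
print(temperature_processor(ids, scores))    # [2. 4. 6.]
print(length_stopping_criteria(ids, scores)) # True
```

Operating on numpy arrays instead of Python lists avoids a copy per sampled token, which matters when these hooks run on every step.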
c0sogi
1551ba10bd
Added RouteErrorHandler for server
2023-07-16 14:57:39 +09:00
Andrei Betlen
8ab098e49d
Re-order Llama class params
2023-07-15 15:35:08 -04:00
Andrei Betlen
f0797a6054
Merge branch 'main' into custom_rope
2023-07-15 15:11:01 -04:00
randoentity
3f8f276f9f
Add bindings for custom_rope
2023-07-10 17:37:46 +02:00
Andrei Betlen
a86bfdf0a5
bugfix: truncate completion max_tokens to fit context length by default
2023-07-09 18:13:29 -04:00
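The truncation logic amounts to clamping the request against the space left in the context window. A sketch (helper name and the non-positive-means-unlimited convention are assumptions for illustration):

```python
def effective_max_tokens(max_tokens, prompt_len, n_ctx):
    """Clamp the requested completion length so prompt + completion fits
    in the context window; a non-positive max_tokens means 'use all the
    remaining context'."""
    remaining = n_ctx - prompt_len
    if max_tokens <= 0 or max_tokens > remaining:
        return remaining
    return max_tokens

print(effective_max_tokens(256, 1900, 2048))  # 148: silently truncated
print(effective_max_tokens(16, 10, 2048))     # 16: fits as requested
```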
Andrei
5d756de314
Merge branch 'main' into add_unlimited_max_tokens
2023-07-08 02:37:38 -04:00
Andrei
b8e0bed295
Merge pull request #453 from wu-qing-157/main
...
Fix incorrect token_logprobs (due to indexing after sorting)
2023-07-08 02:31:52 -04:00
Andrei Betlen
d6e6aad927
bugfix: fix compatibility bug with openai api on last token
2023-07-08 00:06:11 -04:00
Andrei Betlen
4f2b5d0b53
Format
2023-07-08 00:05:10 -04:00
Andrei Betlen
34c505edf2
perf: convert pointer to byref
2023-07-07 22:54:07 -04:00
Andrei Betlen
11eae75211
perf: avoid allocating new buffers during sampling
2023-07-07 19:28:53 -04:00
Andrei Betlen
a14d8a9b3f
perf: assign to candidates data structure instead
2023-07-07 18:58:43 -04:00
wu-qing-157
9e61661518
fix indexing token_logprobs after sorting
2023-07-07 10:18:49 +00:00
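The bug class here is indexing a sorted copy with a pre-sort index. A self-contained sketch of the correct pattern (toy 3-token vocabulary; the function is illustrative, not the library's code):

```python
import math

def token_logprob(logits, token_id):
    """Return the chosen token's log-probability and the top-2 logprobs.
    Sorting a copy for the top-k report must not change how the chosen
    token is looked up: index the ORIGINAL logits, not the sorted copy."""
    log_z = math.log(sum(math.exp(x) for x in logits))
    top = [x - log_z for x in sorted(logits, reverse=True)[:2]]
    chosen = logits[token_id] - log_z  # pre-sort index into pre-sort array
    return chosen, top

chosen, top = token_logprob([0.0, 3.0, 1.0], token_id=2)
```

Had we indexed `sorted(logits, ...)[token_id]`, token 2 would wrongly get the logprob of the lowest-scoring token.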
Andrei Betlen
e34f4414cf
Hotfix: logits_all bug
2023-06-29 00:57:27 -04:00
Andrei Betlen
a2ede37bd5
Load logits directly into scores buffer
2023-06-29 00:45:46 -04:00
Andrei Betlen
b95b0ffbeb
Use pre-allocated buffers to store input_ids and scores
2023-06-29 00:40:47 -04:00
Andrei Betlen
a5e059c053
Free model when llama is unloaded. Closes #434
2023-06-28 23:58:55 -04:00
Andrei Betlen
3379dc40a1
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-06-26 08:50:48 -04:00
Andrei Betlen
952228407e
Update llama.cpp
2023-06-26 08:50:38 -04:00
Andrei Betlen
b4a3db3e54
Update type signature
2023-06-26 08:50:30 -04:00
Andrei
5eb4ebb041
Merge branch 'main' into fix-state-pickle
2023-06-26 08:45:02 -04:00
samfundev
d788fb49bf
Only concatenate after all batches are done
2023-06-24 15:51:46 -04:00
Andrei
877ca6d016
Merge branch 'main' into fix-state-pickle
2023-06-23 15:13:07 -04:00
Andrei Betlen
d410f12fae
Update docs. Closes #386
2023-06-17 13:38:48 -04:00
imaprogrammer
fd9f294b3a
Update llama.py: added the input token count to the ValueError exception
2023-06-16 14:11:57 +05:30
Andrei Betlen
44b83cada5
Add low_vram parameter
2023-06-14 22:12:33 -04:00
Andrei
f568baeef1
Merge pull request #351 from player1537-forks/th/add-logits-bias-parameter
...
Add support for `logit_bias` and `logit_bias_type` parameters
2023-06-14 21:49:56 -04:00
Okabintaro
10b0cb727b
fix: Make LlamaState picklable for disk cache
...
I fixed the issue by making the saved state a bytes object instead of the ctypes one, which can't be pickled.
2023-06-13 12:03:31 +02:00
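The bytes-instead-of-ctypes idea in miniature (the buffer here is a stand-in for the saved llama.cpp state, not the actual state layout):

```python
import ctypes
import pickle

# A ctypes buffer standing in for the native state. ctypes objects are
# often unpicklable (e.g. when they contain pointers), so the fix is to
# snapshot the buffer into plain bytes before saving.
raw = (ctypes.c_uint8 * 4)(1, 2, 3, 4)
state = bytes(raw)  # picklable copy of the buffer's contents

restored = pickle.loads(pickle.dumps(state))
print(restored == state)  # True

# When reloading, copy the bytes back into a fresh ctypes buffer.
buf = (ctypes.c_uint8 * len(restored)).from_buffer_copy(restored)
print(list(buf))  # [1, 2, 3, 4]
```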
Andrei Betlen
21acd7901f
Re-enable cache
2023-06-10 12:22:31 -04:00
Tanner Hobson
eb7645b3ba
Add support for logit_bias and logit_bias_type parameters
2023-06-09 13:13:08 -04:00
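In OpenAI-style APIs, logit_bias maps token ids to additive adjustments on the raw scores (logit_bias_type selects how the keys are interpreted). A standalone illustrative helper, not the PR's actual code:

```python
def apply_logit_bias(scores, logit_bias):
    """Add per-token biases to raw scores. Keys are token ids; a large
    negative bias effectively bans a token, a positive one promotes it."""
    out = list(scores)
    for token_id, bias in logit_bias.items():
        out[token_id] += bias
    return out

scores = [0.0, 1.0, 2.0]
print(apply_logit_bias(scores, {1: -100.0, 2: 5.0}))  # [0.0, -99.0, 7.0]
```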
Andrei Betlen
0da655b3be
Temporarily disable cache until save state bug is fixed.
2023-06-09 11:10:24 -04:00
Andrei Betlen
556c7edf47
Truncate max_tokens if it exceeds context length
2023-06-09 10:57:36 -04:00
Andrei Betlen
0c42168508
Fix cache implementation breaking changes
2023-06-08 13:19:23 -04:00
Andrei
0f0b447fa4
Merge pull request #289 from Maximilian-Winter/main
...
Diskcache implementation for llama state.
2023-06-06 17:03:03 -04:00
Andrei Betlen
8b4968ea67
Fix resize issue. Closes #330
2023-06-06 11:37:57 -04:00
Maximilian-Winter
29f9c9cca3
Added both LlamaCache classes, Disk and RAM.
2023-05-31 22:33:56 +02:00
Maximilian Winter
9ea7a379d3
Merge branch 'abetlen:main' into main
2023-05-31 12:55:51 +02:00
Maximilian-Winter
719c3eae0a
Diskcache implementation for llama state.
2023-05-28 15:56:38 +02:00
Andrei Betlen
8f2b4456ad
Format
2023-05-26 22:04:31 -04:00
Andrei Betlen
84e313bd6e
Align dtype to match c structs
2023-05-26 22:02:16 -04:00
Andrei Betlen
66bcb8d70d
Merge branch 'main' into add-numpy-support
2023-05-26 20:25:03 -04:00
Andrei Betlen
8f35bddd7e
Fix stop sequence performance bug.
2023-05-26 20:23:49 -04:00
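The usual performance bug with stop sequences is rescanning the entire completion on every new token. One way to fix that (an illustrative sketch; the actual change in the repo may differ) is to search only the freshly generated text plus a tail long enough to catch a stop sequence spanning the boundary:

```python
def find_stop(text, new_text, stop_sequences):
    """Find the earliest stop sequence without rescanning all of `text`:
    only `new_text` plus a (longest_stop - 1) overlap needs checking."""
    longest = max((len(s) for s in stop_sequences), default=0)
    window_start = max(0, len(text) - len(new_text) - (longest - 1))
    window = text[window_start:]
    for s in stop_sequences:
        idx = window.find(s)
        if idx != -1:
            return window_start + idx  # absolute index where the stop begins
    return -1

# "DONE" straddles the boundary between old text and the new chunk "NE":
print(find_stop("Hello worlDONE", "NE", ["DONE"]))  # 10
```

This keeps each per-token check O(window) instead of O(total generated text).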