TK-Master
b8438f70b5
Added support for min_p ( #921 )
...
* Added support for min_p
My small contribution to this great project.
Ref: https://github.com/ggerganov/llama.cpp/pull/3841
Closes: https://github.com/abetlen/llama-cpp-python/issues/911
* Fix for negative temp (sample_softmax)
2023-11-20 23:21:33 -05:00
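The min_p sampler referenced in the commit above (llama.cpp PR 3841) keeps only tokens whose probability is at least `min_p` times the probability of the most likely token. A minimal illustrative sketch (the function name and return shape are hypothetical, not the library's API):

```python
import math

def min_p_filter(logits, min_p=0.05):
    # Convert logits to probabilities via a numerically stable softmax.
    m = max(logits)
    probs = [math.exp(l - m) for l in logits]
    total = sum(probs)
    probs = [p / total for p in probs]
    # Keep candidates whose probability is >= min_p * p_max.
    threshold = min_p * max(probs)
    return [i for i, p in enumerate(probs) if p >= threshold]

print(min_p_filter([3.0, 2.9, -5.0], min_p=0.1))  # → [0, 1]
```

The two near-equal logits survive while the improbable third token is cut, which is the intended behavior of min_p relative to a fixed top_p cutoff.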
Andrei Betlen
a34d480141
Fix #929
2023-11-20 22:50:59 -05:00
Andrei Betlen
2c2afa320f
Update llama.cpp
2023-11-20 14:11:33 -05:00
Andrei Betlen
f2901d840e
Bump version
2023-11-14 14:10:00 -05:00
Andrei Betlen
01846a76b9
Bump version
2023-11-10 16:36:12 -05:00
Andrei Betlen
b7e60b66f4
Bump version
2023-11-10 06:21:24 -05:00
Andrei Betlen
6f0b0b1b84
Fix sampling bug when logits_all=False
2023-11-10 05:15:41 -05:00
Andrei Betlen
d9b38e3e3a
Potential bugfix for eval
2023-11-10 04:41:19 -05:00
Andrei Betlen
b84d76a844
Fix: add default stop sequence to chatml chat format
2023-11-10 04:24:48 -05:00
Andrei Betlen
1b376c62b7
Update functionary for new OpenAI API
2023-11-10 02:51:58 -05:00
Andrei Betlen
17da8fb446
Add missing tool_calls finish_reason
2023-11-10 02:51:06 -05:00
Andrei Betlen
770df34436
Add $ref and $defs support to json schema converter
2023-11-10 02:50:46 -05:00
Andrei Betlen
faeae181b1
Fix: json_schema_to_gbnf should take string dump of json schema as input
2023-11-10 02:50:17 -05:00
Andrei Betlen
e7962d2c73
Fix: default max_tokens matches openai api (16 for completion, max length for chat completion)
2023-11-10 02:49:27 -05:00
Andrei Betlen
b62c449839
Bugfix: missing response_format for functionary and llava chat handlers
2023-11-09 00:55:23 -05:00
Andrei Betlen
fd41ed3a90
Add set_seed to Llama class
2023-11-08 11:09:41 -05:00
Andrei Betlen
ca4cb88351
Fix destructor NoneType is not callable error
2023-11-08 11:05:45 -05:00
Andrei Betlen
01cb3a0381
Bump version
2023-11-08 00:54:54 -05:00
Andrei Betlen
b30b9c338b
Add JSON mode support. Closes #881
2023-11-08 00:07:16 -05:00
Andrei Betlen
4852a6a39c
Fix built in GBNF grammar rules
2023-11-08 00:06:22 -05:00
Andrei Betlen
64f5153c35
Add seed parameter to chat handlers
2023-11-07 23:41:29 -05:00
Andrei Betlen
86aeb9f3a1
Add seed parameter support for completion and chat_completion requests. Closes #884
2023-11-07 23:37:28 -05:00
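The point of the `seed` parameter added above is reproducible sampling: the same seed yields the same token choices. A minimal sketch (standalone illustration, not the library's internal sampler):

```python
import random

def sample_token(probs, seed=None):
    # Draw one token index from a categorical distribution;
    # a fixed seed makes the draw deterministic.
    rng = random.Random(seed)
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

probs = [0.1, 0.6, 0.3]
assert sample_token(probs, seed=42) == sample_token(probs, seed=42)
```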
Damian Stewart
aab74f0b2b
Multimodal Support (Llava 1.5) ( #821 )
...
* llava v1.5 integration
* Point llama.cpp to fork
* Add llava shared library target
* Fix type
* Update llama.cpp
* Add llava api
* Revert changes to llama and llama_cpp
* Update llava example
* Add types for new gpt-4-vision-preview api
* Fix typo
* Update llama.cpp
* Update llama_types to match OpenAI v1 API
* Update ChatCompletionFunction type
* Reorder request parameters
* More API type fixes
* Even More Type Updates
* Add parameter for custom chat_handler to Llama class
* Fix circular import
* Convert to absolute imports
* Fix
* Fix pydantic Jsontype bug
* Accept list of prompt tokens in create_completion
* Add llava1.5 chat handler
* Add Multimodal notebook
* Clean up examples
* Add server docs
---------
Co-authored-by: Andrei Betlen <abetlen@gmail.com>
2023-11-07 22:48:51 -05:00
Andrei Betlen
56171cf7bf
Bump version
2023-11-06 09:37:55 -05:00
Andrei Betlen
be0add1b2d
Fix type bug
2023-11-06 09:30:38 -05:00
Andrei Betlen
e214a58422
Refactor Llama class internals
2023-11-06 09:16:36 -05:00
Andrei Betlen
bbffdaebaa
Refactor autotokenizer format to reusable function
2023-11-06 09:07:27 -05:00
Joe
4ff8def4d0
#717 : Add support for Huggingface Autotokenizer ( #790 )
...
Co-authored-by: Andrei <abetlen@gmail.com>
2023-11-05 18:06:36 -05:00
earonesty
3580e2c5df
Update llama_chat_format.py ( #869 )
...
* Update llama_chat_format.py
properly format llama2 with first-message prompt embedded
* Update llama_chat_format.py
2023-11-05 17:00:13 -05:00
Andrei Betlen
f0b30ef7dc
Update llama.cpp
2023-11-05 16:57:10 -05:00
Andrei Betlen
2ec043af76
Clean up stdout / stderr suppression
2023-11-03 13:02:15 -04:00
Andrei Betlen
4ea7027c41
Rename internal only module utils to _utils
2023-11-03 12:55:55 -04:00
Andrei Betlen
df9362eeea
Update llama.cpp
2023-11-03 11:34:50 -04:00
Andrei
3af7b21ff1
Add functionary support ( #784 )
...
* Add common grammars and json-schema-to-grammar utility function from llama.cpp
* Pass functions to format function
* Add basic functionary formatting
* Add LlamaChatHandler for more complex chat use cases
* Add function calling example notebook
* Add support for regular chat completions alongside function calling
2023-11-03 02:12:14 -04:00
Andrei
ab028cb878
Migrate inference to llama_batch and llama_decode api ( #795 )
...
* Add low-level batching notebook
* fix: tokenization of special characters: (#850 )
It should behave like llama.cpp, where most out of the box usages
treat special characters accordingly
* Update CHANGELOG
* Cleanup
* Fix runner label
* Update notebook
* Use llama_decode and batch api
* Support logits_all parameter
---------
Co-authored-by: Antoine Lizee <antoine.lizee@gmail.com>
2023-11-02 20:13:57 -04:00
Andrei Betlen
8350de9a18
Bump version
2023-11-02 15:53:01 -04:00
Andrei Betlen
011b95d7f3
Fix name 'open' is not defined exception. Closes #860
2023-11-02 15:30:55 -04:00
Andrei Betlen
fa83cc5f9c
Update llama.cpp
...
Fix build examples
Exclude examples directory
Revert cmake changes
Try actions/checkout@v4
Try to update submodules
Revert
Update llama.cpp
Fix build examples
Exclude examples directory
Revert cmake changes
Try actions/checkout@v4
Try to update submodules
Revert
2023-11-02 14:28:15 -04:00
Antoine Lizee
4d4e0f11e2
fix: tokenization of special characters: ( #850 )
...
It should behave like llama.cpp, where most out of the box usages
treat special characters accordingly
2023-11-02 14:28:14 -04:00
Andrei Betlen
6b3aa7fc8f
Bump version
2023-11-01 19:25:03 -04:00
Sujeendran Menon
7b136bb5b1
Fix for shared library not found and compile issues in Windows ( #848 )
...
* fix windows library dll name issue
* Updated README.md Windows instructions
* Update llama_cpp.py to handle different windows dll file versions
2023-11-01 18:55:57 -04:00
cebtenzzre
eefd76fe81
llama: fix exception in Llama.__del__ ( #846 )
2023-11-01 18:53:57 -04:00
David Ponce
3fc9147218
Iterate over tokens that should be biased rather than the entire vocabulary. ( #851 )
2023-11-01 18:53:47 -04:00
Marko Tasic
9c8f4dca5f
fixed Llama._create_completion suffix check, it can be either None or str instance ( #854 )
2023-11-01 18:52:50 -04:00
Daniel Thuerck
5f8f369d1b
Pass-Through grammar parameter in web server. ( #855 ) Closes #778
2023-11-01 18:51:12 -04:00
Adam Katora
25cb710281
Update llama_types.py ( #849 )
...
Minor typo fix, funcion -> function
2023-11-01 18:50:11 -04:00
Andrei Betlen
d808fd436c
Update llama.cpp
2023-10-31 21:29:35 -04:00
Andrei Betlen
53861c9e53
Update llama.cpp
2023-10-24 03:13:32 -04:00
gmcgoldr
09a8406c83
Fix streaming doesn't return finish reason ( #798 )
...
When streaming, the yield that contains the finish reason can be skipped. This change ensures that yield isn't skipped.
2023-10-19 02:55:56 -04:00
Andrei Betlen
28c2b884e2
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-10-19 02:55:31 -04:00
Andrei Betlen
ff580031d2
Update llama.cpp
2023-10-19 02:55:08 -04:00
Xiaoyu Kevin Hu
a315128d66
update value check for n_gpu_layers field ( #826 )
2023-10-18 18:25:25 -04:00
Pierre Alexandre SCHEMBRI
10304d75fc
Make use of suppress_stdout_stderr when freeing model ( #803 )
2023-10-15 13:52:43 -04:00
Ma, Guokai
a1ac199980
Fix repeat greeting ( #808 )
...
* fix repeated greeting
* remove separator between role and message
2023-10-15 13:52:21 -04:00
Eric Liu
b50166500e
Add validation for tensor_split size exceeding LLAMA_MAX_DEVICES ( #820 )
...
* Add validation for tensor_split size exceeding LLAMA_MAX_DEVICES
* reword
2023-10-15 13:51:51 -04:00
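The validation in #820 above amounts to rejecting a `tensor_split` list with more entries than devices llama.cpp was built for. A sketch of that check (the constant's value here is a stand-in; the real one comes from the compiled llama.cpp library):

```python
LLAMA_MAX_DEVICES = 16  # hypothetical value for illustration

def validate_tensor_split(tensor_split):
    # Reject a split list longer than the maximum device count.
    if tensor_split is not None and len(tensor_split) > LLAMA_MAX_DEVICES:
        raise ValueError(
            f"tensor_split has {len(tensor_split)} entries but "
            f"LLAMA_MAX_DEVICES is {LLAMA_MAX_DEVICES}"
        )
    return tensor_split
```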
Andrei Betlen
d6a130a052
Print traceback on server error
2023-10-10 15:56:04 -04:00
Andrei Betlen
43dfe1e2ab
Update llama.cpp
2023-10-05 16:07:49 -04:00
Andrei Betlen
a7d17b8ac9
Update llama.cpp
2023-10-03 15:23:35 -04:00
Andrei Betlen
305482bd41
Add chatml chat format
2023-09-30 21:01:34 -04:00
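The ChatML format added above wraps each message in `<|im_start|>`/`<|im_end|>` delimiters. A minimal sketch of the prompt construction (the helper name is hypothetical; the real formatter lives in llama_chat_format.py):

```python
def format_chatml(messages):
    # Render a list of {"role", "content"} dicts as a ChatML prompt,
    # ending with an open assistant turn for the model to complete.
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    prompt += "<|im_start|>assistant\n"
    return prompt

p = format_chatml([{"role": "user", "content": "hi"}])
```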
Andrei Betlen
5ef5280ef9
Log server exceptions to stdout
2023-09-30 19:13:36 -04:00
Andrei Betlen
fab4bccc35
Bump version
2023-09-30 16:04:46 -04:00
Andrei Betlen
d696251fbe
Fix logits_all bug
2023-09-30 16:02:35 -04:00
Andrei Betlen
6ee413d79e
Bump version
2023-09-30 13:23:09 -04:00
Andrei Betlen
42bb721d64
Fix bug in embedding
2023-09-30 13:20:22 -04:00
Andrei Betlen
5d62d55a82
Bump version
2023-09-30 00:07:06 -04:00
Andrei Betlen
386c88b68e
Bump version
2023-09-29 20:07:31 -04:00
Andrei Betlen
d9bce17794
Update server params
2023-09-29 19:59:12 -04:00
Andrei Betlen
3720c739d4
Update llama.cpp
2023-09-29 19:58:21 -04:00
Andrei
3bca7708fb
Configurable Chat Formats ( #711 )
...
* Add configurable default chat completion format.
* Remove chat_template file to avoid circular import
* Update llama_types
* Add chat format
2023-09-29 19:52:04 -04:00
Josh XT
a945404b4a
Fix rope scaling defaults ( #767 )
...
* Fix rope scale with backwards compatibility
* Fix defaults
* Fix op
* Remove backwards compatibility
* Check single val
2023-09-29 16:03:57 -04:00
Andrei Betlen
1a1c3dc418
Update llama.cpp
2023-09-28 22:42:03 -04:00
Andrei Betlen
4177ae6d34
Bump version
2023-09-25 14:38:38 -04:00
Viacheslav/Slava Tradunsky
3d5e5b1c04
Adds openai-processing-ms response header ( #748 )
2023-09-25 13:55:58 -04:00
Andrei Betlen
dbca136fea
Update llama_types and names to match openai api
2023-09-20 15:38:26 -04:00
Andrei Betlen
38e34c97f0
Update llama.cpp
2023-09-18 16:11:27 -04:00
Andrei Betlen
8d75016549
Install required runtime dlls to package directory on windows
2023-09-16 14:57:49 -04:00
Andrei Betlen
acf18fcdf0
Bump version
2023-09-15 14:22:21 -04:00
Andrei Betlen
b047b3034e
Remove confusing helpstring from server cli args. Closes #719
2023-09-15 14:09:43 -04:00
Andrei Betlen
24fec0b242
Bump version
2023-09-14 18:33:08 -04:00
Andrei Betlen
8474665625
Update base_path to fix issue resolving dll in windows isolation container.
2023-09-14 14:51:43 -04:00
Andrei Betlen
507bcc7171
Bump version
2023-09-13 23:15:23 -04:00
Andrei Betlen
0449d29b9f
Fix boolean env vars and cli arguments
2023-09-13 23:09:57 -04:00
earonesty
58a6e42cc0
Update app.py ( #705 )
2023-09-13 23:01:34 -04:00
Andrei Betlen
f4090a0bb2
Add numa support, low level api users must now explicitly call llama_backend_init at the start of their programs.
2023-09-13 23:00:43 -04:00
Andrei Betlen
c999325e8e
Fix boolean cli flags
2023-09-13 22:56:10 -04:00
Andrei Betlen
4daf77e546
Format
2023-09-13 21:23:23 -04:00
Andrei Betlen
2920c4bf7e
Update server params. Added lora_base, lora_path, low_vram, and main_gpu. Removed rms_norm_eps and n_gqa (deprecated in llama.cpp)
2023-09-13 21:23:13 -04:00
Andrei Betlen
6a20293fc2
Reorder init params to match llama.cpp order
2023-09-13 21:20:26 -04:00
Andrei Betlen
c8f9b8a734
Explicitly make all init params other than model_path into keyword only params
2023-09-13 21:19:47 -04:00
Andrei Betlen
a68f9e2791
Add kwargs to init to catch extra params
2023-09-13 21:19:02 -04:00
Andrei Betlen
9e345a47a2
remove print
2023-09-13 21:12:27 -04:00
Andrei Betlen
517f9ed80b
Convert missed llama.cpp constants into standard python types
2023-09-13 21:11:52 -04:00
Andrei Betlen
c4c440ba2d
Fix tensor_split cli option
2023-09-13 20:00:42 -04:00
Andrei Betlen
203ede4ba2
Bump version
2023-09-13 18:07:08 -04:00
Andrei Betlen
759405c84b
Fix issue with Literal and Optional cli arguments not working. Closes #702
2023-09-13 18:06:12 -04:00
Devrim
da9df78db0
Add X-Request-ID request header for mirroring custom IDs. ( #703 )
2023-09-13 16:18:31 -04:00
Andrei Betlen
8e13520796
Bump version
2023-09-13 01:47:58 -04:00
Andrei Betlen
2787663a25
Bump version
2023-09-12 21:00:01 -04:00
Andrei Betlen
6e89775759
Bump version
2023-09-12 18:57:01 -04:00
Andrei Betlen
bb4e67e7aa
Using dynamic version
2023-09-12 18:56:36 -04:00
Andrei Betlen
1910793f56
Merge branch 'main' into v0.2-wip
2023-09-12 16:43:32 -04:00
Andrei Betlen
c7901f1141
Bump version
2023-09-12 16:16:40 -04:00
janvdp
33ce931cce
merge upstream
2023-09-09 21:21:04 +02:00
Andrei Betlen
d3f63211ef
Update llama.cpp
2023-09-09 12:12:32 -04:00
janvdp
da0fdafc32
import version in __init__.py
2023-09-05 21:09:28 +02:00
janvdp
6e8e64d09a
add version file
2023-09-05 21:09:08 +02:00
Andrei Betlen
186626d58e
Update llama.cpp
2023-09-01 14:26:13 -04:00
Andrei Betlen
47de3ab104
Update llama.cpp
2023-08-29 07:36:20 -04:00
Andrei Betlen
3f76e1de52
CJK PR minor cleanup
2023-08-29 07:21:59 -04:00
Andrei
bae44ec8bf
Merge pull request #309 from MeouSker77/fix-CJK
...
Fix CJK and emoji stream output
2023-08-29 06:58:10 -04:00
Andrei Betlen
e0dcbc28a1
Update llama.cpp
2023-08-28 10:33:45 -04:00
Andrei Betlen
4887973c22
Update llama.cpp
2023-08-27 12:59:20 -04:00
Andrei Betlen
3a29d65f45
Update llama.cpp
2023-08-26 23:36:24 -04:00
Andrei Betlen
5de8009706
Add copilot-codex completions endpoint for drop-in copilot usage
2023-08-25 17:49:14 -04:00
Andrei Betlen
ac47d55577
Merge branch 'main' into v0.2-wip
2023-08-25 15:45:22 -04:00
Andrei Betlen
ef23d1e545
Update llama.cpp
2023-08-25 14:35:53 -04:00
Andrei Betlen
48cf43b427
Use _with_model variants for tokenization
2023-08-25 13:43:16 -04:00
Andrei Betlen
8ac59465b9
Strip leading space when de-tokenizing.
2023-08-25 04:56:48 -04:00
Andrei Betlen
c2d1deaa8a
Update llama.cpp
2023-08-24 18:01:42 -04:00
Andrei Betlen
db982a861f
Fix
2023-08-24 01:01:12 -04:00
Andrei Betlen
4ed632c4b3
Remove deprecated params
2023-08-24 01:01:05 -04:00
Andrei Betlen
cf405f6764
Merge branch 'main' into v0.2-wip
2023-08-24 00:30:51 -04:00
Andrei Betlen
bbbf0f4fc4
Update llama.cpp
2023-08-24 00:17:00 -04:00
Andrei Betlen
e632c59fa0
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-08-17 20:53:04 -04:00
c0sogi
a240aa6b25
Fix typos in llama_grammar
2023-08-17 21:00:44 +09:00
Andrei Betlen
620cd2fd69
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-08-14 22:41:47 -04:00
Andrei Betlen
5788f1f2b2
Remove unused import
2023-08-14 22:41:37 -04:00
Andrei
6dfb98117e
Merge pull request #600 from Vuizur/main
...
Add py.typed to conform with PEP 561
2023-08-14 22:40:41 -04:00
Andrei
b99e758045
Merge pull request #604 from aliencaocao/main-1
...
Add doc string for n_gpu_layers argument and make -1 offload all layers
2023-08-14 22:40:10 -04:00
Andrei Betlen
b345d60987
Update llama.cpp
2023-08-14 22:33:30 -04:00
Billy Cao
c471871d0b
make n_gpu_layers=-1 offload all layers
2023-08-13 11:21:28 +08:00
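The semantics added above are simple: `n_gpu_layers=-1` offloads every layer. A sketch of the resolution logic (illustrative helper, not the library's code):

```python
def resolve_gpu_layers(n_gpu_layers, n_layers_in_model):
    # -1 (or any negative value) means "offload all layers";
    # otherwise cap the request at the model's layer count.
    if n_gpu_layers < 0:
        return n_layers_in_model
    return min(n_gpu_layers, n_layers_in_model)

assert resolve_gpu_layers(-1, 32) == 32
```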
Billy Cao
d018c7b01d
Add doc string for n_gpu_layers argument
2023-08-12 18:41:47 +08:00
Hannes Krumbiegel
17dd7fa8e0
Add py.typed
2023-08-11 09:58:48 +02:00
MeouSker77
88184ed217
fix CJK output again
2023-08-09 22:04:35 +08:00
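The CJK/emoji streaming bug arises because a detokenized chunk boundary can fall in the middle of a multi-byte UTF-8 character. The fix is to decode the byte stream incrementally instead of chunk-by-chunk; a minimal sketch using the standard library:

```python
import codecs

# Split "你好" (6 UTF-8 bytes) mid-character to simulate a stream chunk
# boundary; the incremental decoder buffers the partial character.
data = "你好".encode("utf-8")
chunks = [data[:2], data[2:]]
decoder = codecs.getincrementaldecoder("utf-8")()
out = "".join(decoder.decode(c) for c in chunks)
print(out)  # → 你好
```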
Andrei Betlen
66fb0345e8
Move grammar to function call argument
2023-08-08 15:08:54 -04:00
Andrei Betlen
1e844d3238
fix
2023-08-08 15:07:28 -04:00
Andrei Betlen
843b7ccd90
Merge branch 'main' into c0sogi/main
2023-08-08 14:43:02 -04:00
Andrei Betlen
d015bdb4f8
Add mul_mat_q option
2023-08-08 14:35:06 -04:00
Andrei Betlen
f6a7850e1a
Update llama.cpp
2023-08-08 14:30:58 -04:00
c0sogi
0d7d2031a9
prevent memory access error by llama_grammar_free
2023-08-07 17:02:33 +09:00
c0sogi
b07713cb9f
reset grammar for every generation
2023-08-07 15:16:25 +09:00
c0sogi
418aa83b01
Added grammar based sampling
2023-08-07 02:21:37 +09:00
c0sogi
ac188a21f3
Added low level grammar API
2023-08-05 14:43:35 +09:00
Andrei Betlen
ce57920e60
Suppress llama.cpp output when loading model.
2023-07-28 14:45:18 -04:00
Andrei Betlen
a9b9f0397c
Format
2023-07-28 01:53:08 -04:00
Andrei Betlen
abc538fcd5
fix: annoying bug where attribute exceptions were drowning out file not found exceptions
2023-07-28 01:43:00 -04:00
Shouyi Wang
426dbfe3f4
Change tensor_split from array to pointer
2023-07-25 18:29:59 +10:00
Andrei Betlen
078902a6fe
Add llama_grammar_accept_token
2023-07-24 15:55:26 -04:00
Andrei Betlen
bf901773b0
Add llama_sample_grammar
2023-07-24 15:42:31 -04:00
Andrei Betlen
1b6997d69f
Convert constants to python types and allow python types in low-level api
2023-07-24 15:42:07 -04:00
Andrei Betlen
343480364f
Merge branch 'main' into v0.2-wip
2023-07-24 15:26:08 -04:00
Andrei Betlen
11dd2bf382
Add temporary rms_norm_eps parameter
2023-07-24 14:09:24 -04:00
Andrei Betlen
8cd64d4ac3
Add rms_eps_norm
2023-07-24 13:52:12 -04:00
bretello
0f09f10e8c
add support for llama2 70b
2023-07-24 19:38:24 +02:00
Andrei Betlen
77c9f496b0
Merge branch 'main' into v0.2-wip
2023-07-24 13:19:54 -04:00
Andrei Betlen
401309d11c
Revert "Merge pull request #521 from bretello/main"
...
This reverts commit 07f0f3a386, reversing changes made to d8a3ddbb1c.
2023-07-24 13:11:10 -04:00
Andrei
07f0f3a386
Merge pull request #521 from bretello/main
...
raise exception when `llama_load_model_from_file` fails
2023-07-24 13:09:28 -04:00
Andrei Betlen
d8a3ddbb1c
Update llama.cpp
2023-07-24 13:08:06 -04:00
Andrei Betlen
985d559971
Update llama.cpp
2023-07-24 13:04:34 -04:00
bretello
8be7d67f7e
raise exception when llama_load_model_from_file fails
2023-07-24 14:42:37 +02:00
Andrei Betlen
436036aa67
Merge branch 'main' into v0.2-wip
2023-07-21 12:42:38 -04:00
Andrei Betlen
b83728ad1e
Update llama.cpp
2023-07-21 12:33:27 -04:00
Andrei Betlen
0538ba1dab
Merge branch 'main' into v0.2-wip
2023-07-20 19:06:26 -04:00
Andrei Betlen
01435da740
Update llama.cpp
2023-07-20 18:54:25 -04:00
Andrei Betlen
28a111704b
Fix compatibility with older python versions
2023-07-20 18:52:10 -04:00
Andrei Betlen
d10ce62714
Revert ctypes argtype change
2023-07-20 18:51:53 -04:00
Andrei
365d9a4367
Merge pull request #481 from c0sogi/main
...
Added `RouteErrorHandler` for server
2023-07-20 17:41:42 -04:00
Vinicius
a8551477f5
Update llama_cpp.py - Fix c_char_p to Array[c_char_p] and c_float to Array[c_float]
2023-07-20 17:29:11 -03:00
Carlos Tejada
0756a2d3fb
Now the last token is sent when stream=True
2023-07-19 22:47:14 -04:00
Andrei Betlen
0b121a7456
Format
2023-07-19 03:48:27 -04:00
Andrei Betlen
b43917c144
Add functions parameters
2023-07-19 03:48:20 -04:00
Andrei Betlen
19ba9d3845
Use numpy arrays for logits_processors and stopping_criteria. Closes #491
2023-07-18 19:27:41 -04:00
shutup
5ed8bf132f
expose RoPE param to server start
2023-07-18 16:34:36 +08:00
c0sogi
1551ba10bd
Added RouteErrorHandler for server
2023-07-16 14:57:39 +09:00
Andrei Betlen
8ab098e49d
Re-order Llama class params
2023-07-15 15:35:08 -04:00
Andrei Betlen
e4f9db37db
Fix context_params struct layout
2023-07-15 15:34:55 -04:00
Andrei Betlen
f0797a6054
Merge branch main into custom_rope
2023-07-15 15:11:01 -04:00
randoentity
3f8f276f9f
Add bindings for custom_rope
2023-07-10 17:37:46 +02:00
Andrei Betlen
a86bfdf0a5
bugfix: truncate completion max_tokens to fit context length by default
2023-07-09 18:13:29 -04:00
Andrei Betlen
6f70cc4b7d
bugfix: pydantic settings missing / changed fields
2023-07-09 18:03:31 -04:00
Andrei
5d756de314
Merge branch 'main' into add_unlimited_max_tokens
2023-07-08 02:37:38 -04:00
Andrei
b8e0bed295
Merge pull request #453 from wu-qing-157/main
...
Fix incorrect token_logprobs (due to indexing after sorting)
2023-07-08 02:31:52 -04:00
Andrei Betlen
d6e6aad927
bugfix: fix compatibility bug with openai api on last token
2023-07-08 00:06:11 -04:00
Andrei Betlen
4f2b5d0b53
Format
2023-07-08 00:05:10 -04:00
Andrei Betlen
34c505edf2
perf: convert pointer to byref
2023-07-07 22:54:07 -04:00
Andrei Betlen
52753b77f5
Upgrade fastapi to 0.100.0 and pydantic v2
2023-07-07 21:38:46 -04:00
Andrei Betlen
11eae75211
perf: avoid allocating new buffers during sampling
2023-07-07 19:28:53 -04:00
Andrei Betlen
a14d8a9b3f
perf: assign to candidates data structure instead
2023-07-07 18:58:43 -04:00
wu-qing-157
9e61661518
fix indexing token_logprobs after sorting
2023-07-07 10:18:49 +00:00
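The #453 bug above: `token_logprobs` were read from a candidate array after it had been sorted, so the indices no longer matched token ids. The fix is to compute the logprob from the original logits by token id; a sketch:

```python
import math

def token_logprob(logits, token_id):
    # log softmax of one token, computed against the *unsorted* logits
    # so token_id still indexes the right entry.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return logits[token_id] - log_z

lp = token_logprob([1.0, 2.0, 3.0], 2)
```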
Andrei Betlen
57d8ec3899
Add setting to control request interruption
2023-07-07 03:37:23 -04:00
Andrei Betlen
4c7cdcca00
Add interruptible streaming requests for llama-cpp-python server. Closes #183
2023-07-07 03:04:17 -04:00
Andrei Betlen
98ae4e58a3
Update llama.cpp
2023-07-06 17:57:56 -04:00
Andrei Betlen
b994296c75
Update llama.cpp
2023-07-05 01:00:14 -04:00
Andrei Betlen
c67f786360
Update llama.cpp
2023-06-29 01:08:15 -04:00
Andrei Betlen
e34f4414cf
Hotfix: logits_all bug
2023-06-29 00:57:27 -04:00
Andrei Betlen
a2ede37bd5
Load logits directly into scores buffer
2023-06-29 00:45:46 -04:00
Andrei Betlen
b95b0ffbeb
Use pre-allocated buffers to store input_ids and scores
2023-06-29 00:40:47 -04:00
Andrei Betlen
a5e059c053
Free model when llama is unloaded. Closes #434
2023-06-28 23:58:55 -04:00
Andrei Betlen
3379dc40a1
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-06-26 08:50:48 -04:00
Andrei Betlen
952228407e
Update llama.cpp
2023-06-26 08:50:38 -04:00
Andrei Betlen
b4a3db3e54
Update type signature
2023-06-26 08:50:30 -04:00
Andrei
5eb4ebb041
Merge branch 'main' into fix-state-pickle
2023-06-26 08:45:02 -04:00
samfundev
d788fb49bf
Only concatenate after all batches are done
2023-06-24 15:51:46 -04:00
Andrei
877ca6d016
Merge branch 'main' into fix-state-pickle
2023-06-23 15:13:07 -04:00
Alexey
282698b6d3
server: pass seed param from command line to llama
2023-06-23 00:19:24 +04:00
Andrei Betlen
e37798777e
Update llama.cpp
2023-06-20 11:25:10 -04:00
Andrei Betlen
d410f12fae
Update docs. Closes #386
2023-06-17 13:38:48 -04:00
Andrei Betlen
9f528f4715
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-06-17 13:37:17 -04:00
Andrei Betlen
d7153abcf8
Update llama.cpp
2023-06-16 23:11:14 -04:00
imaprogrammer
fd9f294b3a
Update llama.py: include how many input tokens in the ValueError exception
2023-06-16 14:11:57 +05:30
Andrei Betlen
1e20be6d0c
Add low_vram to server settings
2023-06-14 22:13:42 -04:00
Andrei Betlen
44b83cada5
Add low_vram parameter
2023-06-14 22:12:33 -04:00
Andrei Betlen
f7c5cfaf50
Format server options
2023-06-14 22:08:28 -04:00
Andrei Betlen
9c41a3e990
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-06-14 21:50:43 -04:00
Andrei
f568baeef1
Merge pull request #351 from player1537-forks/th/add-logits-bias-parameter
...
Add support for `logit_bias` and `logit_bias_type` parameters
2023-06-14 21:49:56 -04:00
Andrei Betlen
f27393ab7e
Add additional verbose logs for cache
2023-06-14 21:46:48 -04:00
Andrei Betlen
4cefb70cd0
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-06-14 21:40:19 -04:00
Andrei Betlen
715f98c591
Update llama.cpp
2023-06-14 21:40:13 -04:00
Okabintaro
10b0cb727b
fix: Make LLamaState pickable for disk cache
...
I fixed the issue by making the saved state a bytes object instead of the ctypes one which can't be pickled.
2023-06-13 12:03:31 +02:00
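The pickling fix described above works because a plain `bytes` object pickles cleanly while a ctypes array does not. A minimal sketch of the conversion (the ctypes array here is a stand-in for the buffer returned by llama.cpp's state-copy call):

```python
import ctypes
import pickle

# Stand-in for the raw ctypes state buffer.
raw = (ctypes.c_uint8 * 4)(1, 2, 3, 4)

# Snapshot it as bytes: this is what makes the saved state picklable
# for the disk cache.
state_bytes = bytes(raw)
restored = pickle.loads(pickle.dumps(state_bytes))
assert restored == b"\x01\x02\x03\x04"
```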
Gabor
3129a0e7e5
correction to add back environment variable support <3 docker
2023-06-11 01:11:24 +01:00
Gabor
3ea31930e5
fixes abetlen/llama-cpp-python #358
2023-06-11 00:58:08 +01:00
Andrei Betlen
21acd7901f
Re-enable cache
2023-06-10 12:22:31 -04:00
Andrei Betlen
6639371407
Update llama.cpp
2023-06-10 12:17:38 -04:00
Tanner Hobson
eb7645b3ba
Add support for logit_bias and logit_bias_type parameters
2023-06-09 13:13:08 -04:00
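The `logit_bias` parameter added above applies user-supplied offsets to specific token ids before sampling. A sketch that also reflects the later #851 optimization of iterating only over the biased tokens rather than the whole vocabulary:

```python
def apply_logit_bias(logits, logit_bias):
    # Add each bias to its token id; looping over the (small) bias dict
    # instead of the full vocabulary keeps this cheap.
    out = list(logits)
    for token_id, bias in logit_bias.items():
        out[token_id] += bias
    return out

assert apply_logit_bias([0.0, 1.0, 2.0], {1: -100.0}) == [0.0, -99.0, 2.0]
```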
Andrei Betlen
0da655b3be
Temporarily disable cache until save state bug is fixed.
2023-06-09 11:10:24 -04:00
Andrei Betlen
556c7edf47
Truncate max_tokens if it exceeds context length
2023-06-09 10:57:36 -04:00
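The truncation above (and the later default-truncation bugfix) clamps `max_tokens` so prompt plus completion fits in the context window. A sketch of the arithmetic:

```python
def effective_max_tokens(max_tokens, n_prompt_tokens, n_ctx):
    # The completion can use at most the context space left after
    # the prompt; never return a negative budget.
    remaining = n_ctx - n_prompt_tokens
    return max(0, min(max_tokens, remaining))

assert effective_max_tokens(256, 400, 512) == 112
```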
Andrei Betlen
0c42168508
Fix cache implementation breaking changes
2023-06-08 13:19:23 -04:00
Andrei Betlen
607d217caa
Allow both .so and .dylib extensions for macos
2023-06-08 00:27:19 -04:00
Andrei
0f0b447fa4
Merge pull request #289 from Maximilian-Winter/main
...
Diskcache implementation for llama state.
2023-06-06 17:03:03 -04:00
Andrei
d508573fb4
Merge pull request #328 from spirilis/mirostat
...
Added mirostat support for completions, chat completions API
2023-06-06 16:58:23 -04:00
Andrei Betlen
aad4b17f52
Update llama.cpp
2023-06-06 16:23:55 -04:00
Andrei Betlen
8b4968ea67
Fix resize issue. Closes #330
2023-06-06 11:37:57 -04:00
Eric B
9b1c9e902c
Added mirostat support for completions, chat completions API
2023-06-05 22:37:11 -04:00
Andrei Betlen
7b57420ea9
Update llama.cpp
2023-06-05 18:17:29 -04:00
Maximilian-Winter
29f9c9cca3
Added both LlamaCache classes, Disk and RAM.
2023-05-31 22:33:56 +02:00
Maximilian Winter
9ea7a379d3
Merge branch 'abetlen:main' into main
2023-05-31 12:55:51 +02:00
Andrei
49fe9395a1
Merge pull request #277 from abetlen/add-numpy-support
...
Use numpy for internal buffers
2023-05-29 20:59:30 -04:00
Maximilian-Winter
719c3eae0a
Diskcache implementation for llama state.
2023-05-28 15:56:38 +02:00
Andrei Betlen
80066f0b80
Use async routes
2023-05-27 09:12:58 -04:00
Andrei Betlen
c2b59a5f59
Remove unused import
2023-05-26 22:59:29 -04:00
Andrei Betlen
8f2b4456ad
Format
2023-05-26 22:04:31 -04:00
Andrei Betlen
84e313bd6e
Align dtype to match c structs
2023-05-26 22:02:16 -04:00
Andrei Betlen
66bcb8d70d
Merge branch 'main' into add-numpy-support
2023-05-26 20:25:03 -04:00
Andrei Betlen
8f35bddd7e
Fix stop sequence performance bug.
2023-05-26 20:23:49 -04:00
Andrei Betlen
7fc7bc30e7
Remove usage of eval_tokens for cache check
2023-05-26 20:12:05 -04:00
Andrei Betlen
fe331ec589
Replace eval_logits and eval_tokens with numpy arrays
2023-05-26 20:03:31 -04:00
Andrei Betlen
8eb9769f78
Add support for numpy
2023-05-26 16:12:45 -04:00
Andrei Betlen
4c1b7f7a76
Bugfix for logits_processor and stopping_criteria
2023-05-26 10:25:28 -04:00
Andrei Betlen
433a2e3e8a
Add extra logits_processor and stopping_criteria
2023-05-26 03:13:24 -04:00
Andrei Betlen
f74b90ed67
Fix streaming hang on last token when cache is on.
2023-05-26 03:03:01 -04:00
Andrei Betlen
5be8354e11
Added tokenizer
2023-05-26 03:00:51 -04:00
Andrei Betlen
8fa2ef1959
Format
2023-05-26 03:00:35 -04:00
Andrei Betlen
6bd1075291
Merge branch 'Maximilian-Winter/main' into main
2023-05-26 02:56:11 -04:00
Andrei Betlen
ca01f98e09
Add LlamaTokenizer class
2023-05-25 14:11:33 -04:00
Andrei Betlen
1d247e0f35
Add StoppingCriteria and LogitsProcessor to generate to match huggingface API
2023-05-25 14:04:54 -04:00
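The hooks added above mirror the huggingface API: a logits processor rewrites scores before sampling, and a stopping criterion can end generation early. A minimal sketch treating both as plain callables (the type aliases and helper are illustrative, not the library's definitions):

```python
from typing import Callable, List

# A processor maps (input_ids, scores) -> new scores;
# a criterion maps (input_ids, scores) -> bool (stop?).
LogitsProcessor = Callable[[List[int], List[float]], List[float]]
StoppingCriteria = Callable[[List[int], List[float]], bool]

def ban_token(token_id: int) -> LogitsProcessor:
    # Returns a processor that makes one token unsamplable.
    def processor(input_ids, scores):
        scores = list(scores)
        scores[token_id] = float("-inf")
        return scores
    return processor

def stop_on_length(input_ids, scores):
    return len(input_ids) >= 8

scores = ban_token(0)([1, 2], [0.5, 0.1])
assert scores[0] == float("-inf")
```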
Maximilian-Winter
c2585b6889
Fixed list elements typing
2023-05-25 10:54:08 +02:00
Maximilian-Winter
da463e6c8c
Added types to logit processor list and stop criteria list
2023-05-25 09:07:16 +02:00
Maximilian-Winter
c05fcdf42f
Fixed none value of logits processors.
2023-05-24 22:02:06 +02:00
Maximilian-Winter
5bb780d455
Implemented logit processors and stop criteria
2023-05-24 21:55:44 +02:00
Andrei Betlen
fab064ded9
Remove unnecessary ffi calls
2023-05-23 17:56:21 -04:00
Andrei Betlen
0adb9ec37a
Use model_name and index in response
2023-05-21 21:30:03 -04:00
Andrei Betlen
922b5b2bfd
Merge branch 'main' into server-embedding
2023-05-21 21:21:38 -04:00
Andrei Betlen
cd102e9da1
Cache shared library function calls for static tokens
2023-05-21 19:18:56 -04:00
Andrei Betlen
b895511cca
Fix penalize_nl
2023-05-21 18:38:06 -04:00
Andrei Betlen
03e2947b03
Fix unnecessary memory allocation while sampling
2023-05-21 18:36:34 -04:00
Andrei Betlen
fafe47114c
Update llama.cpp
2023-05-21 17:47:21 -04:00
Andrei Betlen
76b1d2cd20
Change properties to functions to match token functions
2023-05-20 08:24:06 -04:00
Andrei Betlen
a7ba85834f
Add n_ctx, n_vocab, and n_embd properties
2023-05-20 08:13:41 -04:00
Simon Chabot
e783f1c191
feat: make embedding support list of string as input
...
makes the /v1/embedding route similar to OpenAI api.
2023-05-20 01:23:32 +02:00
Andrei Betlen
01a010be52
Fix llama_cpp and Llama type signatures. Closes #221
2023-05-19 11:59:33 -04:00
Andrei Betlen
a8cd169251
Bugfix: Stop sequences can be strings
2023-05-19 03:15:08 -04:00
Andrei Betlen
17d4271b04
Fix logprobs for completions and implement for streaming logprobs.
2023-05-19 02:20:27 -04:00
Andrei Betlen
a634a2453b
Allow first logprob token to be null to match openai api
2023-05-19 02:04:57 -04:00
Andrei Betlen
dc39cc0fa4
Use server sent events function for streaming completion
2023-05-19 02:04:30 -04:00
Andrei Betlen
f0ec6e615e
Stream tokens instead of text chunks
2023-05-18 11:35:59 -04:00
Andrei Betlen
21d8f5fa9f
Remove unused union
2023-05-18 11:35:15 -04:00
Andrei Betlen
61d58e7b35
Check for CUDA_PATH before adding
2023-05-17 15:26:38 -04:00
Andrei Betlen
7c95895626
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-05-17 15:19:32 -04:00
Aneesh Joy
e9794f91f2
Fixed CUBLAS DLL load issue in Windows
2023-05-17 18:04:58 +01:00
Andrei Betlen
4f342795e5
Update token checks
2023-05-17 03:35:13 -04:00
Andrei Betlen
f5c2f998ab
Format
2023-05-17 02:00:39 -04:00
Andrei Betlen
d28b753ed2
Implement penalize_nl
2023-05-17 01:53:26 -04:00
Andrei Betlen
f11e2a781c
Fix last_n_tokens_size
2023-05-17 01:42:51 -04:00
Andrei Betlen
7e55244540
Fix top_k value. Closes #220
2023-05-17 01:41:42 -04:00
Andrei Betlen
a7c9e38287
Update variable name
2023-05-16 18:07:25 -04:00
Andrei Betlen
a3352923c7
Add model_alias option to override model_path in completions. Closes #39
2023-05-16 17:22:00 -04:00
Andrei Betlen
a65125c0bd
Add sampling defaults for generate
2023-05-16 09:35:50 -04:00
Andrei Betlen
cbac19bf24
Add winmode arg only on windows if python version supports it
2023-05-15 09:15:01 -04:00
Andrei Betlen
c804efe3f0
Fix obscure Windows DLL issue. Closes #208
2023-05-14 22:08:11 -04:00
Andrei Betlen
cdf59768f5
Update llama.cpp
2023-05-14 00:04:22 -04:00
Andrei Betlen
7a536e86c2
Allow model to tokenize strings longer than context length and set add_bos. Closes #92
2023-05-12 14:28:22 -04:00
Andrei Betlen
8740ddc58e
Only support generating one prompt at a time.
2023-05-12 07:21:46 -04:00
Andrei Betlen
8895b9002a
Revert "llama_cpp server: prompt is a string". Closes #187
...
This reverts commit b9098b0ef7.
2023-05-12 07:16:57 -04:00
Andrei Betlen
7be584fe82
Add missing tfs_z parameter
2023-05-11 21:56:19 -04:00
Andrei Betlen
cdeaded251
Bugfix: Ensure logs are printed when streaming
2023-05-10 16:12:17 -04:00
Lucas Doyle
02e8a018ae
llama_cpp server: document presence_penalty and frequency_penalty, mark as supported
2023-05-09 16:25:00 -07:00
Andrei Betlen
d957422bf4
Implement sampling as in llama.cpp main example
2023-05-08 21:21:25 -04:00
Andrei Betlen
93a9019bb1
Merge branch 'main' of github.com:abetlen/llama_cpp_python into Maximilian-Winter/main
2023-05-08 19:57:09 -04:00
Andrei Betlen
82d138fe54
Fix: default repeat_penalty
2023-05-08 18:49:11 -04:00
Andrei Betlen
29f094bbcf
Bugfix: not falling back to environment variables when a default value is set.
2023-05-08 14:46:25 -04:00