Commit graph

549 commits

Author SHA1 Message Date
Andrei Betlen
d9a1d90fd7 Fix typo 2023-12-22 15:12:27 -05:00
Andrei Betlen
37556bf9c4 Bump version 2023-12-22 14:55:58 -05:00
Andrei Betlen
6d8bc090f9 fix: incorrect bindings for kv override. Based on #1011 2023-12-22 14:52:20 -05:00
Andrei Betlen
522aecb868 docs: add server config docs 2023-12-22 14:37:24 -05:00
Andrei Betlen
6473796343 Update llama.cpp 2023-12-22 14:10:34 -05:00
swg
4b01a873ef
server: Support None defaulting to infinity for completions (#111)
* Support defaulting to infinity or -1 for chat completions

* Check if completion_tokens is None in error handler.

* fix: max_tokens in create completion should match openai spec

* Fix __call__

---------

Co-authored-by: Andrei Betlen <abetlen@gmail.com>
2023-12-22 14:05:13 -05:00
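The defaulting behavior above can be sketched as follows: per the OpenAI spec, omitting `max_tokens` (i.e. `None`) means "generate until a stop condition or the context window is exhausted", and llama.cpp conventionally uses `-1` for the same. The function name and signature below are illustrative only, not the project's actual code:

```python
def normalize_max_tokens(max_tokens, ctx_remaining):
    """Map the OpenAI default (None, no explicit cap) and the llama.cpp
    convention (-1, "infinite") onto the number of tokens the context
    window can still hold."""
    if max_tokens is None or max_tokens < 0:
        return ctx_remaining
    # An explicit cap still cannot exceed the remaining context.
    return min(max_tokens, ctx_remaining)
```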
Dave
12b7f2f4e9
[Feat] Multi model support (#931)
* Update Llama class to handle chat_format & caching

* Add settings.py

* Add util.py & update __main__.py

* multimodel

* update settings.py

* cleanup

* delete util.py

* Fix /v1/models endpoint

* MultiLlama now iterable, app check-alive on "/"

* instant model init if file is given

* backward compatibility

* revert model param mandatory

* fix error

* handle individual model config json

* refactor

* revert chathandler/clip_model changes

* handle chat_handler in MultiLlama()

* split settings into server/llama

* reduce global vars

* Update LlamaProxy to handle config files

* Add free method to LlamaProxy

* update arg parsers & install server alias

* refactor cache settings

* change server executable name

* better var name

* whitespace

* Revert "whitespace"

This reverts commit bc5cf51c64a95bfc9926e1bc58166059711a1cd8.

* remove exe_name

* Fix merge bugs

* Fix type annotations

* Fix type annotations

* Fix uvicorn app factory

* Fix settings

* Refactor server

* Remove formatting fix

* Format

* Use default model if not found in model settings

* Fix

* Cleanup

* Fix

* Fix

* Remove unused CommandLineSettings

* Cleanup

* Support default name for copilot-codex models

---------

Co-authored-by: Andrei Betlen <abetlen@gmail.com>
2023-12-22 05:51:25 -05:00
Andrei Betlen
4a85442c35 Update llama.cpp 2023-12-22 00:12:37 -05:00
twaka
2f03fb0231
fix text_offset of multi-token characters (#1037)
* fix text_offsets for bytes tokens

* fix
2023-12-22 00:03:29 -05:00
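The offset problem arises because one multi-byte UTF-8 character can be split across several tokens, so text offsets must be computed on the decoded text, not the raw bytes. A minimal sketch of the idea (hypothetical helper, not the patched code):

```python
def text_offsets(token_bytes):
    # Offset of each token = length of the *decoded* prefix before it.
    # errors="ignore" drops a trailing partial character, so a token that
    # completes a split character reports that character's position.
    offsets, prefix = [], b""
    for tb in token_bytes:
        offsets.append(len(prefix.decode("utf-8", errors="ignore")))
        prefix += tb
    return offsets
```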
docmeth02
33cc623346
Implement openai api compatible authentication (#1010) 2023-12-21 13:44:49 -05:00
Andrei Betlen
a05b4da80a fix: float32 is not JSON serializable when streaming logits. 2023-12-18 18:40:36 -05:00
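The serialization failure above is easy to reproduce: `json.dumps` raises `TypeError` on NumPy scalars, so streamed logprobs must be cast to builtin `float` first. A minimal sketch of that fix (helper name is illustrative):

```python
import json
import numpy as np

def jsonable_logprobs(arr):
    # np.float32 is not JSON serializable; cast each value to the
    # builtin float before serializing a streamed chunk.
    return [float(x) for x in arr]

chunk = {"logprobs": jsonable_logprobs(np.array([-0.5, -1.25], dtype=np.float32))}
payload = json.dumps(chunk)
```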
Andrei Betlen
7df6c32544 Fix type annotations 2023-12-18 18:14:53 -05:00
Andrei Betlen
b703aad79e Fix type annotation 2023-12-18 18:13:37 -05:00
Andrei Betlen
d0aedfcff6 Fix type annotation 2023-12-18 18:12:49 -05:00
Eduard Christian Dumitrescu
2993936b10
Fix ctypes definitions of llama_kv_cache_view_update and llama_kv_cache_view_free. (#1028) 2023-12-18 18:11:26 -05:00
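Background on why such ctypes declarations matter: without explicit `argtypes`/`restype`, ctypes assumes C `int` everywhere, which truncates 64-bit pointers and sizes. The same declaration pattern, shown on libc's `strlen` rather than the llama.cpp functions:

```python
import ctypes

# Load the process's own C library (POSIX); CDLL(None) resolves symbols
# already linked into the interpreter.
libc = ctypes.CDLL(None)

# Declare the prototype so ctypes marshals the pointer and the
# size_t return value correctly on 64-bit platforms.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t
```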
Andrei Betlen
5e863d8a3b Bump version 2023-12-18 16:09:18 -05:00
Andrei Betlen
095c650006 Add offload_kqv option to llama and server 2023-12-18 15:36:09 -05:00
Andrei Betlen
472b344ae3 Remove unused import 2023-12-18 15:32:40 -05:00
kddubey
6b2e0e05b4
perf: Don't convert logprobs arrays to lists (#1021) 2023-12-18 14:28:12 -05:00
Brandon Roberts
62944df142
Bugfix: Remove f16_kv, add offload_kqv field (#1019)
F16_KV appears to have been removed here: af99c6fbfc

This addresses two issues:

 - #995 which just requests to add the KV cache offloading param
 - #1006 a NULL ptr exception when using the embeddings (introduced by
   leaving f16_kv in the fields struct)
2023-12-18 14:27:11 -05:00
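The NULL-pointer crash in #1006 is a classic ctypes pitfall: fields are laid out in declaration order, so one stale field (`f16_kv`) shifts the offset of every field declared after it, and the C library then reads garbage. A simplified illustration (not the real `llama_context_params`):

```python
import ctypes

class ParamsStale(ctypes.Structure):
    # Keeping the removed f16_kv field pushes offload_kqv to offset 1...
    _fields_ = [("f16_kv", ctypes.c_bool), ("offload_kqv", ctypes.c_bool)]

class ParamsFixed(ctypes.Structure):
    # ...while the C side, with f16_kv removed, expects it at offset 0.
    _fields_ = [("offload_kqv", ctypes.c_bool)]
```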
Daniele Morotti
f1c631dc53
Bug fixed with n_ctx=0 (#1015)
If n_ctx is set to 0, the code should use the maximum context length of the selected model, but it didn't. There was a problem with the initialization of this parameter and a related problem with 'n_batch'.
2023-12-16 18:59:50 -05:00
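The intended n_ctx=0 behavior can be sketched as below (hypothetical helper; `n_ctx_train` stands in for the trained context length llama.cpp reports for the model):

```python
def resolve_n_ctx(n_ctx, n_ctx_train, n_batch):
    # n_ctx == 0 means "use the model's trained context length".
    if n_ctx == 0:
        n_ctx = n_ctx_train
    # The related n_batch issue: the batch size must never exceed
    # the resolved context size.
    return n_ctx, min(n_batch, n_ctx)
```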
kddubey
5a8944672f
Fix logits_to_logprobs for 2-D and 3-D logits (#1002)
* Fix logits_to_logprobs for 2-D and 3-D logits

* Set dtype to single

* Test size
2023-12-16 18:59:26 -05:00
Andrei Betlen
534b1ea9b5 Update llama.cpp 2023-12-16 18:57:43 -05:00
Andrei Betlen
cbce061ffd Bump version 2023-12-13 21:52:29 -05:00
yhfgyyf
8b4db732bd
Add qwen chat format (#1005) 2023-12-13 21:43:43 -05:00
Andrei Betlen
690c563b60 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-12-13 21:43:19 -05:00
Andrei Betlen
c0fc0a1e82 Update llama.cpp 2023-12-13 21:43:16 -05:00
Radoslav Gerganov
8e44a32075
Add support for running the server with SSL (#994) 2023-12-11 20:47:11 -05:00
Tanner Hobson
ef22e478db
Replace logits_to_logprobs implementation with numpy equivalent to llama.cpp (#991)
See #990. This change makes the logits_to_logprobs function equivalent to the version in the llama.cpp repository. It uses numpy so it's much faster than the previous version.
2023-12-11 20:46:27 -05:00
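A numerically stable NumPy log-softmax over the last axis captures this change and, via broadcasting, also handles the 2-D/3-D inputs fixed in #1002. A sketch of the technique (not necessarily the repository's exact code):

```python
import numpy as np

def logits_to_logprobs(logits):
    # Subtract the row max first for numerical stability; keepdims lets
    # the same code broadcast over 1-D, 2-D, and 3-D inputs alike.
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    log_sum = np.log(np.sum(np.exp(shifted), axis=-1, keepdims=True))
    return (shifted - log_sum).astype(np.single)
```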
zocainViken
ac35f68e4d
Fix UnsupportedOperation: fileno in suppress_stdout_stderr (#961)
* bug fixing

* llava from the README raised this error: UnsupportedOperation: fileno; quick fix by checking hasattr

* multi modal params fix: add logits = True -> to make llava work

---------

Co-authored-by: Andrei <abetlen@gmail.com>
2023-12-11 20:44:51 -05:00
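The hasattr guard matters because replacement streams (notebooks, some test runners) may not expose a usable `fileno()`. A self-contained sketch of a guarded suppressor in this spirit (illustrative, not the library's exact implementation):

```python
import os
import sys
from contextlib import contextmanager

@contextmanager
def suppress_stdout_stderr():
    # If the current streams lack fileno(), skip the redirect entirely
    # instead of raising io.UnsupportedOperation.
    if not (hasattr(sys.stdout, "fileno") and hasattr(sys.stderr, "fileno")):
        yield
        return
    with open(os.devnull, "w") as devnull:
        old_out, old_err = os.dup(1), os.dup(2)
        try:
            os.dup2(devnull.fileno(), 1)
            os.dup2(devnull.fileno(), 2)
            yield
        finally:
            # Always restore the original descriptors.
            os.dup2(old_out, 1)
            os.dup2(old_err, 2)
            os.close(old_out)
            os.close(old_err)
```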
chiensen
b938cccf05
Add Pygmalion chat format (#986) 2023-12-11 20:44:04 -05:00
Andrei Betlen
c1e73e73a3 Bump version 2023-12-11 10:26:42 -05:00
Andrei Betlen
ec26f364cc Remove f16_kv 2023-12-11 10:25:37 -05:00
Andrei Betlen
f1edc66b21 Update llama.cpp 2023-12-11 10:21:35 -05:00
kddubey
b069d06346
Fix #891 (#952) 2023-11-29 05:39:52 -05:00
Andrei Betlen
ad963a0961 Bump version 2023-11-28 04:58:20 -05:00
Andrei Betlen
e3941d9c67 Make building llava optional 2023-11-28 04:55:21 -05:00
Andrei Betlen
7f3704b896 Bump version 2023-11-27 19:14:25 -05:00
Andrei Betlen
396dbf0b2b docs: Improve low-level docstrings 2023-11-27 19:03:02 -05:00
Andrei Betlen
a928893d03 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-11-26 15:57:13 -05:00
Andrei Betlen
6308f21d5e docs: Update Llama docs 2023-11-26 15:56:40 -05:00
Gardner Bickford
c2d63a7148
fix: Typo in the Open Orca chat format #874 (#947) 2023-11-26 15:39:18 -05:00
Andrei Betlen
f03a38e62a Update llama.cpp 2023-11-26 15:38:22 -05:00
Andrei Betlen
1a7bf2037b docs: Update openapi endpoint names 2023-11-24 03:39:29 -05:00
Andrei Betlen
4026166e68 docs: Update completion and chat_completion parameter docstrings 2023-11-24 03:24:19 -05:00
Andrei Betlen
8c3aa7858b Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-11-24 00:15:36 -05:00
Andrei Betlen
de2e2bc083 misc: fix verbose printing in functionary model 2023-11-23 20:14:23 -05:00
Andrei Betlen
36048d46af Update llama.cpp 2023-11-23 16:26:00 -05:00
mrfakename
d68fc07b1b
Add Zephyr format (#937) 2023-11-23 01:20:08 -05:00
caiyesd
4184835078
Add chat format to support baichuan (#938)
Signed-off-by: caiyesd <caiyesd@gmail.com>
2023-11-23 01:19:50 -05:00