Brandon Roberts
62944df142
Bugfix: Remove f16_kv, add offload_kqv field ( #1019 )
F16_KV appears to have been removed here: af99c6fbfc
This addresses two issues:
- #995 which just requests to add the KV cache offloading param
- #1006 a NULL ptr exception when using the embeddings (introduced by leaving f16_kv in the fields struct)
2023-12-18 14:27:11 -05:00
Radoslav Gerganov
8e44a32075
Add support for running the server with SSL ( #994 )
2023-12-11 20:47:11 -05:00
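The SSL support added here is exposed through the server CLI; a minimal launch sketch, assuming the uvicorn-style `--ssl_keyfile`/`--ssl_certfile` options and hypothetical file paths:

```shell
# Hypothetical model and certificate paths; the flag names assume the
# options added in #994, which are passed through to uvicorn.
python3 -m llama_cpp.server \
  --model ./models/model.gguf \
  --ssl_keyfile ./certs/key.pem \
  --ssl_certfile ./certs/cert.pem
```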
Andrei Betlen
1a7bf2037b
docs: Update openapi endpoint names
2023-11-24 03:39:29 -05:00
Andrei Betlen
128dc4731f
Fix #569
2023-11-21 04:39:05 -05:00
Andrei Betlen
7a3f87846b
Format
2023-11-21 04:02:20 -05:00
Andrei Betlen
07e47f55ba
Add support for logit_bias outside of server api. Closes #827
2023-11-21 03:59:46 -05:00
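The logit_bias parameter above maps token ids to additive offsets applied to the raw logits before sampling; a pure-Python sketch of that effect (not the library's internal code):

```python
def apply_logit_bias(logits, logit_bias):
    """Add a per-token-id offset to raw logits before sampling.
    A strongly negative bias (e.g. -100) effectively bans a token."""
    out = list(logits)
    for token_id, bias in logit_bias.items():
        out[token_id] += bias
    return out

# Ban token id 1 outright, nudge token id 2 upward.
biased = apply_logit_bias([0.0, 1.0, 2.0], {1: -100.0, 2: 0.5})
# → [0.0, -99.0, 2.5]
```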
TK-Master
b8438f70b5
Added support for min_p ( #921 )
* Added support for min_p
My small contribution to this great project.
Ref: https://github.com/ggerganov/llama.cpp/pull/3841
Closes: https://github.com/abetlen/llama-cpp-python/issues/911
* Fix for negative temp (sample_softmax)
2023-11-20 23:21:33 -05:00
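The min_p sampler referenced above (llama.cpp PR #3841) keeps only tokens whose probability is at least `min_p` times the probability of the most likely token; a pure-Python sketch of that filter, not the library's implementation:

```python
import math

def min_p_filter(logits, min_p=0.05):
    """Return indices of tokens that survive min-p filtering:
    softmax the logits, then keep tokens whose probability is at
    least min_p * (probability of the top token)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # shift for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    threshold = min_p * max(probs)
    return [i for i, p in enumerate(probs) if p >= threshold]

# A higher min_p prunes more aggressively.
min_p_filter([10.0, 9.0, 1.0], min_p=0.5)   # only the top token survives
min_p_filter([10.0, 9.0, 1.0], min_p=0.1)   # the runner-up survives too
```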
Andrei Betlen
e7962d2c73
Fix: default max_tokens matches openai api (16 for completion, max length for chat completion)
2023-11-10 02:49:27 -05:00
Andrei Betlen
ca4cb88351
Fix destructor NoneType is not callable error
2023-11-08 11:05:45 -05:00
Andrei Betlen
b30b9c338b
Add JSON mode support. Closes #881
2023-11-08 00:07:16 -05:00
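JSON mode follows the OpenAI convention: setting `response_format` to `{"type": "json_object"}` on a chat completion request constrains the output to valid JSON. A sketch of the request body for the OpenAI-compatible endpoint (model name and messages are hypothetical):

```python
payload = {
    "model": "local-model",  # placeholder; the server answers with its loaded model
    "messages": [
        {"role": "system", "content": "You are a helpful assistant that replies in JSON."},
        {"role": "user", "content": "List three primary colors."},
    ],
    "response_format": {"type": "json_object"},  # enables JSON mode (#881)
}
```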
Andrei Betlen
86aeb9f3a1
Add seed parameter support for completion and chat_completion requests. Closes #884
2023-11-07 23:37:28 -05:00
Damian Stewart
aab74f0b2b
Multimodal Support (Llava 1.5) ( #821 )
* llava v1.5 integration
* Point llama.cpp to fork
* Add llava shared library target
* Fix type
* Update llama.cpp
* Add llava api
* Revert changes to llama and llama_cpp
* Update llava example
* Add types for new gpt-4-vision-preview api
* Fix typo
* Update llama.cpp
* Update llama_types to match OpenAI v1 API
* Update ChatCompletionFunction type
* Reorder request parameters
* More API type fixes
* Even More Type Updates
* Add parameter for custom chat_handler to Llama class
* Fix circular import
* Convert to absolute imports
* Fix
* Fix pydantic Jsontype bug
* Accept list of prompt tokens in create_completion
* Add llava1.5 chat handler
* Add Multimodal notebook
* Clean up examples
* Add server docs
---------
Co-authored-by: Andrei Betlen <abetlen@gmail.com>
2023-11-07 22:48:51 -05:00
Andrei Betlen
df9362eeea
Update llama.cpp
2023-11-03 11:34:50 -04:00
Andrei
3af7b21ff1
Add functionary support ( #784 )
* Add common grammars and json-schema-to-grammar utility function from llama.cpp
* Pass functions to format function
* Add basic functionary formatting
* Add LlamaChatHandler for more complex chat use cases
* Add function calling example notebook
* Add support for regular chat completions alongside function calling
2023-11-03 02:12:14 -04:00
Andrei Betlen
fa83cc5f9c
Update llama.cpp
Fix build examples
Exclude examples directory
Revert cmake changes
Try actions/checkout@v4
Try to update submodules
Revert
2023-11-02 14:28:15 -04:00
Antoine Lizee
4d4e0f11e2
fix: tokenization of special characters ( #850 )
It should behave like llama.cpp, where most out-of-the-box usages treat special characters accordingly
2023-11-02 14:28:14 -04:00
David Ponce
3fc9147218
Iterate over tokens that should be biased rather than the entire vocabulary. ( #851 )
2023-11-01 18:53:47 -04:00
Daniel Thuerck
5f8f369d1b
Pass-Through grammar parameter in web server. ( #855 ) Closes #778
2023-11-01 18:51:12 -04:00
Xiaoyu Kevin Hu
a315128d66
update value check for n_gpu_layers field ( #826 )
2023-10-18 18:25:25 -04:00
Andrei Betlen
d6a130a052
Print traceback on server error
2023-10-10 15:56:04 -04:00
Andrei Betlen
5ef5280ef9
Log server exceptions to stdout
2023-09-30 19:13:36 -04:00
Andrei Betlen
d9bce17794
Update server params
2023-09-29 19:59:12 -04:00
Viacheslav/Slava Tradunsky
3d5e5b1c04
Adds openai-processing-ms response header ( #748 )
2023-09-25 13:55:58 -04:00
Andrei Betlen
b047b3034e
Remove confusing helpstring from server cli args. Closes #719
2023-09-15 14:09:43 -04:00
Andrei Betlen
0449d29b9f
Fix boolean env vars and cli arguments
2023-09-13 23:09:57 -04:00
earonesty
58a6e42cc0
Update app.py ( #705 )
2023-09-13 23:01:34 -04:00
Andrei Betlen
f4090a0bb2
Add numa support; low-level API users must now explicitly call llama_backend_init at the start of their programs.
2023-09-13 23:00:43 -04:00
Andrei Betlen
c999325e8e
Fix boolean cli flags
2023-09-13 22:56:10 -04:00
Andrei Betlen
4daf77e546
Format
2023-09-13 21:23:23 -04:00
Andrei Betlen
2920c4bf7e
Update server params. Added lora_base, lora_path, low_vram, and main_gpu. Removed rms_norm_eps and n_gqa (deprecated in llama.cpp)
2023-09-13 21:23:13 -04:00
Andrei Betlen
c4c440ba2d
Fix tensor_split cli option
2023-09-13 20:00:42 -04:00
Andrei Betlen
759405c84b
Fix issue with Literal and Optional cli arguments not working. Closes #702
2023-09-13 18:06:12 -04:00
Devrim
da9df78db0
Add X-Request-ID request header for mirroring custom IDs. ( #703 )
2023-09-13 16:18:31 -04:00
Andrei Betlen
1910793f56
Merge branch 'main' into v0.2-wip
2023-09-12 16:43:32 -04:00
Andrei Betlen
5de8009706
Add copilot-codex completions endpoint for drop-in copilot usage
2023-08-25 17:49:14 -04:00
Andrei Betlen
cf405f6764
Merge branch 'main' into v0.2-wip
2023-08-24 00:30:51 -04:00
Andrei Betlen
d015bdb4f8
Add mul_mat_q option
2023-08-08 14:35:06 -04:00
Andrei Betlen
343480364f
Merge branch 'main' into v0.2-wip
2023-07-24 15:26:08 -04:00
Andrei Betlen
11dd2bf382
Add temporary rms_norm_eps parameter
2023-07-24 14:09:24 -04:00
Andrei Betlen
0538ba1dab
Merge branch 'main' into v0.2-wip
2023-07-20 19:06:26 -04:00
Andrei Betlen
28a111704b
Fix compatibility with older python versions
2023-07-20 18:52:10 -04:00
Andrei
365d9a4367
Merge pull request #481 from c0sogi/main
Added `RouteErrorHandler` for server
2023-07-20 17:41:42 -04:00
Andrei Betlen
0b121a7456
Format
2023-07-19 03:48:27 -04:00
Andrei Betlen
b43917c144
Add functions parameters
2023-07-19 03:48:20 -04:00
Andrei Betlen
19ba9d3845
Use numpy arrays for logits_processors and stopping_criteria. Closes #491
2023-07-18 19:27:41 -04:00
shutup
5ed8bf132f
expose RoPE param to server start
2023-07-18 16:34:36 +08:00
c0sogi
1551ba10bd
Added RouteErrorHandler for server
2023-07-16 14:57:39 +09:00
Andrei Betlen
118b7f6d5c
fix: tensor_split should be optional list
2023-07-14 16:52:48 -04:00
Shouyi Wang
579f526246
Resolve merge conflicts
2023-07-14 14:37:01 +10:00
Andrei Betlen
de4cc5a233
bugfix: pydantic v2 fields
2023-07-13 23:25:12 -04:00