Maximilian-Winter | da463e6c8c | Added types to logit processor list and stop criteria list | 2023-05-25 09:07:16 +02:00
Maximilian-Winter | c05fcdf42f | Fixed None value of logits processors. | 2023-05-24 22:02:06 +02:00
Maximilian-Winter | 5bb780d455 | Implemented logit processors and stop criteria | 2023-05-24 21:55:44 +02:00
Andrei Betlen | 0adb9ec37a | Use model_name and index in response | 2023-05-21 21:30:03 -04:00
Andrei Betlen | 922b5b2bfd | Merge branch 'main' into server-embedding | 2023-05-21 21:21:38 -04:00
Andrei Betlen | cd102e9da1 | Cache shared library function calls for static tokens | 2023-05-21 19:18:56 -04:00
Andrei Betlen | b895511cca | Fix penalize_nl | 2023-05-21 18:38:06 -04:00
Andrei Betlen | 03e2947b03 | Fix unnecessary memory allocation while sampling | 2023-05-21 18:36:34 -04:00
Andrei Betlen | fafe47114c | Update llama.cpp | 2023-05-21 17:47:21 -04:00
Andrei Betlen | 76b1d2cd20 | Change properties to functions to match token functions | 2023-05-20 08:24:06 -04:00
Andrei Betlen | a7ba85834f | Add n_ctx, n_vocab, and n_embd properties | 2023-05-20 08:13:41 -04:00
Simon Chabot | e783f1c191 | feat: make embedding support a list of strings as input; makes the /v1/embedding route similar to the OpenAI API | 2023-05-20 01:23:32 +02:00
Andrei Betlen | 01a010be52 | Fix llama_cpp and Llama type signatures. Closes #221 | 2023-05-19 11:59:33 -04:00
Andrei Betlen | a8cd169251 | Bugfix: Stop sequences can be strings | 2023-05-19 03:15:08 -04:00
Andrei Betlen | 17d4271b04 | Fix logprobs for completions and implement logprobs for streaming. | 2023-05-19 02:20:27 -04:00
Andrei Betlen | a634a2453b | Allow first logprob token to be null to match the OpenAI API | 2023-05-19 02:04:57 -04:00
Andrei Betlen | dc39cc0fa4 | Use server sent events function for streaming completion | 2023-05-19 02:04:30 -04:00
Andrei Betlen | f0ec6e615e | Stream tokens instead of text chunks | 2023-05-18 11:35:59 -04:00
Andrei Betlen | 21d8f5fa9f | Remove unused union | 2023-05-18 11:35:15 -04:00
Andrei Betlen | 61d58e7b35 | Check for CUDA_PATH before adding | 2023-05-17 15:26:38 -04:00
Andrei Betlen | 7c95895626 | Merge branch 'main' of github.com:abetlen/llama_cpp_python into main | 2023-05-17 15:19:32 -04:00
Aneesh Joy | e9794f91f2 | Fixed CUBLAS DLL load issue on Windows | 2023-05-17 18:04:58 +01:00
Andrei Betlen | 4f342795e5 | Update token checks | 2023-05-17 03:35:13 -04:00
Andrei Betlen | f5c2f998ab | Format | 2023-05-17 02:00:39 -04:00
Andrei Betlen | d28b753ed2 | Implement penalize_nl | 2023-05-17 01:53:26 -04:00
Andrei Betlen | f11e2a781c | Fix last_n_tokens_size | 2023-05-17 01:42:51 -04:00
Andrei Betlen | 7e55244540 | Fix top_k value. Closes #220 | 2023-05-17 01:41:42 -04:00
Andrei Betlen | a7c9e38287 | Update variable name | 2023-05-16 18:07:25 -04:00
Andrei Betlen | a3352923c7 | Add model_alias option to override model_path in completions. Closes #39 | 2023-05-16 17:22:00 -04:00
Andrei Betlen | a65125c0bd | Add sampling defaults for generate | 2023-05-16 09:35:50 -04:00
Andrei Betlen | cbac19bf24 | Add winmode arg only on Windows if Python version supports it | 2023-05-15 09:15:01 -04:00
Andrei Betlen | c804efe3f0 | Fix obscure Windows DLL issue. Closes #208 | 2023-05-14 22:08:11 -04:00
Andrei Betlen | cdf59768f5 | Update llama.cpp | 2023-05-14 00:04:22 -04:00
Andrei Betlen | 7a536e86c2 | Allow model to tokenize strings longer than context length and set add_bos. Closes #92 | 2023-05-12 14:28:22 -04:00
Andrei Betlen | 8740ddc58e | Only support generating one prompt at a time. | 2023-05-12 07:21:46 -04:00
Andrei Betlen | 8895b9002a | Revert "llama_cpp server: prompt is a string". Closes #187. This reverts commit b9098b0ef7. | 2023-05-12 07:16:57 -04:00
Andrei Betlen | 7be584fe82 | Add missing tfs_z parameter | 2023-05-11 21:56:19 -04:00
Andrei Betlen | cdeaded251 | Bugfix: Ensure logs are printed when streaming | 2023-05-10 16:12:17 -04:00
Lucas Doyle | 02e8a018ae | llama_cpp server: document presence_penalty and frequency_penalty, mark as supported | 2023-05-09 16:25:00 -07:00
Andrei Betlen | d957422bf4 | Implement sampling as in llama.cpp main example | 2023-05-08 21:21:25 -04:00
Andrei Betlen | 93a9019bb1 | Merge branch 'main' of github.com:abetlen/llama_cpp_python into Maximilian-Winter/main | 2023-05-08 19:57:09 -04:00
Andrei Betlen | 82d138fe54 | Fix: default repeat_penalty | 2023-05-08 18:49:11 -04:00
Andrei Betlen | 29f094bbcf | Bugfix: not falling back to environment variables when a default value is set. | 2023-05-08 14:46:25 -04:00
Andrei Betlen | 0d6c60097a | Show default value when --help is called | 2023-05-08 14:21:15 -04:00
Andrei Betlen | 022e9ebcb8 | Use environment variable if parsed CLI arg is None | 2023-05-08 14:20:53 -04:00
Andrei Betlen | 0d751a69a7 | Set repeat_penalty to 0 by default | 2023-05-08 01:50:43 -04:00
Andrei Betlen | 65d9cc050c | Add OpenAI frequency and presence penalty parameters. Closes #169 | 2023-05-08 01:30:18 -04:00
Andrei Betlen | a0b61ea2a7 | Bugfix for models endpoint | 2023-05-07 20:17:52 -04:00
Andrei Betlen | e72f58614b | Change pointer to lower overhead byref | 2023-05-07 20:01:34 -04:00
Andrei Betlen | 14da46f16e | Added cache size to settings object. | 2023-05-07 19:33:17 -04:00