Author | Commit | Message | Date
Andrei Betlen | 84e313bd6e | Align dtype to match c structs | 2023-05-26 22:02:16 -04:00
Andrei Betlen | 66bcb8d70d | Merge branch 'main' into add-numpy-support | 2023-05-26 20:25:03 -04:00
Andrei Betlen | 8f35bddd7e | Fix stop sequence performance bug. | 2023-05-26 20:23:49 -04:00
Andrei Betlen | 7fc7bc30e7 | Remove usage of eval_tokens for cache check | 2023-05-26 20:12:05 -04:00
Andrei Betlen | fe331ec589 | Replace eval_logits and eval_tokens with numpy arrays | 2023-05-26 20:03:31 -04:00
Andrei Betlen | 8eb9769f78 | Add support for numpy | 2023-05-26 16:12:45 -04:00
Andrei Betlen | 4c1b7f7a76 | Bugfix for logits_processor and stopping_criteria | 2023-05-26 10:25:28 -04:00
Andrei Betlen | 433a2e3e8a | Add extra logits_processor and stopping_criteria | 2023-05-26 03:13:24 -04:00
Andrei Betlen | f74b90ed67 | Fix streaming hang on last token when cache is on. | 2023-05-26 03:03:01 -04:00
Andrei Betlen | 5be8354e11 | Added tokenizer | 2023-05-26 03:00:51 -04:00
Andrei Betlen | 8fa2ef1959 | Format | 2023-05-26 03:00:35 -04:00
Andrei Betlen | 6bd1075291 | Merge branch 'Maximilian-Winter/main' into main | 2023-05-26 02:56:11 -04:00
Andrei Betlen | ca01f98e09 | Add LlamaTokenizer class | 2023-05-25 14:11:33 -04:00
Andrei Betlen | 1d247e0f35 | Add StoppingCriteria and LogitsProcessor to generate to match huggingface API | 2023-05-25 14:04:54 -04:00
Maximilian-Winter | c2585b6889 | Fixed list elements typing | 2023-05-25 10:54:08 +02:00
Maximilian-Winter | da463e6c8c | Added types to logit processor list and stop criteria list | 2023-05-25 09:07:16 +02:00
Maximilian-Winter | c05fcdf42f | Fixed none value of logits processors. | 2023-05-24 22:02:06 +02:00
Maximilian-Winter | 5bb780d455 | Implemented logit processors and stop criteria's | 2023-05-24 21:55:44 +02:00
Andrei Betlen | fab064ded9 | Remove unnecessary ffi calls | 2023-05-23 17:56:21 -04:00
Andrei Betlen | 0adb9ec37a | Use model_name and index in response | 2023-05-21 21:30:03 -04:00
Andrei Betlen | 922b5b2bfd | Merge branch 'main' into server-embedding | 2023-05-21 21:21:38 -04:00
Andrei Betlen | cd102e9da1 | Cache shared library function calls for static tokens | 2023-05-21 19:18:56 -04:00
Andrei Betlen | b895511cca | Fix penalize_nl | 2023-05-21 18:38:06 -04:00
Andrei Betlen | 03e2947b03 | Fix unnecessary memory allocation while sampling | 2023-05-21 18:36:34 -04:00
Andrei Betlen | fafe47114c | Update llama.cpp | 2023-05-21 17:47:21 -04:00
Andrei Betlen | 76b1d2cd20 | Change properties to functions to match token functions | 2023-05-20 08:24:06 -04:00
Andrei Betlen | a7ba85834f | Add n_ctx, n_vocab, and n_embd properties | 2023-05-20 08:13:41 -04:00
Simon Chabot | e783f1c191 | feat: make embedding support list of string as input; makes the /v1/embedding route similar to OpenAI api. | 2023-05-20 01:23:32 +02:00
Andrei Betlen | 01a010be52 | Fix llama_cpp and Llama type signatures. Closes #221 | 2023-05-19 11:59:33 -04:00
Andrei Betlen | a8cd169251 | Bugfix: Stop sequences can be strings | 2023-05-19 03:15:08 -04:00
Andrei Betlen | 17d4271b04 | Fix logprobs for completions and implement for streaming logprobs. | 2023-05-19 02:20:27 -04:00
Andrei Betlen | a634a2453b | Allow first logprob token to be null to match openai api | 2023-05-19 02:04:57 -04:00
Andrei Betlen | dc39cc0fa4 | Use server sent events function for streaming completion | 2023-05-19 02:04:30 -04:00
Andrei Betlen | f0ec6e615e | Stream tokens instead of text chunks | 2023-05-18 11:35:59 -04:00
Andrei Betlen | 21d8f5fa9f | Remove unnused union | 2023-05-18 11:35:15 -04:00
Andrei Betlen | 61d58e7b35 | Check for CUDA_PATH before adding | 2023-05-17 15:26:38 -04:00
Andrei Betlen | 7c95895626 | Merge branch 'main' of github.com:abetlen/llama_cpp_python into main | 2023-05-17 15:19:32 -04:00
Aneesh Joy | e9794f91f2 | Fixd CUBLAS dll load issue in Windows | 2023-05-17 18:04:58 +01:00
Andrei Betlen | 4f342795e5 | Update token checks | 2023-05-17 03:35:13 -04:00
Andrei Betlen | f5c2f998ab | Format | 2023-05-17 02:00:39 -04:00
Andrei Betlen | d28b753ed2 | Implement penalize_nl | 2023-05-17 01:53:26 -04:00
Andrei Betlen | f11e2a781c | Fix last_n_tokens_size | 2023-05-17 01:42:51 -04:00
Andrei Betlen | 7e55244540 | Fix top_k value. Closes #220 | 2023-05-17 01:41:42 -04:00
Andrei Betlen | a7c9e38287 | Update variable name | 2023-05-16 18:07:25 -04:00
Andrei Betlen | a3352923c7 | Add model_alias option to override model_path in completions. Closes #39 | 2023-05-16 17:22:00 -04:00
Andrei Betlen | a65125c0bd | Add sampling defaults for generate | 2023-05-16 09:35:50 -04:00
Andrei Betlen | cbac19bf24 | Add winmode arg only on windows if python version supports it | 2023-05-15 09:15:01 -04:00
Andrei Betlen | c804efe3f0 | Fix obscure Wndows DLL issue. Closes #208 | 2023-05-14 22:08:11 -04:00
Andrei Betlen | cdf59768f5 | Update llama.cpp | 2023-05-14 00:04:22 -04:00
Andrei Betlen | 7a536e86c2 | Allow model to tokenize strings longer than context length and set add_bos. Closes #92 | 2023-05-12 14:28:22 -04:00