Commit graph

265 commits

Author SHA1 Message Date
Andrei Betlen
17d4271b04 Fix logprobs for completions and implement logprobs for streaming. 2023-05-19 02:20:27 -04:00
Andrei Betlen
a634a2453b Allow first logprob token to be null to match OpenAI API 2023-05-19 02:04:57 -04:00
Andrei Betlen
dc39cc0fa4 Use server sent events function for streaming completion 2023-05-19 02:04:30 -04:00
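As context for this change, a minimal sketch of streaming completions over server-sent events with `sse_starlette` (the endpoint shape and the `generate_tokens` helper are assumptions for illustration; the actual server's schemas differ):

```python
import json

from fastapi import FastAPI
from sse_starlette.sse import EventSourceResponse

app = FastAPI()

@app.post("/v1/completions")
async def completions(prompt: str):
    def event_stream():
        # Each yielded dict becomes one SSE "data:" event; the client
        # reassembles the streamed chunks into the full completion.
        for token in generate_tokens(prompt):  # hypothetical token generator
            yield {"data": json.dumps({"choices": [{"text": token}]})}
        yield {"data": "[DONE]"}

    return EventSourceResponse(event_stream())
```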
Andrei Betlen
f0ec6e615e Stream tokens instead of text chunks 2023-05-18 11:35:59 -04:00
Andrei Betlen
21d8f5fa9f Remove unused union 2023-05-18 11:35:15 -04:00
Andrei Betlen
61d58e7b35 Check for CUDA_PATH before adding 2023-05-17 15:26:38 -04:00
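The Windows DLL commits here (this one, the `winmode` guard, and the "obscure Windows DLL issue" fix below) roughly correspond to loading logic like this sketch (assumed shape, not verbatim from the repo):

```python
import ctypes
import os
import sys

def load_shared_library(lib_path: str) -> ctypes.CDLL:
    cdll_args = {}
    if sys.platform == "win32" and sys.version_info >= (3, 8):
        # Make dependent DLLs resolvable, and only add CUDA's bin
        # directory when CUDA_PATH is actually set.
        os.add_dll_directory(os.path.dirname(lib_path))
        if "CUDA_PATH" in os.environ:
            os.add_dll_directory(os.path.join(os.environ["CUDA_PATH"], "bin"))
        # The winmode keyword only exists on Python 3.8+.
        cdll_args["winmode"] = 0
    return ctypes.CDLL(lib_path, **cdll_args)
```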
Andrei Betlen
7c95895626 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-05-17 15:19:32 -04:00
Aneesh Joy
e9794f91f2 Fixed CUBLAS DLL load issue on Windows 2023-05-17 18:04:58 +01:00
Andrei Betlen
4f342795e5 Update token checks 2023-05-17 03:35:13 -04:00
Andrei Betlen
f5c2f998ab Format 2023-05-17 02:00:39 -04:00
Andrei Betlen
d28b753ed2 Implement penalize_nl 2023-05-17 01:53:26 -04:00
Andrei Betlen
f11e2a781c Fix last_n_tokens_size 2023-05-17 01:42:51 -04:00
Andrei Betlen
7e55244540 Fix top_k value. Closes #220 2023-05-17 01:41:42 -04:00
Andrei Betlen
a7c9e38287 Update variable name 2023-05-16 18:07:25 -04:00
Andrei Betlen
a3352923c7 Add model_alias option to override model_path in completions. Closes #39 2023-05-16 17:22:00 -04:00
Andrei Betlen
a65125c0bd Add sampling defaults for generate 2023-05-16 09:35:50 -04:00
Andrei Betlen
cbac19bf24 Add winmode arg only on windows if python version supports it 2023-05-15 09:15:01 -04:00
Andrei Betlen
c804efe3f0 Fix obscure Windows DLL issue. Closes #208 2023-05-14 22:08:11 -04:00
Andrei Betlen
cdf59768f5 Update llama.cpp 2023-05-14 00:04:22 -04:00
Andrei Betlen
7a536e86c2 Allow model to tokenize strings longer than context length and set add_bos. Closes #92 2023-05-12 14:28:22 -04:00
Andrei Betlen
8740ddc58e Only support generating one prompt at a time. 2023-05-12 07:21:46 -04:00
Andrei Betlen
8895b9002a Revert "llama_cpp server: prompt is a string". Closes #187
This reverts commit b9098b0ef7.
2023-05-12 07:16:57 -04:00
Andrei Betlen
7be584fe82 Add missing tfs_z parameter 2023-05-11 21:56:19 -04:00
Andrei Betlen
cdeaded251 Bugfix: Ensure logs are printed when streaming 2023-05-10 16:12:17 -04:00
Lucas Doyle
02e8a018ae llama_cpp server: document presence_penalty and frequency_penalty, mark as supported 2023-05-09 16:25:00 -07:00
Andrei Betlen
d957422bf4 Implement sampling as in llama.cpp main example 2023-05-08 21:21:25 -04:00
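For reference, a simplified sketch of the llama.cpp-style sampling chain mentioned here (top-k, then temperature-scaled softmax, then nucleus/top-p); the real implementation also supports tail-free and typical sampling:

```python
import numpy as np

def sample_token(logits, top_k=40, top_p=0.95, temperature=0.8,
                 rng=np.random.default_rng()):
    logits = np.asarray(logits, dtype=np.float64).copy()
    # Top-k: mask everything below the k-th largest logit.
    if 0 < top_k < logits.size:
        kth = np.partition(logits, -top_k)[-top_k]
        logits[logits < kth] = -np.inf
    # Temperature-scaled softmax (masked logits become probability 0).
    probs = np.exp((logits - logits.max()) / max(temperature, 1e-8))
    probs /= probs.sum()
    # Top-p: keep the smallest set of tokens whose cumulative
    # probability reaches top_p, then renormalize and sample.
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    keep = order[:cutoff]
    p = np.zeros_like(probs)
    p[keep] = probs[keep]
    return int(rng.choice(probs.size, p=p / p.sum()))
```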
Andrei Betlen
93a9019bb1 Merge branch 'main' of github.com:abetlen/llama_cpp_python into Maximilian-Winter/main 2023-05-08 19:57:09 -04:00
Andrei Betlen
82d138fe54 Fix: default repeat_penalty 2023-05-08 18:49:11 -04:00
Andrei Betlen
29f094bbcf Bugfix: not falling back to environment variables when a default value is set. 2023-05-08 14:46:25 -04:00
Andrei Betlen
0d6c60097a Show default value when --help is called 2023-05-08 14:21:15 -04:00
Andrei Betlen
022e9ebcb8 Use environment variable if parsed cli arg is None 2023-05-08 14:20:53 -04:00
Andrei Betlen
0d751a69a7 Set repeat_penalty to 0 by default 2023-05-08 01:50:43 -04:00
Andrei Betlen
65d9cc050c Add OpenAI frequency and presence penalty parameters. Closes #169 2023-05-08 01:30:18 -04:00
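The OpenAI-style penalties work roughly like this sketch: each already-generated token's logit is reduced by a flat presence penalty plus a frequency penalty scaled by its count:

```python
from collections import Counter

import numpy as np

def apply_openai_penalties(logits, generated_tokens,
                           presence_penalty=0.0, frequency_penalty=0.0):
    logits = np.asarray(logits, dtype=np.float64).copy()
    for token, count in Counter(generated_tokens).items():
        # Presence penalty applies once per distinct token seen;
        # frequency penalty scales with how often it appeared.
        logits[token] -= presence_penalty + frequency_penalty * count
    return logits
```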
Andrei Betlen
a0b61ea2a7 Bugfix for models endpoint 2023-05-07 20:17:52 -04:00
Andrei Betlen
e72f58614b Change pointer to lower overhead byref 2023-05-07 20:01:34 -04:00
Andrei Betlen
14da46f16e Added cache size to settings object. 2023-05-07 19:33:17 -04:00
Andrei Betlen
0e94a70de1 Add in-memory longest prefix cache. Closes #158 2023-05-07 19:31:26 -04:00
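A minimal sketch of such a longest-prefix cache (also showing the KeyError-on-miss behavior from commit 5be0efa5f8 further down); the class and method names are illustrative:

```python
class PrefixCache:
    def __init__(self):
        self._cache = {}  # tuple of prompt tokens -> saved llama state

    def _longest_prefix(self, tokens):
        # Find the cached key that is the longest prefix of `tokens`.
        tokens = tuple(tokens)
        candidates = [k for k in self._cache if tokens[:len(k)] == k]
        return max(candidates, key=len, default=None)

    def __getitem__(self, tokens):
        key = self._longest_prefix(tokens)
        if key is None:
            # A cache miss raises KeyError, like a normal mapping.
            raise KeyError("no cached prefix for these tokens")
        return self._cache[key]

    def __setitem__(self, tokens, state):
        self._cache[tuple(tokens)] = state
```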
Andrei Betlen
8dfde63255 Fix return type 2023-05-07 19:30:14 -04:00
Andrei Betlen
2753b85321 Format 2023-05-07 13:19:56 -04:00
Andrei Betlen
627811ea83 Add verbose flag to server 2023-05-07 05:09:10 -04:00
Andrei Betlen
3fbda71790 Fix mlock_supported and mmap_supported return type 2023-05-07 03:04:22 -04:00
Andrei Betlen
5a3413eee3 Update cpu_count 2023-05-07 03:03:57 -04:00
Andrei Betlen
1a00e452ea Update settings fields and defaults 2023-05-07 02:52:20 -04:00
Andrei Betlen
86753976c4 Revert "llama_cpp server: delete some ignored / unused parameters"
This reverts commit b47b9549d5.
2023-05-07 02:02:34 -04:00
Andrei Betlen
c382d8f86a Revert "llama_cpp server: mark model as required"
This reverts commit e40fcb0575.
2023-05-07 02:00:22 -04:00
Andrei Betlen
d8fddcce73 Merge branch 'main' of github.com:abetlen/llama_cpp_python into better-server-params-and-fields 2023-05-07 01:54:00 -04:00
Andrei Betlen
7c3743fe5f Update llama.cpp 2023-05-07 00:12:47 -04:00
Andrei Betlen
bc853e3742 Fix type for eval_logits in LlamaState object 2023-05-06 21:32:50 -04:00
Maximilian Winter
515d9bde7e Fixed some things and activated cuBLAS 2023-05-06 23:40:19 +02:00
Maximilian Winter
aa203a0d65 Added mirostat sampling to the high level API. 2023-05-06 22:47:47 +02:00
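A rough sketch of mirostat (v2-style) sampling as described in the paper: tokens whose surprise exceeds a running threshold mu are dropped, and mu is nudged toward the target tau after each draw. Details are simplified:

```python
import numpy as np

def mirostat_sample(logits, mu, tau=5.0, eta=0.1,
                    rng=np.random.default_rng()):
    probs = np.exp(logits - np.max(logits))
    probs /= probs.sum()
    # Per-token surprise in bits (clamped to avoid log of zero).
    surprise = -np.log2(np.maximum(probs, 1e-12))
    # Keep only tokens whose surprise is below the threshold mu.
    allowed = surprise < mu
    if not allowed.any():
        allowed[np.argmin(surprise)] = True
    p = np.where(allowed, probs, 0.0)
    p /= p.sum()
    token = int(rng.choice(p.size, p=p))
    # Move mu toward the target surprise tau (mu starts at 2 * tau).
    mu -= eta * (surprise[token] - tau)
    return token, mu
```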
Andrei Betlen
98bbd1c6a8 Fix eval logits type 2023-05-05 14:23:14 -04:00
Andrei Betlen
b5f3e74627 Add return type annotations for embeddings and logits 2023-05-05 14:22:55 -04:00
Andrei Betlen
3e28e0e50c Fix: runtime type errors 2023-05-05 14:12:26 -04:00
Andrei Betlen
e24c3d7447 Prefer explicit imports 2023-05-05 14:05:31 -04:00
Andrei Betlen
40501435c1 Fix: types 2023-05-05 14:04:12 -04:00
Andrei Betlen
66e28eb548 Fix temperature bug 2023-05-05 14:00:41 -04:00
Andrei Betlen
6702d2abfd Fix candidates type 2023-05-05 14:00:30 -04:00
Andrei Betlen
5e7ddfc3d6 Fix llama_cpp types 2023-05-05 13:54:22 -04:00
Andrei Betlen
b6a9a0b6ba Add types for all low-level api functions 2023-05-05 12:22:27 -04:00
Andrei Betlen
5be0efa5f8 Cache should raise KeyError when key is missing 2023-05-05 12:21:49 -04:00
Andrei Betlen
24fc38754b Add cli options to server. Closes #37 2023-05-05 12:08:28 -04:00
Andrei Betlen
853dc711cc Format 2023-05-04 21:58:36 -04:00
Andrei Betlen
97c6372350 Rewind model to longest prefix. 2023-05-04 21:58:27 -04:00
Andrei Betlen
329297fafb Bugfix: Missing logits_to_logprobs 2023-05-04 12:18:40 -04:00
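For reference, `logits_to_logprobs` amounts to a log-softmax; a numerically stable sketch:

```python
import math

def logits_to_logprobs(logits):
    # log softmax: logprob_i = logit_i - logsumexp(logits),
    # with the max subtracted first for numerical stability.
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_sum for x in logits]
```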
Lucas Doyle
3008a954c1 Merge branch 'main' of github.com:abetlen/llama-cpp-python into better-server-params-and-fields 2023-05-03 13:10:03 -07:00
Andrei Betlen
9e5b6d675a Improve logging messages 2023-05-03 10:28:10 -04:00
Andrei Betlen
43f2907e3a Support smaller state sizes 2023-05-03 09:33:50 -04:00
Andrei Betlen
1d47cce222 Update llama.cpp 2023-05-03 09:33:30 -04:00
Lucas Doyle
b9098b0ef7 llama_cpp server: prompt is a string
Not sure why this union type was here, but looking at llama.py, prompt is only ever processed as a string for completion.

This was breaking types when generating an OpenAPI client
2023-05-02 14:47:07 -07:00
Matt Hoffner
f97ff3c5bb Update llama_cpp.py 2023-05-01 20:40:06 -07:00
Andrei
7ab08b8d10 Merge branch 'main' into better-server-params-and-fields 2023-05-01 22:45:57 -04:00
Andrei Betlen
9eafc4c49a Refactor server to use factory 2023-05-01 22:38:46 -04:00
Andrei Betlen
dd9ad1c759 Formatting 2023-05-01 21:51:16 -04:00
Lucas Doyle
dbbfc4ba2f llama_cpp server: fix to ChatCompletionRequestMessage
When I generate a client, it breaks because it fails to process the schema of ChatCompletionRequestMessage

These fix that:
- I think `Union[Literal["user"], Literal["channel"], ...]` is the same as `Literal["user", "channel", ...]`
- Turns out the default value `Literal["user"]` isn't JSON serializable, so replace it with `"user"`
2023-05-01 15:38:19 -07:00
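The two typing points in that message can be illustrated with a small sketch (the role names here are illustrative, not the server's actual set):

```python
from typing import Literal, Union

from pydantic import BaseModel, Field

# For type checkers, a union of single-value Literals is equivalent
# to one multi-value Literal (PEP 586):
Role = Literal["user", "assistant"]                      # same as
RoleUnion = Union[Literal["user"], Literal["assistant"]]

class ChatMessage(BaseModel):
    # The default must be the plain value "user"; the type object
    # Literal["user"] is not JSON serializable when generating a schema.
    role: Role = Field(default="user")
    content: str
```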
Lucas Doyle
fa2a61e065 llama_cpp server: fields for the embedding endpoint 2023-05-01 15:38:19 -07:00
Lucas Doyle
8dcbf65a45 llama_cpp server: define fields for chat completions
Slight refactor for common fields shared between completion and chat completion
2023-05-01 15:38:19 -07:00
Lucas Doyle
978b6daf93 llama_cpp server: add some more information to fields for completions 2023-05-01 15:38:19 -07:00
Lucas Doyle
a5aa6c1478 llama_cpp server: add missing top_k param to CreateChatCompletionRequest
`llama.create_chat_completion` definitely has a `top_k` argument, but it's missing from `CreateChatCompletionRequest`. decision: add it
2023-05-01 15:38:19 -07:00
Lucas Doyle
1e42913599 llama_cpp server: move logprobs to supported
I think this is actually supported (it's in the arguments of `Llama.__call__`, which is how the completion is invoked). decision: mark as supported
2023-05-01 15:38:19 -07:00
Lucas Doyle
b47b9549d5 llama_cpp server: delete some ignored / unused parameters
`n`, `presence_penalty`, `frequency_penalty`, `best_of`, `logit_bias`, `user`: not supported, excluded from the calls into llama. decision: delete it
2023-05-01 15:38:19 -07:00
Lucas Doyle
e40fcb0575 llama_cpp server: mark model as required
`model` is ignored but currently marked "optional"... on the one hand we could mark it "required" to make it explicit, in case the server supports multiple llamas at the same time, but we could also delete it since it's ignored. decision: mark it required for the sake of OpenAI API compatibility.

I think out of all the parameters, `model` is probably the most important one for people to keep using even if it's ignored for now.
2023-05-01 15:38:19 -07:00
Andrei Betlen
b6747f722e Fix logprob calculation. Fixes #134 2023-05-01 17:45:08 -04:00
Andrei Betlen
9ff9cdd7fc Fix import error 2023-05-01 15:11:15 -04:00
Andrei Betlen
350a1769e1 Update sampling api 2023-05-01 14:47:55 -04:00
Andrei Betlen
7837c3fdc7 Fix return types and import comments 2023-05-01 14:02:06 -04:00
Andrei Betlen
ccf1ed54ae Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-05-01 11:35:14 -04:00
Andrei Betlen
80184a286c Update llama.cpp 2023-05-01 10:44:28 -04:00
Lucas Doyle
efe8e6f879 llama_cpp server: slight refactor to init_llama function
Define an init_llama function that starts llama with supplied settings instead of just doing it in the global context of app.py

This allows the test to be less brittle: it no longer needs to mess with os.environ before importing the app
2023-04-29 11:42:23 -07:00
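A sketch of the shape this refactor describes (only `init_llama` is named in the commit; the `create_app` factory name and the settings fields are assumptions):

```python
from fastapi import FastAPI

def init_llama(settings):
    # Hypothetical stand-in: the real function constructs a Llama
    # instance from the supplied settings (model path, n_ctx, etc.).
    return object()

def create_app(settings) -> FastAPI:  # factory name is an assumption
    """Build the app from explicit settings instead of module-level
    globals, so tests can construct it without touching os.environ."""
    app = FastAPI(title="llama.cpp server")
    llama = init_llama(settings)  # routes close over `llama`

    @app.get("/v1/models")
    def list_models():
        return {"data": [{"id": settings.model}]}  # field name assumed

    return app
```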
Lucas Doyle
6d8db9d017 tests: simple test for server module 2023-04-29 11:42:20 -07:00
Lucas Doyle
468377b0e2 llama_cpp server: app is now importable, still runnable as a module 2023-04-29 11:41:25 -07:00
Andrei
755f9fa455 Merge pull request #118 from SagsMug/main
Fix UnicodeDecodeError permanently
2023-04-29 07:19:01 -04:00
Mug
18a0c10032 Remove excessive errors="ignore" and add utf8 test 2023-04-29 12:19:22 +02:00
Andrei Betlen
ea0faabae1 Update llama.cpp 2023-04-28 15:32:43 -04:00
Mug
b7d14efc8b Python weirdness 2023-04-28 13:20:31 +02:00
Mug
eed61289b6 Don't detect off tokens, detect off detokenized UTF-8 2023-04-28 13:16:18 +02:00
Mug
3a98747026 One day, I'll fix off-by-1 errors permanently too 2023-04-28 12:54:28 +02:00
Mug
c39547a986 Detect multi-byte responses and wait 2023-04-28 12:50:30 +02:00
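"Detect multi-byte responses and wait" boils down to buffering raw bytes until they decode as valid UTF-8, since one character can span several tokens; a sketch:

```python
def stream_text(byte_chunks):
    buf = b""
    for chunk in byte_chunks:
        buf += chunk
        try:
            # Only emit once the buffer is a complete UTF-8 sequence.
            yield buf.decode("utf-8")
            buf = b""
        except UnicodeDecodeError:
            # Mid multi-byte character: wait for more bytes.
            continue
```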
Andrei Betlen
9339929f56 Update llama.cpp 2023-04-26 20:00:54 -04:00
Mug
5f81400fcb Also ignore errors on input prompts 2023-04-26 14:45:51 +02:00
Mug
be2c961bc9 Merge branch 'main' of https://github.com/abetlen/llama-cpp-python 2023-04-26 14:38:09 +02:00