Andrei Betlen
65d9cc050c
Add openai frequency and presence penalty parameters. Closes #169
2023-05-08 01:30:18 -04:00
Andrei Betlen
a0b61ea2a7
Bugfix for models endpoint
2023-05-07 20:17:52 -04:00
Andrei Betlen
e72f58614b
Change pointer to lower overhead byref
2023-05-07 20:01:34 -04:00
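The `byref` change above is about ctypes call overhead; a minimal sketch of the difference between `ctypes.pointer` and `ctypes.byref` (illustrative only, not the project's code):

```python
import ctypes

n = ctypes.c_int(7)

# ctypes.pointer() builds a full POINTER(c_int) object that can be stored
# and dereferenced, at the cost of constructing a new pointer object.
ptr = ctypes.pointer(n)

# ctypes.byref() returns a lightweight CArgObject that is only valid as a
# foreign-function call argument -- cheaper when no real pointer is needed.
ref = ctypes.byref(n)
```

For hot paths that only pass a reference into a C call, `byref` avoids the extra pointer-object construction.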
Andrei Betlen
14da46f16e
Added cache size to settings object.
2023-05-07 19:33:17 -04:00
Andrei Betlen
0e94a70de1
Add in-memory longest prefix cache. Closes #158
2023-05-07 19:31:26 -04:00
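The lookup this commit describes can be sketched as follows; the class name, method names, and return shape are assumptions for illustration, not the project's actual `LlamaCache` API:

```python
from typing import Dict, Optional, Sequence, Tuple

class LongestPrefixCache:
    """Toy in-memory cache keyed by token sequences: lookup returns the
    stored state whose key is the longest complete prefix of the query."""

    def __init__(self) -> None:
        self._store: Dict[Tuple[int, ...], object] = {}

    @staticmethod
    def _shared_prefix_len(a: Sequence[int], b: Sequence[int]) -> int:
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return n

    def insert(self, tokens: Sequence[int], state: object) -> None:
        self._store[tuple(tokens)] = state

    def longest_prefix(self, tokens: Sequence[int]) -> Tuple[object, int]:
        best_key: Optional[Tuple[int, ...]] = None
        best_len = 0
        for key in self._store:
            n = self._shared_prefix_len(key, tokens)
            # a key only counts if it is a complete prefix of the query
            if n == len(key) and n > best_len:
                best_key, best_len = key, n
        if best_key is None:
            raise KeyError(tuple(tokens))  # miss -> KeyError, like a mapping
        return self._store[best_key], best_len

cache = LongestPrefixCache()
cache.insert([1, 2], "state-b")
cache.insert([1, 2, 3], "state-a")
state, matched = cache.longest_prefix([1, 2, 3, 4])  # longest match wins
```

Reusing the state for the longest cached prefix lets generation skip re-evaluating those tokens and only process the tail.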
Andrei Betlen
8dfde63255
Fix return type
2023-05-07 19:30:14 -04:00
Andrei Betlen
2753b85321
Format
2023-05-07 13:19:56 -04:00
Andrei Betlen
627811ea83
Add verbose flag to server
2023-05-07 05:09:10 -04:00
Andrei Betlen
3fbda71790
Fix mlock_supported and mmap_supported return type
2023-05-07 03:04:22 -04:00
Andrei Betlen
5a3413eee3
Update cpu_count
2023-05-07 03:03:57 -04:00
Andrei Betlen
1a00e452ea
Update settings fields and defaults
2023-05-07 02:52:20 -04:00
Andrei Betlen
86753976c4
Revert "llama_cpp server: delete some ignored / unused parameters"
...
This reverts commit b47b9549d5.
2023-05-07 02:02:34 -04:00
Andrei Betlen
c382d8f86a
Revert "llama_cpp server: mark model as required"
...
This reverts commit e40fcb0575.
2023-05-07 02:00:22 -04:00
Andrei Betlen
d8fddcce73
Merge branch 'main' of github.com:abetlen/llama_cpp_python into better-server-params-and-fields
2023-05-07 01:54:00 -04:00
Andrei Betlen
7c3743fe5f
Update llama.cpp
2023-05-07 00:12:47 -04:00
Andrei Betlen
bc853e3742
Fix type for eval_logits in LlamaState object
2023-05-06 21:32:50 -04:00
Andrei Betlen
98bbd1c6a8
Fix eval logits type
2023-05-05 14:23:14 -04:00
Andrei Betlen
b5f3e74627
Add return type annotations for embeddings and logits
2023-05-05 14:22:55 -04:00
Andrei Betlen
3e28e0e50c
Fix: runtime type errors
2023-05-05 14:12:26 -04:00
Andrei Betlen
e24c3d7447
Prefer explicit imports
2023-05-05 14:05:31 -04:00
Andrei Betlen
40501435c1
Fix: types
2023-05-05 14:04:12 -04:00
Andrei Betlen
66e28eb548
Fix temperature bug
2023-05-05 14:00:41 -04:00
Andrei Betlen
6702d2abfd
Fix candidates type
2023-05-05 14:00:30 -04:00
Andrei Betlen
5e7ddfc3d6
Fix llama_cpp types
2023-05-05 13:54:22 -04:00
Andrei Betlen
b6a9a0b6ba
Add types for all low-level api functions
2023-05-05 12:22:27 -04:00
Andrei Betlen
5be0efa5f8
Cache should raise KeyError when key is missing
2023-05-05 12:21:49 -04:00
Andrei Betlen
24fc38754b
Add cli options to server. Closes #37
2023-05-05 12:08:28 -04:00
Andrei Betlen
853dc711cc
Format
2023-05-04 21:58:36 -04:00
Andrei Betlen
97c6372350
Rewind model to longest prefix.
2023-05-04 21:58:27 -04:00
Andrei Betlen
329297fafb
Bugfix: Missing logits_to_logprobs
2023-05-04 12:18:40 -04:00
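The missing helper converts raw logits into log-probabilities; a standard numerically stable log-softmax sketch (illustrative, not necessarily the project's exact implementation):

```python
import math
from typing import List

def logits_to_logprobs(logits: List[float]) -> List[float]:
    # log-softmax: subtract the max logit first so exp() cannot overflow
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_z for x in logits]

lp = logits_to_logprobs([1.0, 2.0, 3.0])
```

Exponentiating the results recovers a probability distribution that sums to 1.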
Lucas Doyle
3008a954c1
Merge branch 'main' of github.com:abetlen/llama-cpp-python into better-server-params-and-fields
2023-05-03 13:10:03 -07:00
Andrei Betlen
9e5b6d675a
Improve logging messages
2023-05-03 10:28:10 -04:00
Andrei Betlen
43f2907e3a
Support smaller state sizes
2023-05-03 09:33:50 -04:00
Andrei Betlen
1d47cce222
Update llama.cpp
2023-05-03 09:33:30 -04:00
Lucas Doyle
b9098b0ef7
llama_cpp server: prompt is a string
...
Not sure why this union type was here, but taking a look at llama.py, prompt is only ever processed as a string for completion.
This was breaking types when generating an OpenAPI client.
2023-05-02 14:47:07 -07:00
Matt Hoffner
f97ff3c5bb
Update llama_cpp.py
2023-05-01 20:40:06 -07:00
Andrei
7ab08b8d10
Merge branch 'main' into better-server-params-and-fields
2023-05-01 22:45:57 -04:00
Andrei Betlen
9eafc4c49a
Refactor server to use factory
2023-05-01 22:38:46 -04:00
Andrei Betlen
dd9ad1c759
Formatting
2023-05-01 21:51:16 -04:00
Lucas Doyle
dbbfc4ba2f
llama_cpp server: fix to ChatCompletionRequestMessage
...
When I generate a client, it breaks because it fails to process the schema of ChatCompletionRequestMessage.
These changes fix that:
- I think `Union[Literal["user"], Literal["channel"], ...]` is the same as `Literal["user", "channel", ...]`
- Turns out the default value `Literal["user"]` isn't JSON serializable, so replace it with the plain string `"user"`
2023-05-01 15:38:19 -07:00
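The two points above can be demonstrated with plain `typing` and `json`; the role values here are illustrative, not the server's exact schema:

```python
import json
from typing import Literal, Union, get_args

# Both spellings accept exactly the same set of values:
RoleUnion = Union[Literal["user"], Literal["assistant"], Literal["system"]]
RoleFlat = Literal["user", "assistant", "system"]
union_values = tuple(v for lit in get_args(RoleUnion) for v in get_args(lit))

# Literal["user"] is a typing construct, not a JSON value, so it cannot
# serve as a serializable default -- the plain string "user" can:
json.dumps({"role": "user"})  # fine
try:
    json.dumps({"role": Literal["user"]})
    serializable = True
except TypeError:
    serializable = False
```

This is why the schema generator choked on the old default: the value in the model was a typing object, not a string.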
Lucas Doyle
fa2a61e065
llama_cpp server: fields for the embedding endpoint
2023-05-01 15:38:19 -07:00
Lucas Doyle
8dcbf65a45
llama_cpp server: define fields for chat completions
...
Slight refactor for common fields shared between completion and chat completion
2023-05-01 15:38:19 -07:00
Lucas Doyle
978b6daf93
llama_cpp server: add some more information to fields for completions
2023-05-01 15:38:19 -07:00
Lucas Doyle
a5aa6c1478
llama_cpp server: add missing top_k param to CreateChatCompletionRequest
...
`llama.create_chat_completion` definitely has a `top_k` argument, but it's missing from `CreateChatCompletionRequest`. Decision: add it.
2023-05-01 15:38:19 -07:00
Lucas Doyle
1e42913599
llama_cpp server: move logprobs to supported
...
I think this is actually supported (it's in the arguments of `Llama.__call__`, which is how the completion is invoked). Decision: mark as supported.
2023-05-01 15:38:19 -07:00
Lucas Doyle
b47b9549d5
llama_cpp server: delete some ignored / unused parameters
...
`n`, `presence_penalty`, `frequency_penalty`, `best_of`, `logit_bias`, `user`: not supported, excluded from the calls into llama. Decision: delete them.
2023-05-01 15:38:19 -07:00
Lucas Doyle
e40fcb0575
llama_cpp server: mark model as required
...
`model` is ignored, but currently marked "optional". On the one hand, it could be marked "required" to make it explicit, in case the server supports multiple llamas at the same time; on the other, it could be deleted since it's ignored. Decision: mark it required for the sake of OpenAI API compatibility.
I think that, out of all the parameters, `model` is probably the most important one for people to keep using, even if it's ignored for now.
2023-05-01 15:38:19 -07:00
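The "required, no default" decision can be sketched with a plain dataclass; the class and field names here are illustrative, not the server's actual pydantic models:

```python
from dataclasses import dataclass

@dataclass
class CompletionRequestSketch:
    model: str          # required: no default, so omitting it is an error
    prompt: str = ""    # optional fields keep their defaults

try:
    CompletionRequestSketch()           # missing `model`
    missing_model_allowed = True
except TypeError:
    missing_model_allowed = False
```

A field with no default forces callers to supply it, which is exactly what "required for OpenAI API compatibility" means here, even while the value itself is ignored.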
Andrei Betlen
b6747f722e
Fix logprob calculation. Fixes #134
2023-05-01 17:45:08 -04:00
Andrei Betlen
9ff9cdd7fc
Fix import error
2023-05-01 15:11:15 -04:00
Andrei Betlen
350a1769e1
Update sampling api
2023-05-01 14:47:55 -04:00