Andrei Betlen
bf64752535
chore: Bump version
2024-03-18 11:37:30 -04:00
Jeffrey Fong
8a60c7bc8c
fix: Fix and optimize functionary chat handler (#1282)
* fix functionary chat logic
* further fixes
---------
Co-authored-by: Andrei <abetlen@gmail.com>
2024-03-18 10:40:57 -04:00
Andrei Betlen
8d298b4750
feat: Update llama.cpp
2024-03-18 10:26:36 -04:00
Andrei Betlen
6eb25231e4
feat: Update llama.cpp
2024-03-15 12:58:45 -04:00
Andrei Betlen
20e6815252
fix: json mode
2024-03-15 12:58:34 -04:00
Andrei Betlen
1a9b8af2dd
feat: Update llama.cpp
2024-03-14 11:46:48 -04:00
Andrei Betlen
4084aabe86
fix: set default pooling type to unspecified
2024-03-14 10:04:57 -04:00
Andrei Betlen
d318cc8b83
fix: Set default pooling_type to mean, check for null pointer.
2024-03-14 09:17:41 -04:00
Andrei Betlen
dd0ee56217
feat: Update llama.cpp
2024-03-13 15:57:35 -04:00
Andrei Betlen
08e910f7a7
feat: Update llama.cpp
2024-03-10 23:45:05 -04:00
Andrei Betlen
a7281994d8
chore: Bump version
2024-03-08 21:14:44 -05:00
Andrei Betlen
919fca9f2b
Merge branch 'main' of https://github.com/abetlen/llama-cpp-python into main
2024-03-08 21:10:56 -05:00
Andrei Betlen
d02a9cf16f
Fixed JSON string grammar by blacklisting the control character set. Closes #1259
2024-03-08 21:10:53 -05:00
Felipe Lorenz
c139f8b5d5
feat: Add endpoints for tokenize, detokenize and count tokens (#1136)
* Add endpoint to count tokens
* Add tokenize and detokenize endpoints
* Change response key to tokens for tokenize endpoint
* Fix dependency bug
* Cleanup
* Remove example added by mistake
* Move tokenize, detokenize, and count to Extras namespace. Tag existing endpoints
---------
Co-authored-by: Andrei Betlen <abetlen@gmail.com>
2024-03-08 21:09:00 -05:00
Kevin Cao
1f3156d4f2
fix: Check for existence of clip model path (#1264)
2024-03-08 21:00:10 -05:00
Douglas Hanley
2811014bae
feat: Switch embed to llama_get_embeddings_seq (#1263)
* switch to llama_get_embeddings_seq
* Remove duplicate definition of llama_get_embeddings_seq
Co-authored-by: Andrei <abetlen@gmail.com>
---------
Co-authored-by: Andrei <abetlen@gmail.com>
2024-03-08 20:59:35 -05:00
Andrei Betlen
40c6b54f68
feat: Update llama.cpp
2024-03-08 20:58:50 -05:00
Andrei Betlen
93dc56ace8
Update llama.cpp
2024-03-06 01:32:00 -05:00
Andrei Betlen
87a6e5797e
feat: Update llama.cpp
2024-03-03 11:27:04 -05:00
Andrei Betlen
13177aae0f
chore: Bump version
2024-03-02 22:46:40 -05:00
Kenneth Hoste
663659f730
docs: fix small typo in README: 'model know how' -> 'model knows how' (#1244)
Co-authored-by: Andrei <abetlen@gmail.com>
2024-03-02 22:20:41 -05:00
Andrei Betlen
0e70984fb6
feat: Update llama.cpp
2024-03-02 22:20:04 -05:00
Andrei Betlen
d5df431278
chore: Bump version
2024-03-01 13:15:16 -05:00
Andrei Betlen
97aa3a153d
docs: Add information re: auto chat formats. Closes #1236
2024-03-01 13:10:25 -05:00
Andrei Betlen
f062a7f51d
feat: Update llama.cpp
2024-03-01 12:57:16 -05:00
Douglas Hanley
cf1fdd8a9a
docs: fix typo in README.md embeddings example. (#1232)
2024-02-29 13:55:50 -05:00
Andrei Betlen
8c71725d53
fix: Remove deprecated cfg sampling functions
2024-02-28 14:37:07 -05:00
Andrei Betlen
727d60c28a
misc: Format
2024-02-28 14:27:40 -05:00
Andrei Betlen
0d37ce52b1
feat: Update llama.cpp
2024-02-28 14:27:16 -05:00
Andrei Betlen
ffcd4b2636
chore: Bump version
2024-02-28 01:38:32 -05:00
Sigbjørn Skjæret
c36ab15e68
fix: eos/bos_token set correctly for Jinja2ChatFormatter and automatic chat formatter (#1230)
The token strings were not correctly retrieved (empty).
2024-02-28 01:30:31 -05:00
Andrei Betlen
fea33c9b94
feat: Update llama.cpp
2024-02-27 12:22:17 -05:00
Andrei
4d574bd765
feat(server): Add support for pulling models from Huggingface Hub (#1222)
* Basic support for hf pull on server
* Add hf_model_repo_id setting
* Update README
2024-02-26 14:35:08 -05:00
Andrei Betlen
b3e358dee4
docs: Add example of local image loading to README
2024-02-26 11:58:33 -05:00
Andrei Betlen
afe1e445c9
chore: Bump version
2024-02-26 11:43:24 -05:00
Andrei Betlen
9558ce7878
feat: Update llama.cpp
2024-02-26 11:40:58 -05:00
Andrei Betlen
a57d5dff86
feat: Update llama.cpp
2024-02-26 11:37:43 -05:00
Andrei Betlen
79c649c2d1
docs: Update multimodal example
2024-02-26 11:34:45 -05:00
Andrei Betlen
bf315ee7a9
docs: Update multimodal example
2024-02-26 11:32:11 -05:00
Andrei Betlen
dbaba3059d
fix: positional arguments only for low-level api
2024-02-26 11:31:11 -05:00
Andrei Betlen
78e536dcfe
fix: typo
2024-02-26 11:14:26 -05:00
Andrei Betlen
44558cbd7a
misc: llava_cpp use ctypes function decorator for binding
2024-02-26 11:07:33 -05:00
Andrei Betlen
8383a9e562
fix: llava this function takes at least 4 arguments (0 given)
2024-02-26 11:03:20 -05:00
Andrei Betlen
34111788fe
feat: Update llama.cpp
2024-02-26 10:58:41 -05:00
Andrei Betlen
5fc4c1efb6
Merge branch 'main' of https://github.com/abetlen/llama-cpp-python into main
2024-02-25 21:15:54 -05:00
Andrei Betlen
8e03fd9957
chore: Bump version
2024-02-25 21:15:42 -05:00
Andrei Betlen
e857c133fb
feat: Update llama.cpp
2024-02-25 21:14:01 -05:00
Andrei Betlen
252e1ff2b4
docs(examples): Add huggingface pull example
2024-02-25 21:09:41 -05:00
Andrei Betlen
bd4ec2e612
docs(examples): Add gradio chat example
2024-02-25 21:09:13 -05:00
Andrei Betlen
dcf38f6141
fix: remove prematurely committed change
2024-02-25 21:00:37 -05:00