Andrei Betlen
97aa3a153d
docs: Add information re: auto chat formats. Closes #1236
2024-03-01 13:10:25 -05:00
Andrei Betlen
f062a7f51d
feat: Update llama.cpp
2024-03-01 12:57:16 -05:00
Douglas Hanley
cf1fdd8a9a
docs: fix typo in README.md embeddings example. (#1232)
2024-02-29 13:55:50 -05:00
Andrei Betlen
8c71725d53
fix: Remove deprecated cfg sampling functions
2024-02-28 14:37:07 -05:00
Andrei Betlen
727d60c28a
misc: Format
2024-02-28 14:27:40 -05:00
Andrei Betlen
0d37ce52b1
feat: Update llama.cpp
2024-02-28 14:27:16 -05:00
Andrei Betlen
ffcd4b2636
chore: Bump version
2024-02-28 01:38:32 -05:00
Sigbjørn Skjæret
c36ab15e68
fix: eos/bos_token set correctly for Jinja2ChatFormatter and automatic chat formatter (#1230)
The token strings were not correctly retrieved (empty).
2024-02-28 01:30:31 -05:00
Andrei Betlen
fea33c9b94
feat: Update llama.cpp
2024-02-27 12:22:17 -05:00
Andrei
4d574bd765
feat(server): Add support for pulling models from Huggingface Hub (#1222)
* Basic support for hf pull on server
* Add hf_model_repo_id setting
* Update README
2024-02-26 14:35:08 -05:00
Andrei Betlen
b3e358dee4
docs: Add example of local image loading to README
2024-02-26 11:58:33 -05:00
Andrei Betlen
afe1e445c9
chore: Bump version
2024-02-26 11:43:24 -05:00
Andrei Betlen
9558ce7878
feat: Update llama.cpp
2024-02-26 11:40:58 -05:00
Andrei Betlen
a57d5dff86
feat: Update llama.cpp
2024-02-26 11:37:43 -05:00
Andrei Betlen
79c649c2d1
docs: Update multimodal example
2024-02-26 11:34:45 -05:00
Andrei Betlen
bf315ee7a9
docs: Update multimodal example
2024-02-26 11:32:11 -05:00
Andrei Betlen
dbaba3059d
fix: positional arguments only for low-level api
2024-02-26 11:31:11 -05:00
Andrei Betlen
78e536dcfe
fix: typo
2024-02-26 11:14:26 -05:00
Andrei Betlen
44558cbd7a
misc: llava_cpp use ctypes function decorator for binding
2024-02-26 11:07:33 -05:00
Andrei Betlen
8383a9e562
fix: llava "this function takes at least 4 arguments (0 given)"
2024-02-26 11:03:20 -05:00
Andrei Betlen
34111788fe
feat: Update llama.cpp
2024-02-26 10:58:41 -05:00
Andrei Betlen
5fc4c1efb6
Merge branch 'main' of https://github.com/abetlen/llama-cpp-python into main
2024-02-25 21:15:54 -05:00
Andrei Betlen
8e03fd9957
chore: Bump version
2024-02-25 21:15:42 -05:00
Andrei Betlen
e857c133fb
feat: Update llama.cpp
2024-02-25 21:14:01 -05:00
Andrei Betlen
252e1ff2b4
docs(examples): Add huggingface pull example
2024-02-25 21:09:41 -05:00
Andrei Betlen
bd4ec2e612
docs(examples): Add gradio chat example
2024-02-25 21:09:13 -05:00
Andrei Betlen
dcf38f6141
fix: remove prematurely committed change
2024-02-25 21:00:37 -05:00
Andrei Betlen
cbbcd888af
feat: Update llama.cpp
2024-02-25 20:52:14 -05:00
Andrei Betlen
19234aa0db
fix: Restore type hints for low-level api
2024-02-25 16:54:37 -05:00
Andrei Betlen
2292af5796
feat: Update llama.cpp
2024-02-25 16:53:58 -05:00
Andrei Betlen
221edb9ef1
feat: Update llama.cpp
2024-02-24 23:47:29 -05:00
Andrei Betlen
20ea6fd7d6
chore: Bump version
2024-02-23 12:38:36 -05:00
Andrei Betlen
b681674bf2
docs: Fix functionary repo_id
2024-02-23 12:36:13 -05:00
Andrei Betlen
f94faab686
Merge branch 'main' of https://github.com/abetlen/llama-cpp-python into main
2024-02-23 12:34:03 -05:00
Andrei Betlen
702306b381
docs: Restore functionary docs in README
2024-02-23 12:34:02 -05:00
Jeffrey Fong
bce6dc0ac2
docs: Update Functionary OpenAI Server Readme (#1193)
* update functionary parts in server readme
* add write-up about hf tokenizer
2024-02-23 12:24:10 -05:00
Andrei Betlen
47bad30dd7
fix: LlamaHFTokenizer now receives pre_tokens
2024-02-23 12:23:24 -05:00
Andrei Betlen
ded5d627a5
chore: Bump version
2024-02-23 11:32:43 -05:00
Luke Stanley
858496224e
feat: Auto detect Mixtral's slightly different format (#1214)
2024-02-23 11:27:38 -05:00
Andrei Betlen
db776a885c
fix: module 'llama_cpp.llama_cpp' has no attribute 'c_uint8'
2024-02-23 11:24:53 -05:00
Andrei Betlen
427d816ebf
chore: Bump version
2024-02-23 04:54:08 -05:00
Aditya Purandare
52d9d70076
docs: Update README.md to fix pip install llama cpp server (#1187)
Without the single quotes, zsh reports "no matches found" because it treats the square brackets as a glob pattern rather than passing them to pip. Quoting the package spec fixes it:
```bash
$ pip install llama-cpp-python[server]
zsh: no matches found: llama-cpp-python[server]
$ pip install 'llama-cpp-python[server]'
```
Co-authored-by: Andrei <abetlen@gmail.com>
2024-02-23 04:41:22 -05:00
Alvaro Bartolome
251a8a2cad
feat: Add Google's Gemma formatting via chat_format="gemma" (#1210)
* Add Google's Gemma formatting via `chat_format="gemma"`
* Replace `raise ValueError` with `logger.debug`
Co-authored-by: Andrei <abetlen@gmail.com>
2024-02-23 04:40:52 -05:00
Andrei Betlen
eebb102df7
feat: Update llama.cpp
2024-02-23 03:42:08 -05:00
Andrei Betlen
5f96621e92
misc: only search tests folder for tests
2024-02-23 03:40:25 -05:00
Andrei Betlen
b9aca612af
misc: use typesafe byref for internal classes
2024-02-23 03:40:07 -05:00
Andrei Betlen
a0ce429dc0
misc: use decorator to bind low level api functions, fixes docs
2024-02-23 03:39:38 -05:00
Andrei Betlen
410e02da51
docs: Fix typo
2024-02-23 00:43:31 -05:00
Andrei Betlen
eb56ce2e2a
docs: fix low-level api example
2024-02-22 11:33:05 -05:00
Andrei Betlen
0f8cad6cb7
docs: Update README
2024-02-22 11:31:44 -05:00