Author | Commit | Message | Date
Andrei Betlen | 5075c16fcc | Bugfix: n_batch should always be <= n_ctx | 2023-04-04 13:08:21 -04:00
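The constraint in this bugfix is simple: the batch size used for prompt processing can never exceed the context window. A minimal sketch of the invariant (the names n_batch and n_ctx follow llama.cpp's conventions; the clamp shown is illustrative, not the library's exact code):

```python
def clamp_n_batch(n_batch: int, n_ctx: int) -> int:
    """Keep the batch size within the context window (n_batch <= n_ctx)."""
    return min(n_batch, n_ctx)

# A request for a 512-token batch against a 256-token context is clamped.
assert clamp_n_batch(512, 256) == 256
assert clamp_n_batch(128, 256) == 128
```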
Mug | c862e8bac5 | Fix repeating instructions and an antiprompt bug | 2023-04-04 17:54:47 +02:00
Andrei Betlen | 248b0566fa | Update README | 2023-04-04 10:57:22 -04:00
Mug | 9cde7973cc | Fix stripping instruction prompt | 2023-04-04 16:20:27 +02:00
Mug | da5a6a7089 | Added instruction mode, fixed infinite generation, and various other fixes | 2023-04-04 16:18:26 +02:00
Mug | 0b32bb3d43 | Add instruction mode | 2023-04-04 11:48:48 +02:00
Andrei Betlen | ffe34cf64d | Allow user to set llama config from env vars | 2023-04-04 00:52:44 -04:00
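A hedged sketch of what environment-driven configuration can look like; the variable names LLAMA_MODEL_PATH and LLAMA_N_CTX are illustrative assumptions, not necessarily the names this commit introduced:

```python
import os

# Illustrative env-var names; the commit's actual names may differ.
model_path = os.environ.get("LLAMA_MODEL_PATH", "./models/ggml-model.bin")
n_ctx = int(os.environ.get("LLAMA_N_CTX", "512"))

# With a real model file on disk, the values would feed the constructor:
# from llama_cpp import Llama
# llm = Llama(model_path=model_path, n_ctx=n_ctx)
```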
Andrei Betlen | 05eb2087d8 | Small fixes for examples | 2023-04-03 20:33:07 -04:00
Andrei Betlen | caf3c0362b | Add return type for default __call__ method | 2023-04-03 20:26:08 -04:00
Andrei Betlen | 4aa349d777 | Add docstring for create_chat_completion | 2023-04-03 20:24:20 -04:00
Andrei Betlen | 4615f1e520 | Add chat completion method to docs | 2023-04-03 20:14:03 -04:00
Andrei Betlen | 5cf29d0231 | Bump version | 2023-04-03 20:13:46 -04:00
Andrei Betlen | 7fedf16531 | Add support for chat completion | 2023-04-03 20:12:44 -04:00
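The method name create_chat_completion is confirmed by the commits above; the OpenAI-style role/content message schema shown below is an assumption about how the method is called, and the model path is a placeholder:

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/ggml-model.bin")  # placeholder path
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Name the planets in the solar system."},
    ]
)
print(response["choices"][0]["message"]["content"])
```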
Andrei Betlen | 3dec778c90 | Update to more sensible return signature | 2023-04-03 20:12:14 -04:00
Andrei Betlen | f7ab8d55b2 | Update context size defaults; close #11 | 2023-04-03 20:11:13 -04:00
Andrei Betlen | c0a5c0171f | Add embed back into documentation | 2023-04-03 18:53:00 -04:00
Andrei Betlen | adf656d542 | Bump version | 2023-04-03 18:46:49 -04:00
Andrei Betlen | ae004eb69e | Fix #16 | 2023-04-03 18:46:19 -04:00
Mug | f1615f05e6 | Chat llama.cpp example implementation | 2023-04-03 22:54:46 +02:00
Andrei Betlen | 7d1977e8f0 | Bump version | 2023-04-03 14:49:36 -04:00
Andrei Betlen | 4530197629 | Update llama.cpp | 2023-04-03 14:49:07 -04:00
Andrei | 1d9a988644 | Merge pull request #10 from MillionthOdin16/patch-1: Improve Shared Library Loading Mechanism | 2023-04-03 14:47:11 -04:00
MillionthOdin16 | a0758f0077 | Update llama_cpp.py per PR review: rename lib_base_name and load_shared_library to _lib_base_name and _load_shared_library | 2023-04-03 13:06:50 -04:00
MillionthOdin16 | a40476e299 | Update llama_cpp.py: make shared library code more robust with platform-specific functionality and more descriptive errors when failures occur | 2023-04-02 21:50:13 -04:00
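The identifiers _lib_base_name and _load_shared_library come straight from these commit messages; everything else below (the extension mapping, search path, and error text) is a sketch of the described behavior, not the file's actual contents:

```python
import ctypes
import pathlib
import sys

def _load_shared_library(lib_base_name: str) -> ctypes.CDLL:
    # Pick the platform-specific shared library extension.
    if sys.platform.startswith("linux"):
        lib_ext = ".so"
    elif sys.platform == "darwin":
        lib_ext = ".dylib"
    elif sys.platform == "win32":
        lib_ext = ".dll"
    else:
        raise RuntimeError(f"Unsupported platform: {sys.platform}")

    # Illustrative search location: next to this module.
    lib_path = pathlib.Path(__file__).parent / f"lib{lib_base_name}{lib_ext}"
    if not lib_path.exists():
        raise FileNotFoundError(f"Shared library not found at {lib_path}")
    try:
        return ctypes.CDLL(str(lib_path))
    except Exception as e:
        raise RuntimeError(f"Failed to load shared library {lib_path}: {e}")

_lib_base_name = "llama"
# _lib = _load_shared_library(_lib_base_name)
```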
Andrei Betlen | b9a4513363 | Update README | 2023-04-02 21:03:39 -04:00
Andrei Betlen | 7284adcaa8 | Bump version | 2023-04-02 13:36:07 -04:00
Andrei Betlen | 1ed8cd023d | Update llama_cpp and add kv_cache api support | 2023-04-02 13:33:49 -04:00
Andrei Betlen | 74061b209d | Bump version | 2023-04-02 03:59:47 -04:00
Andrei Betlen | 4f509b963e | Bugfix: Stop sequences and missing max_tokens check | 2023-04-02 03:59:19 -04:00
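A sketch of the two checks this bugfix names: truncating output at a stop sequence and enforcing max_tokens. This mirrors the idea rather than the library's exact generation loop:

```python
def generate_with_stops(token_stream, detokenize, stop: str, max_tokens: int) -> str:
    """Accumulate decoded text, honoring a stop sequence and a token cap."""
    text = ""
    for i, token in enumerate(token_stream):
        if i >= max_tokens:          # the missing max_tokens check
            break
        text += detokenize(token)
        if stop and stop in text:    # the stop-sequence check
            text = text[: text.index(stop)]
            break
    return text

# Toy demo: "tokens" are characters and detokenize is the identity function.
print(generate_with_stops(iter("hello world"), lambda t: t, stop="lo", max_tokens=8))
# -> "hel"
```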
Andrei Betlen | 42dd11c2b4 | Bump version | 2023-04-02 00:10:46 -04:00
Andrei Betlen | 2bc184dc63 | Add new methods to docs | 2023-04-02 00:09:51 -04:00
Andrei Betlen | 353e18a781 | Move workaround to new sample method | 2023-04-02 00:06:34 -04:00
Andrei Betlen | a4a1bbeaa9 | Update api to allow for easier interactive mode | 2023-04-02 00:02:47 -04:00
Andrei Betlen | eef627c09c | Fix example documentation | 2023-04-01 17:39:35 -04:00
Andrei Betlen | a836639822 | Bump version | 2023-04-01 17:37:05 -04:00
Andrei Betlen | 1e4346307c | Add documentation for generate method | 2023-04-01 17:36:30 -04:00
Andrei Betlen | 33f1529c50 | Bump version | 2023-04-01 17:30:47 -04:00
Andrei Betlen | f14a31c936 | Document generate method | 2023-04-01 17:29:43 -04:00
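A usage sketch for the generate method being documented: a low-level generator that takes token ids in and yields sampled token ids out. The sampling parameter names (top_k, top_p, temp, repeat_penalty) are an assumption about the exact signature, and the model path is a placeholder:

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/ggml-model.bin")  # placeholder path
tokens = llm.tokenize(b"The quick brown fox")
for token in llm.generate(tokens, top_k=40, top_p=0.95, temp=0.8, repeat_penalty=1.1):
    if token == llm.token_eos():  # stop at the end-of-sequence token
        break
    print(llm.detokenize([token]).decode("utf-8", errors="ignore"), end="")
```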
Andrei Betlen | 67c70cc8eb | Add static methods for beginning and end of sequence tokens | 2023-04-01 17:29:30 -04:00
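These static accessors expose the special token ids without requiring a loaded model instance. A minimal sketch, assuming the method names token_bos and token_eos (the names used by later releases of the library):

```python
from llama_cpp import Llama

# Static methods: callable on the class, no Llama(...) instance needed.
bos = Llama.token_bos()  # beginning-of-sequence token id
eos = Llama.token_eos()  # end-of-sequence token id
print(f"bos={bos} eos={eos}")
```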
Andrei Betlen | caff127836 | Remove commented out code | 2023-04-01 15:13:01 -04:00
Andrei Betlen | f28bf3f13d | Bugfix: enable embeddings for fastapi server | 2023-04-01 15:12:25 -04:00
Andrei Betlen | c25b7dfc86 | Bump version | 2023-04-01 13:06:05 -04:00
Andrei Betlen | ed6f2a049e | Add streaming and embedding endpoints to fastapi example | 2023-04-01 13:05:20 -04:00
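A hedged sketch of an embedding endpoint like the one added to the fastapi example; the route path and request model are assumptions, while create_embedding and the embedding=True constructor flag are the high-level API pieces the bugfix above concerns:

```python
from fastapi import FastAPI
from pydantic import BaseModel
from llama_cpp import Llama

app = FastAPI()
# embedding=True must be set at load time for embeddings to work;
# the model path is a placeholder.
llm = Llama(model_path="./models/ggml-model.bin", embedding=True)

class EmbeddingRequest(BaseModel):
    input: str

@app.post("/v1/embeddings")  # illustrative route path
def create_embedding(request: EmbeddingRequest):
    return llm.create_embedding(request.input)
```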
Andrei Betlen | 0503e7f9b4 | Update api | 2023-04-01 13:04:12 -04:00
Andrei Betlen | 9f975ac44c | Add development section | 2023-04-01 13:03:56 -04:00
Andrei Betlen | 9fac0334b2 | Update embedding example to new api | 2023-04-01 13:02:51 -04:00
Andrei Betlen | 5e011145c5 | Update low level api example | 2023-04-01 13:02:10 -04:00
Andrei Betlen | 5f2e822b59 | Rename inference example | 2023-04-01 13:01:45 -04:00
Andrei Betlen | 318eae237e | Update high-level api | 2023-04-01 13:01:27 -04:00
Andrei Betlen | 3af274cbd4 | Update llama.cpp | 2023-04-01 13:00:09 -04:00