Changelog
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
[Unreleased]
[0.2.23]
- Update llama.cpp to ggerganov/llama.cpp@948ff137ec
- Add qwen chat format by @yhfgyyf in #1005
- Add support for running the server with SSL by @rgerganov in #994
- Replace logits_to_logprobs implementation with a numpy equivalent of llama.cpp's by @player1537 in #991
- Fix UnsupportedOperation: fileno in suppress_stdout_stderr by @zocainViken in #961
- Add Pygmalion chat format by @chiensen in #986
- README.md multimodal params fix by @zocainViken in #967
- Fix minor typo in README by @aniketmaurya in #958
[0.2.22]
- Update llama.cpp to ggerganov/llama.cpp@8a7b2fa528
- Fix conflict with transformers library by @kddubey in #952
[0.2.21]
- Update llama.cpp to ggerganov/llama.cpp@64e64aa255
- Make building llava optional by setting `CMAKE_ARGS="-DLLAVA_BUILD=OFF"` and using `LLAVA_CPP_LIB` to specify an alternative path to the shared library by @abetlen in e3941d9c67
[0.2.20]
- Update llama.cpp to ggerganov/llama.cpp@b38a16dfcf
- Add `zephyr` chat format by @fakerybakery in #938
- Add `baichuan` chat format by @caiyesd in #938
- Add `baichuan-2` chat format by @caiyesd in #936
- Improve documentation for server chat formats by @jooray in #934
- Fix typo in README by @antonvice in #940
- Fix typo in the Open Orca chat format by @gardner in #947
[0.2.19]
- Update llama.cpp to ggerganov/llama.cpp@0b871f1a04
- Fix #569: stop parameter in chat completion api should accept str by @abetlen in 128dc4731f
- Document server host and port parameters by @jamesbraza in #768
- Do not set grammar to None when initializing LlamaGrammar by @mthuurne in #834
- Add mistrallite, intel, and openchat formats by @fakerybakery in #927
- Add support for min_p parameter by @tk-master in #921 (see the example below)
- Fix #929: tokenizer adding leading space when generating from empty prompt by @abetlen in a34d480141
- Fix low level api example by @zocainViken in #925
- Fix missing package in openblas docker image by @ZisisTsatsas in #920
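A minimal sketch of the new `min_p` sampling parameter; the model path is a placeholder and any local GGUF model should work:

```python
from llama_cpp import Llama

# Placeholder model path (assumption): point this at any local GGUF model.
llm = Llama(model_path="./models/mistral-7b-instruct.Q4_K_M.gguf")

# min_p discards tokens whose probability is below min_p times the probability
# of the most likely token, as an alternative to top_p/top_k truncation.
output = llm.create_completion(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    min_p=0.05,
)
print(output["choices"][0]["text"])
```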
[0.2.18]
- Update llama.cpp to ggerganov/llama.cpp@6bb4908a17
[0.2.17]
- Update llama.cpp to ggerganov/llama.cpp@df9d1293de
- Hotfix: Set `CUDA_ARCHITECTURES=OFF` for `llava_shared` target on Windows by @abetlen in 4388f33414
[0.2.16]
- Update llama.cpp to ggerganov/llama.cpp@a75fa576ab
- Add `set_seed` to `Llama` class by @abetlen in fd41ed3a90 (see the example below)
- Fix server doc arguments by @kjunggithub in #892
- Fix response_format handler in llava chat handler by @abetlen in b62c449839
- Fix default max_tokens: chat completion is now unlimited (to context length) and completion is 16 tokens to match OpenAI defaults by @abetlen in e7962d2c73
- Fix json_schema_to_gbnf helper so that it takes a json schema string as input instead by @abetlen in faeae181b1
- Add support for $ref and $def in json_schema_to_gbnf to handle more complex function schemas by @abetlen in 770df34436
- Update functionary chat handler for new OpenAI api by @abetlen in 1b376c62b7
- Fix: add default stop sequence to chatml chat format by @abetlen in b84d76a844
- Fix sampling bug when logits_all=False by @abetlen in 6f0b0b1b84
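A short sketch of the new `set_seed` method; the model path is a placeholder:

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf")  # placeholder path (assumption)

# Reset the RNG seed between generations to make sampling reproducible.
llm.set_seed(1234)
first = llm.create_completion("Once upon a time", max_tokens=32)

llm.set_seed(1234)
second = llm.create_completion("Once upon a time", max_tokens=32)
# With the same seed and sampling parameters the two completions should match.
```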
[0.2.15]
- Update llama.cpp to ggerganov/llama.cpp@0a7c980b6f
- Add support for Llava1.5 multimodal models by @damian0815 and @abetlen in #821
- Update OpenAI API compatibility to match dev day update by @abetlen in #821
- Add seed parameter to completion and chat_completion functions of Llama class by @abetlen in 86aeb9f3a1
- Add JSON mode support to constrain chat completion to JSON objects by @abetlen in b30b9c338b (see the example below)
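A hedged sketch of JSON mode using the OpenAI-style `response_format` argument, together with the new `seed` parameter; the model path and chat format are illustrative assumptions:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path (assumption)
    chat_format="chatml",  # illustrative choice of chat format
)

# response_format={"type": "json_object"} constrains output to valid JSON,
# mirroring the OpenAI dev day API update mentioned above.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You answer in JSON."},
        {"role": "user", "content": "List three primary colors."},
    ],
    response_format={"type": "json_object"},
    seed=42,  # seed parameter added in this release
)
print(response["choices"][0]["message"]["content"])
```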
[0.2.14]
- Update llama.cpp to ggerganov/llama.cpp@f0b30ef7dc
- Add support for Huggingface Autotokenizer Chat Formats by @bioshazard and @abetlen in #790 and bbffdaebaa
- Fix llama-2 chat format by @earonesty in #869
- Add support for functionary chat format by @abetlen in #784
- Migrate inference from deprecated `llama_eval` API to `llama_batch` and `llama_decode` by @abetlen in #795
[0.2.13]
- Update llama.cpp to ggerganov/llama.cpp@51b2fc11f7
- Fix name 'open' is not defined exception when deleting model by @abetlen in 011b95d7f3
- Fix tokenization of special characters by @antoine-lizee in #850
[0.2.12]
- Update llama.cpp to ggerganov/llama.cpp@50337961a6
- Fix missing `n_seq_id` in `llama_batch` by @NickAlgra in #842
- Fix for shared libraries on Windows that start with `lib` prefix by @sujeendran in #848
- Fix exception raised in `__del__` when freeing models by @cebtenzzre in #846
- Performance improvement for logit bias by @zolastro in #851
- Fix suffix check arbitrary code execution bug by @mtasic85 in #854
- Fix typo in `function_call` parameter in `llama_types.py` by @akatora28 in #849
- Fix streaming not returning `finish_reason` by @gmcgoldr in #798
- Fix `n_gpu_layers` check to allow values less than 1 for server by @hxy9243 in #826
- Suppress stdout and stderr when freeing model by @paschembri in #803
- Fix `llama2` chat format by @delock in #808
- Add validation for tensor_split size by @eric1932 in #820
- Print stack trace on server error by @abetlen in d6a130a052
- Update docs for gguf by @johnccshen in #783
- Add `chatml` chat format by @abetlen in 305482bd41
[0.2.11]
- Fix bug: `llama_model_params` object has no attribute `logits_all` by @abetlen in d696251fbe
[0.2.10]
- Fix bug 'llama_model_params' object has no attribute 'embedding' by @abetlen in 42bb721d64
[0.2.9]
- Fix critical bug in pip installation of v0.2.8 due to the `.git` directory being included; fixed in ac853e01e1
[0.2.8]
- Update llama.cpp to ggerganov/llama.cpp@40e07a60f9
- Add configurable chat formats by @abetlen in #711 (see the example below)
- Fix rope scaling bug by @Josh-XT in #767
- Fix missing numa parameter in server by @abetlen in d9bce17794
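A minimal sketch of the configurable chat formats added in #711; the format name and model path are illustrative assumptions:

```python
from llama_cpp import Llama

# chat_format selects the prompt template used by create_chat_completion;
# "llama-2" is one of the built-in formats.
llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path (assumption)
    chat_format="llama-2",
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Tell me a joke."}],
)
print(response["choices"][0]["message"]["content"])
```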
[0.2.7]
- Update llama.cpp to ggerganov/llama.cpp@a98b1633d5
- Install required runtime DLLs to package directory on Windows by @abetlen in 8d75016549
- Add openai-processing-ms to server response header by @Tradunsky in #748
- Bump minimum version of scikit-build-core to 0.5.1 to fix msvc cmake issue by @abetlen in 1ed0f3ebe1
- Update `llama_types.py` to better match the OpenAI API; old names are aliased to new ones by @abetlen in dbca136fea
[0.2.6]
- Update llama.cpp to 80291a1d02a07f7f66666fb576c5b1e75aa48b46
[0.2.5]
- Fix docker images missing starlette-context dependency by @abetlen in 2291798900
- Fix loading dll in Windows Isolation Containers by @abetlen in 8474665625
- Fix build issue on m1 macs by @abetlen in dbd3a6d1ed
- Update docs to gguf and add hw acceleration docs for server by @jasonacox in #688
[0.2.4]
- Add NUMA support by @abetlen in f4090a0bb2. NOTE: low-level API users must call llama_backend_init at the start of their programs (see the sketch after this list)
- Fix tensor_split server cli argument by @abetlen in c4c440ba2d
- Made all `Llama` init parameters into keyword-only parameters by @abetlen in c8f9b8a734
- Added server params for `low_vram`, `main_gpu`, `lora_base`, and `lora_path` by @abetlen in 2920c4bf7e
- Removed server params for `rms_norm_eps` and `n_gqa` by @abetlen in 2920c4bf7e
- Fix boolean cli options by @abetlen in c999325e8e and 0449d29b9f
- Silence Pydantic Settings warnings about `model_alias` setting by @earonesty in #705
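For the NUMA note above, a rough sketch of what a low-level API program might do at startup; the `numa` keyword reflects the signature around this release and should be treated as an assumption:

```python
import llama_cpp

# Low-level API users must initialize the backend themselves before loading models;
# the numa flag mirrors this release's signature (assumption).
llama_cpp.llama_backend_init(numa=False)

# ... load a model and run inference using the low-level llama_cpp functions ...

# Release backend resources before exiting.
llama_cpp.llama_backend_free()
```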
[0.2.3]
- Update llama.cpp to ggerganov/llama.cpp@71ca2fad7d
- Add X-Request-ID request header for mirroring custom IDs by @devrimcavusoglu in #703
- Add pyproject extra for scikit-build-core to ensure compatible pathspec version by @abetlen in 6cfc54284b
- Fix issue with Literal and Optional cli arguments not working by @abetlen in #702
[0.2.2]
- Fix bug in pip install of v0.2.1 due to scikit-build-core removing all `.metal` files in the source distribution (see #701)
[0.2.1]
- Fix bug in pip install of v0.2.0 due to .git folder being included in the source distribution (see #701)
[0.2.0]
- Migrated to scikit-build-core build system by @abetlen in #499
- Use `numpy` views for `LogitsProcessor` and `StoppingCriteria` instead of python lists by @abetlen in #499
- Drop support for end-of-life Python 3.7 by @abetlen in #499
- Convert low level `llama.cpp` constants to use basic python types instead of `ctypes` types by @abetlen in #499
[0.1.85]
- Add `llama_cpp.__version__` attribute by @janvdp in #684
- Fix low level api examples by @jbochi in #680
[0.1.84]
- Update llama.cpp
[0.1.83]
- Update llama.cpp
[0.1.82]
- Update llama.cpp
[0.1.81]
- Update llama.cpp
[0.1.80]
- Update llama.cpp
[0.1.79]
- GGUF Support (breaking change requiring new model format)
[0.1.78]
- Grammar-based sampling via LlamaGrammar which can be passed to completions (see the sketch below)
- Make n_gpu_layers == -1 offload all layers
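A small sketch of grammar-based sampling and full GPU offload as described above; the grammar string and model path are illustrative assumptions:

```python
from llama_cpp import Llama, LlamaGrammar

# A tiny GBNF grammar that only allows the answers "yes" or "no".
grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

llm = Llama(
    model_path="./models/llama-7b.Q4_K_M.gguf",  # placeholder path (assumption)
    n_gpu_layers=-1,  # -1 offloads all layers to the GPU (added in this release)
)

output = llm.create_completion(
    "Is water wet? Answer:",
    grammar=grammar,
    max_tokens=8,
)
print(output["choices"][0]["text"])
```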
[0.1.77]
- (llama.cpp) Update llama.cpp; add support for LLaMa 2 70B
- (server) Add temporary n_gqa and rms_norm_eps parameters required for LLaMa 2 70B
[0.1.76]
- (llama.cpp) Update llama.cpp; add support for LLaMa 2 70B
[0.1.75]
- Update llama.cpp
[0.1.74]
- (server) OpenAI style error responses
[0.1.73]
- (server) Add rope parameters to server settings
[0.1.72]
- (llama.cpp) Update llama.cpp; added custom_rope for extended context lengths
[0.1.71]
- (llama.cpp) Update llama.cpp
- (server) Fix several pydantic v2 migration bugs
[0.1.70]
- (Llama.create_completion) Revert change so that `max_tokens` is not truncated to `context_size` in `create_completion`
- (server) Fixed changed settings field names from pydantic v2 migration
[0.1.69]
- (server) Streaming requests are now interrupted prematurely when a concurrent request is made. Can be controlled with the `interrupt_requests` setting.
- (server) Moved to fastapi v0.100.0 and pydantic v2
- (docker) Added a new "simple" image that builds llama.cpp from source when started.
- (server) performance improvements by avoiding unnecessary memory allocations during sampling
[0.1.68]
- (llama.cpp) Update llama.cpp
[0.1.67]
- Fix performance bug in Llama model by pre-allocating memory for tokens and logits.
- Fix bug in Llama model where the model was not freed after use.
[0.1.66]
- (llama.cpp) New model API
- Performance issue during eval caused by looped np.concatenate call
- State pickling issue when saving cache to disk
[0.1.65]
- (llama.cpp) Fix struct misalignment bug
[0.1.64]
- (llama.cpp) Update llama.cpp
- Fix docs for seed. Set -1 for random.
[0.1.63]
- (llama.cpp) Add full gpu utilisation in CUDA
- (llama.cpp) Add get_vocab
- (llama.cpp) Add low_vram parameter
- (server) Add logit_bias parameter
[0.1.62]
- Metal support working
- Cache re-enabled
[0.1.61]
- Fix broken pip installation
[0.1.60]
NOTE: This release was deleted due to a bug with the packaging system that caused pip installations to fail.
- Truncate max_tokens in create_completion so the requested tokens don't exceed the context size.
- Temporarily disable cache for completion requests
[v0.1.59]
- (llama.cpp) k-quants support
- (server) mirostat sampling parameters to server
- Support both `.so` and `.dylib` for `libllama` on MacOS
[v0.1.58]
- (llama.cpp) Metal Silicon support
[v0.1.57]
- (llama.cpp) OpenLlama 3B support
[v0.1.56]
- (misc) Added first version of the changelog
- (server) Use async routes
- (python-api) Use numpy for internal buffers to reduce memory usage and improve performance.
- (python-api) Performance bug in stop sequence check slowing down streaming.