# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.2.2]
- Fix bug in pip install of v0.2.1 due to scikit-build-core removing all `.metal` files in the source distribution (see #701)
## [0.2.1]
- Fix bug in pip install of v0.2.0 due to the `.git` folder being included in the source distribution (see #701)
## [0.2.0]
- Migrated to scikit-build-core build system by @abetlen in #499
- Use `numpy` views for `LogitsProcessor` and `StoppingCriteria` instead of Python lists by @abetlen in #499 (see the sketch after this list)
- Drop support for end-of-life Python 3.7 by @abetlen in #499
- Convert low level `llama.cpp` constants to use basic Python types instead of `ctypes` types by @abetlen in #499
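As a rough illustration of what the numpy change means for user callbacks, a logits processor now receives numpy array views rather than Python lists; the function name and token id below are hypothetical:

```python
import numpy as np
import numpy.typing as npt

def ban_token(
    input_ids: npt.NDArray[np.intc],  # prompt + generated tokens so far
    scores: npt.NDArray[np.single],   # raw logits for the next token
) -> npt.NDArray[np.single]:
    scores[123] = -np.inf  # suppress a hypothetical token id in place
    return scores
```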
## [0.1.85]
- Add `llama_cpp.__version__` attribute by @janvdp in #684 (see the snippet after this list)
- Fix low level api examples by @jbochi in #680
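With the new attribute, checking the installed version is a one-liner:

```python
import llama_cpp

print(llama_cpp.__version__)  # e.g. "0.1.85"
```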
## [0.1.84]
- Update llama.cpp
## [0.1.83]
- Update llama.cpp
## [0.1.82]
- Update llama.cpp
## [0.1.81]
- Update llama.cpp
## [0.1.80]
- Update llama.cpp
## [0.1.79]
- GGUF Support (breaking change requiring new model format)
## [0.1.78]
- Grammar-based sampling via `LlamaGrammar`, which can be passed to completion calls (see the sketch after this list)
- Make `n_gpu_layers == -1` offload all layers
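A minimal sketch combining both additions; the model path and toy GBNF grammar are placeholders:

```python
from llama_cpp import Llama, LlamaGrammar

# Constrain the model to answer only "yes" or "no" (toy grammar).
grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

# n_gpu_layers=-1 now offloads every layer to the GPU.
llm = Llama(model_path="./model.bin", n_gpu_layers=-1)
out = llm("Is water wet? Answer:", grammar=grammar, max_tokens=4)
print(out["choices"][0]["text"])
```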
## [0.1.77]
- (llama.cpp) Update llama.cpp, adding support for LLaMa 2 70B
- (server) Add temporary `n_gqa` and `rms_norm_eps` parameters required for LLaMa 2 70B
## [0.1.76]
- (llama.cpp) Update llama.cpp, adding support for LLaMa 2 70B
## [0.1.75]
- Update llama.cpp
## [0.1.74]
- (server) OpenAI-style error responses
## [0.1.73]
- (server) Add rope parameters to server settings
## [0.1.72]
- (llama.cpp) Update llama.cpp, adding `custom_rope` for extended context lengths
## [0.1.71]
- (llama.cpp) Update llama.cpp
- (server) Fix several pydantic v2 migration bugs
## [0.1.70]
- (Llama.create_completion) Revert change so that `max_tokens` is not truncated to `context_size` in `create_completion`
- (server) Fix settings field names that changed in the pydantic v2 migration
## [0.1.69]
- (server) Streaming requests can now be interrupted prematurely when a concurrent request is made; this is controlled with the `interrupt_requests` setting (see the sketch after this list)
- (server) Moved to fastapi v0.100.0 and pydantic v2
- (docker) Added a new "simple" image that builds llama.cpp from source when started.
- (server) Performance improvements by avoiding unnecessary memory allocations during sampling
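A hedged sketch of running the server in-process with the new setting; it assumes `llama_cpp.server.app` exposes `Settings` and `create_app` (as in this era of the codebase), and the model path is a placeholder:

```python
import uvicorn
from llama_cpp.server.app import Settings, create_app

# interrupt_requests=False keeps in-flight streams alive
# when a concurrent request arrives.
settings = Settings(model="./model.bin", interrupt_requests=False)
app = create_app(settings=settings)
uvicorn.run(app, host="0.0.0.0", port=8000)
```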
## [0.1.68]
- (llama.cpp) Update llama.cpp
## [0.1.67]
- Fix performance bug in the Llama model by pre-allocating memory for tokens and logits.
- Fix bug in the Llama model where the model was not freed after use.
## [0.1.66]
- (llama.cpp) New model API
- Fix performance issue during eval caused by a looped `np.concatenate` call
- Fix state pickling issue when saving cache to disk
## [0.1.65]
- (llama.cpp) Fix struct misalignment bug
## [0.1.64]
- (llama.cpp) Update llama.cpp
- Fix docs for `seed`: set `-1` for a random seed
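For illustration (the model path is a placeholder):

```python
from llama_cpp import Llama

# Per the corrected docs: seed=-1 selects a random seed.
llm = Llama(model_path="./model.bin", seed=-1)
```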
## [0.1.63]
- (llama.cpp) Add full GPU utilisation in CUDA
- (llama.cpp) Add `get_vocab`
- (llama.cpp) Add `low_vram` parameter
- (server) Add `logit_bias` parameter (see the sketch after this list)
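Assuming the server keeps the OpenAI-style request shape, passing the new parameter might look like this (the token id and bias value are illustrative):

```python
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "prompt": "The capital of France is",
        "max_tokens": 8,
        # Map token ids to additive biases; -100 effectively bans a token.
        "logit_bias": {"15043": -100},
    },
)
print(resp.json()["choices"][0]["text"])
```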
## [0.1.62]
- Metal support working
- Cache re-enabled
## [0.1.61]
- Fix broken pip installation
## [0.1.60]
NOTE: This release was deleted due to a bug with the packaging system that caused pip installations to fail.
- Truncate `max_tokens` in `create_completion` so the requested tokens don't exceed the context size.
- Temporarily disable cache for completion requests
## [0.1.59]
- (llama.cpp) k-quants support
- (server) Add mirostat sampling parameters (see the sketch after this list)
- Support both `.so` and `.dylib` for `libllama` on macOS
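A sketch of the new sampling parameters in a completion request to the server; the values shown are illustrative:

```python
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "prompt": "Once upon a time",
        "max_tokens": 32,
        "mirostat_mode": 2,   # 0 = disabled, 2 = Mirostat 2.0
        "mirostat_tau": 5.0,  # target entropy
        "mirostat_eta": 0.1,  # learning rate
    },
)
print(resp.json()["choices"][0]["text"])
```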
## [0.1.58]
- (llama.cpp) Metal support on Apple Silicon
## [0.1.57]
- (llama.cpp) OpenLlama 3B support
## [0.1.56]
- (misc) Added first version of the changelog
- (server) Use async routes
- (python-api) Use numpy for internal buffers to reduce memory usage and improve performance.
- (python-api) Fix performance bug in the stop sequence check that slowed down streaming.