* feat: add explicit methods to free model
This commit introduces a `close` method to both `Llama` and `_LlamaModel`,
allowing users to explicitly free the model from RAM/VRAM.
The previous implementation relied on the destructor of `_LlamaModel` to free
the model. However, in Python the timing of destructor calls is not
guaranteed; for instance, the `del` statement does not ensure that the
destructor runs immediately.
This commit provides an explicit method to release the model, which works
immediately and allows the user to load another model without memory issues.
Additionally, this commit implements a context manager in the `Llama` class,
enabling the automatic closure of the `Llama` object when used with the `with`
statement.
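The explicit-release pattern described above can be sketched with a stand-in class (the class and attribute names here are illustrative, not the actual `Llama` implementation):

```python
# Sketch of an explicit close() plus context-manager support, using a
# stand-in class rather than the real Llama model.


class Model:
    """Stand-in for an object holding native RAM/VRAM buffers."""

    def __init__(self):
        self._freed = False  # pretend native buffers were allocated here

    def close(self):
        # Idempotent: calling close() twice must be safe.
        if not self._freed:
            self._freed = True  # real code would free RAM/VRAM here

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()
        return False  # do not swallow exceptions from the with body

    def __del__(self):
        # The destructor remains as a fallback, but its timing is not
        # guaranteed, hence the explicit method.
        self.close()


# The with statement guarantees close() runs as soon as the block exits,
# so another model can be loaded without waiting on garbage collection.
with Model() as m:
    pass
```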
* feat: Implement ContextManager in _LlamaModel, _LlamaContext, and _LlamaBatch
This commit enables automatic resource management by
implementing the `ContextManager` protocol in `_LlamaModel`,
`_LlamaContext`, and `_LlamaBatch`. This ensures that
resources are properly managed and released within a `with`
statement, enhancing robustness and safety in resource handling.
* feat: add ExitStack for Llama's internal class closure
This update uses `contextlib.ExitStack` to manage and close the internal
objects of `Llama`, making resource cleanup safer and more predictable.
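A rough sketch of the `ExitStack`/`closing` combination, with illustrative resource and owner classes standing in for `Llama` and its internals:

```python
from contextlib import ExitStack, closing


class Resource:
    """Stand-in for an internal wrapper with a close() method."""

    def __init__(self, name, log):
        self.name, self.log = name, log

    def close(self):
        self.log.append(self.name)  # record teardown order


class Owner:
    """Stand-in for Llama: registers internals on one ExitStack."""

    def __init__(self):
        self.log = []
        self._stack = ExitStack()
        # closing() adapts any object exposing close() to the
        # context-manager protocol expected by enter_context().
        self.model = self._stack.enter_context(closing(Resource("model", self.log)))
        self.ctx = self._stack.enter_context(closing(Resource("ctx", self.log)))
        self.batch = self._stack.enter_context(closing(Resource("batch", self.log)))

    def close(self):
        # One call tears everything down, in reverse registration order.
        self._stack.close()
```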
* Use contextlib ExitStack and closing
* Explicitly free model when closing resources on server
---------
Co-authored-by: Andrei Betlen <abetlen@gmail.com>
* Support SPM infill
* typo--
* one less layer of parentheses necessary
* new required internals
* manually add bos/eos if model requires it
* add bos even when unknown
This is identical behaviour to llama.cpp
I guess any model that doesn't use BOS is recent enough to have the add_bos_token metadata.
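The decision sketched above can be expressed as a small helper (the helper itself is illustrative; the GGUF key name is the standard `tokenizer.ggml.add_bos_token`, and note that GGUF metadata values arrive as strings, not booleans):

```python
# Illustrative sketch: default to adding BOS when the metadata key is
# absent, matching llama.cpp's behaviour described above.


def should_add_bos(metadata: dict) -> bool:
    value = metadata.get("tokenizer.ggml.add_bos_token")
    if value is None:
        return True  # unknown model: add BOS anyway, like llama.cpp
    return value == "true"  # metadata are strings, not booleans
```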
* don't add bos/eos on non-infill pre-tokenized prompt
* add tokenizer hack to remove leading space in suffix
* I keep forgetting metadata are strings
* check if bos exists
* add example
* add cls/sep instead of bos/eos for WPM vocab
* simplify
* color-code filtered suffix
---------
Co-authored-by: Andrei Betlen <abetlen@gmail.com>
* Templates sometimes have BOS in them, remove duplicate
* tokenize chat format prompts before completion
This is to ensure that we don't duplicate any special tokens.
Hopefully I amended the existing formats correctly?
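The deduplication idea can be sketched like this (the token id and helper name are illustrative): when a template already emits BOS and the tokenizer prepends one as well, the prompt starts with two BOS tokens, and the duplicate should be dropped.

```python
BOS = 1  # illustrative BOS token id


def dedupe_leading_bos(tokens: list[int]) -> list[int]:
    # Collapse a run of leading BOS tokens down to a single one.
    out = list(tokens)
    while len(out) >= 2 and out[0] == BOS and out[1] == BOS:
        out.pop(0)
    return out
```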
* updated comment
* corrected a few
* add some missing internals
* proper bos/eos detection
* just let tokenizer do the job
* typo--
* align test with new response
* changed to a warning
* move to another PR
* Use python warnings module
---------
Co-authored-by: Andrei Betlen <abetlen@gmail.com>
* Disable Windows+CUDA workaround when compiling for HIPBLAS
* fix spacing
* change condition to check for Windows & CUDA
Co-authored-by: Andrei <abetlen@gmail.com>
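The refined condition can be sketched as a predicate; the environment-variable check is an assumption about how the workaround detects a CUDA install, so treat the names as illustrative:

```python
# Sketch: apply the Windows DLL-path workaround only for Windows + CUDA
# builds, so HIPBLAS builds skip it.


def needs_cuda_dll_workaround(platform: str, environ: dict) -> bool:
    return platform == "win32" and "CUDA_PATH" in environ
```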
---------
Co-authored-by: Andrei <abetlen@gmail.com>