llama.cpp/llama_cpp
Junpei Kawamoto 320a5d7ea5
feat: Add .close() method to Llama class to explicitly free model from memory (#1513)
* feat: add explicit methods to free model

This commit introduces a `close` method to both `Llama` and `_LlamaModel`,
allowing users to explicitly free the model from RAM/VRAM.

The previous implementation relied on the destructor of `_LlamaModel` to free
the model. However, in Python the timing of destructor calls is not guaranteed:
the `del` statement only removes a reference, and does not guarantee immediate
invocation of the destructor.

This commit provides an explicit method to release the model, which works
immediately and allows the user to load another model without memory issues.

Additionally, this commit implements a context manager in the `Llama` class,
enabling the automatic closure of the `Llama` object when used with the `with`
statement.

* feat: Implement ContextManager in _LlamaModel, _LlamaContext, and _LlamaBatch

This commit enables automatic resource management by
implementing the `ContextManager` protocol in `_LlamaModel`,
`_LlamaContext`, and `_LlamaBatch`. This ensures that
resources are properly managed and released within a `with`
statement, enhancing robustness and safety in resource handling.

* feat: add ExitStack for Llama's internal class closure

This update uses `contextlib.ExitStack` to manage and close Llama's internal
objects, making resource management safer and more efficient.
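The `ExitStack` approach can be sketched roughly as below, with stand-in classes (the `_Resource` helper and attribute names are illustrative, not the real implementation): each internal object is registered on the stack at construction time, and a single `close()` tears them all down in reverse (LIFO) order.

```python
from contextlib import ExitStack, closing


class _Resource:
    """Stand-in for an internal object with a close() method."""

    def __init__(self, name, log):
        self.name, self.log = name, log

    def close(self):
        self.log.append(self.name)  # record close order for illustration


class LlamaSketch:
    """Sketch: register each internal object so one close() frees them all."""

    def __init__(self):
        self.closed = []
        self._stack = ExitStack()
        # closing() wraps a close()-bearing object as a context manager.
        self._model = self._stack.enter_context(
            closing(_Resource("model", self.closed)))
        self._ctx = self._stack.enter_context(
            closing(_Resource("ctx", self.closed)))
        self._batch = self._stack.enter_context(
            closing(_Resource("batch", self.closed)))

    def close(self):
        self._stack.close()  # closes batch, then ctx, then model


llm = LlamaSketch()
llm.close()
assert llm.closed == ["batch", "ctx", "model"]  # LIFO teardown order
```

The LIFO order matters here: objects created later (such as a context that references the model) are released before the objects they depend on.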

* Use contextlib ExitStack and closing

* Explicitly free model when closing resources on server

---------

Co-authored-by: Andrei Betlen <abetlen@gmail.com>
2024-06-13 04:16:14 -04:00
server feat: Add .close() method to Llama class to explicitly free model from memory (#1513) 2024-06-13 04:16:14 -04:00
__init__.py chore: Bump version 2024-06-10 11:14:33 -04:00
_internals.py feat: Add .close() method to Llama class to explicitly free model from memory (#1513) 2024-06-13 04:16:14 -04:00
_logger.py fix: Use llama_log_callback to avoid suppress_stdout_stderr 2024-02-05 21:52:12 -05:00
_utils.py fix: Suppress all logs when verbose=False, use hardcoded fileno's to work in colab notebooks. Closes #796 Closes #729 2024-04-30 15:45:34 -04:00
llama.py feat: Add .close() method to Llama class to explicitly free model from memory (#1513) 2024-06-13 04:16:14 -04:00
llama_cache.py Move cache classes to llama_cache submodule. 2024-01-17 09:09:12 -05:00
llama_chat_format.py fix: Avoid duplicate special tokens in chat formats (#1439) 2024-06-04 10:15:41 -04:00
llama_cpp.py feat: Update llama.cpp 2024-06-07 02:02:12 -04:00
llama_grammar.py fix: UTF-8 handling with grammars (#1415) 2024-04-30 14:33:23 -04:00
llama_speculative.py Add speculative decoding (#1120) 2024-01-31 14:08:14 -05:00
llama_tokenizer.py fix: LlamaHFTokenizer now receives pre_tokens 2024-02-23 12:23:24 -05:00
llama_types.py feat: Allow for possibly non-pooled embeddings (#1380) 2024-04-25 21:32:44 -04:00
llava_cpp.py feat: Generic Chat Formats, Tool Calling, and Huggingface Pull Support for Multimodal Models (Obsidian, LLaVA1.6, Moondream) (#1147) 2024-04-30 01:35:38 -04:00
py.typed Add py.typed 2023-08-11 09:58:48 +02:00