ollama/llm
Blake Mizerany cb42e607c5
llm: speed up gguf decoding by a lot (#5246)
Previously, two costly patterns made loading GGUF files and their
metadata and tensor information very slow:

  * Too many allocations when decoding strings
  * Hitting disk for each read of each key and value, resulting in an
    excessive number of syscalls and disk reads (see the buffered-read
    sketch below).
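
For context, a minimal sketch of the buffered approach, not the actual
gguf.go code; the readString helper, the buffer size, and the
model.gguf path are illustrative assumptions:

```go
package main

import (
	"bufio"
	"encoding/binary"
	"io"
	"os"
)

// readString decodes a length-prefixed GGUF string from a buffered
// reader, so each small read hits the in-memory buffer instead of
// issuing a separate syscall against the underlying file.
func readString(r *bufio.Reader) (string, error) {
	var n uint64
	if err := binary.Read(r, binary.LittleEndian, &n); err != nil {
		return "", err
	}
	// One allocation sized exactly to the string, rather than
	// growing intermediate buffers per read.
	buf := make([]byte, n)
	if _, err := io.ReadFull(r, buf); err != nil {
		return "", err
	}
	return string(buf), nil
}

func main() {
	f, err := os.Open("model.gguf") // hypothetical path
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// Wrapping the file in a bufio.Reader batches many small
	// key/value reads into a few large disk reads.
	r := bufio.NewReaderSize(f, 1<<20)
	_, _ = readString(r)
}
```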

The show API is now down to 33ms from 800ms+ for llama3 on a MacBook
Pro M3.

This commit also allows skipping the collection of large arrays of
values when decoding GGUFs, if desired. When such keys are
encountered, their values are set to null and encoded as null in JSON
(see the sketch below).
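
To illustrate the null encoding, a hypothetical decoded-metadata map
(the key names are only examples) marshaled with encoding/json:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A large array key was skipped during decoding, so its value
	// is a nil interface rather than the full array of values.
	kv := map[string]any{
		"general.architecture":  "llama",
		"tokenizer.ggml.tokens": nil, // large array, not collected
	}
	out, _ := json.Marshal(kv)
	fmt.Println(string(out))
	// {"general.architecture":"llama","tokenizer.ggml.tokens":null}
}
```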

Also, this fixes a broken test that did not encode valid GGUF.
2024-06-24 21:47:52 -07:00
ext_server remove confusing log message 2024-06-19 11:14:11 -07:00
generate Merge pull request #5072 from dhiltgen/windows_path 2024-06-19 09:13:39 -07:00
llama.cpp@7c26775adb llm: update llama.cpp commit to 7c26775 (#4896) 2024-06-17 15:56:16 -04:00
patches llm: update llama.cpp commit to 7c26775 (#4896) 2024-06-17 15:56:16 -04:00
filetype.go Add support for IQ1_S, IQ3_S, IQ2_S, IQ4_XS, IQ4_NL (#4322) 2024-05-23 13:21:49 -07:00
ggla.go llm: speed up gguf decoding by a lot (#5246) 2024-06-24 21:47:52 -07:00
ggml.go llm: speed up gguf decoding by a lot (#5246) 2024-06-24 21:47:52 -07:00
ggml_test.go llm: speed up gguf decoding by a lot (#5246) 2024-06-24 21:47:52 -07:00
gguf.go llm: speed up gguf decoding by a lot (#5246) 2024-06-24 21:47:52 -07:00
llm.go revert tokenize ffi (#4761) 2024-05-31 18:54:21 -07:00
llm_darwin_amd64.go Switch back to subprocessing for llama.cpp 2024-04-01 16:48:18 -07:00
llm_darwin_arm64.go Switch back to subprocessing for llama.cpp 2024-04-01 16:48:18 -07:00
llm_linux.go Switch back to subprocessing for llama.cpp 2024-04-01 16:48:18 -07:00
llm_windows.go Move nested payloads to installer and zip file on windows 2024-04-23 16:14:47 -07:00
memory.go handle asymmetric embedding KVs 2024-06-20 09:57:27 -07:00
memory_test.go llm: speed up gguf decoding by a lot (#5246) 2024-06-24 21:47:52 -07:00
payload.go Move libraries out of users path 2024-06-17 13:12:18 -07:00
server.go llm: speed up gguf decoding by a lot (#5246) 2024-06-24 21:47:52 -07:00
status.go Switch back to subprocessing for llama.cpp 2024-04-01 16:48:18 -07:00