ollama/llm/payload_linux.go at commit 29715dbca7 (9 lines, 99 B, Go)
History (2024-01-10): Always dynamically load the llm server library. This switches darwin to dynamic loading and refactors the code now that no static linking of the library is used on any platform.
package llm

import (
	"embed"
)
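As a rough illustration of the dynamic-loading approach the commit above describes (a sketch, not ollama's actual loader), a linux process can open a shared library at runtime through cgo and dlopen. The library path used here is hypothetical:

package main

/*
#cgo LDFLAGS: -ldl
#include <dlfcn.h>
#include <stdlib.h>
*/
import "C"

import (
	"fmt"
	"unsafe"
)

func main() {
	// Hypothetical path to a previously extracted llm server library.
	name := C.CString("/tmp/ollama/libext_server.so")
	defer C.free(unsafe.Pointer(name))

	// Resolve and load the shared object at runtime instead of linking statically.
	handle := C.dlopen(name, C.RTLD_NOW)
	if handle == nil {
		fmt.Println("dlopen failed:", C.GoString(C.dlerror()))
		return
	}
	defer C.dlclose(handle)
	fmt.Println("server library loaded dynamically")
}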
History (2024-02-16): Revamp ROCm support. This refines where we extract the LLM libraries to by adding a new OLLAMA_HOME env var, which defaults to `~/.ollama`. The logic was already idempotent, so this should speed up startups after the first time a new release is deployed. It also cleans up after itself.

We now build only a single ROCm version (latest major) on both windows and linux. Given the large size of ROCm's tensor files, we split the dependency out: it is bundled into the installer on windows, and a separate download on linux. The linux install script is now smart and detects the presence of AMD GPUs, checks whether ROCm v6 is already present, and if not, downloads our dependency tar file.

For linux discovery, we now use sysfs and check each GPU against what ROCm supports so we can degrade to CPU gracefully instead of having llama.cpp+rocm assert/crash on us. For windows, we now use go's windows dynamic library loading logic to access the amdhip64.dll APIs to query the GPU information.
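A minimal sketch of the OLLAMA_HOME resolution that commit describes, with payloadsDir as a hypothetical helper name (the real lookup may differ); it falls back to ~/.ollama when the env var is unset:

package llm

import (
	"os"
	"path/filepath"
)

// payloadsDir returns the directory the LLM libraries are extracted to:
// $OLLAMA_HOME if set, otherwise ~/.ollama.
func payloadsDir() (string, error) {
	if dir := os.Getenv("OLLAMA_HOME"); dir != "" {
		return dir, nil
	}
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}
	return filepath.Join(home, ".ollama"), nil
}

The remainder of the file embeds the per-variant library payloads that such a directory would be populated from: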
//go:embed llama.cpp/build/linux/*/*/lib/*
var libEmbed embed.FS
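One way an embed.FS like libEmbed might be unpacked at startup (a sketch under assumed names; extractLibs is hypothetical, not ollama's actual API) is to walk the embedded tree with io/fs and copy each file out to the target directory:

package llm

import (
	"io/fs"
	"os"
	"path/filepath"
)

// extractLibs copies every embedded file under the linux build tree into dest.
func extractLibs(dest string) error {
	if err := os.MkdirAll(dest, 0o755); err != nil {
		return err
	}
	return fs.WalkDir(libEmbed, "llama.cpp/build/linux", func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			return nil
		}
		data, err := libEmbed.ReadFile(path)
		if err != nil {
			return err
		}
		// 0o755 keeps any executable payloads runnable after extraction.
		return os.WriteFile(filepath.Join(dest, filepath.Base(path)), data, 0o755)
	})
}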