# llama
This package integrates the llama.cpp library as a Go package and makes it easy to build it with tags for different CPU and GPU processors.

Supported:

- CPU
- avx, avx2
- macOS Metal
- Windows CUDA
- Windows ROCm
- Linux CUDA
- Linux ROCm
- Llava
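
The tags mentioned above gate which source files and compiler flags are included in a build. As a rough illustration of the mechanism (a minimal sketch; the file name and exact flags below are examples, not necessarily this package's actual layout):

```go
//go:build avx2

// cpu_avx2.go (hypothetical name): compiled only when the avx2 tag is
// passed, e.g. `go build -tags avx,avx2 .`, so the AVX2/FMA compiler
// flags apply only to builds that request them. Note that -mfma and
// -mf16c also need to be allowlisted via CGO_CFLAGS_ALLOW (see the
// AVX2 build steps below).
package llama

/*
#cgo CFLAGS: -mavx -mavx2 -mfma -mf16c
#cgo CXXFLAGS: -mavx -mavx2 -mfma -mf16c
*/
import "C"

// BuildVariant reports which CPU-feature variant this file was compiled for.
func BuildVariant() string { return "avx2" }
```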
Extra build steps are required for CUDA and ROCm on Windows since `nvcc` and `hipcc` both require using msvc as the host compiler. For these, shared libraries are created:

- `ggml_cuda.dll` on Windows or `ggml_cuda.so` on Linux
- `ggml_hipblas.dll` on Windows or `ggml_hipblas.so` on Linux
Note: it's important that memory is allocated and freed by the same compiler (e.g. entirely by code compiled with msvc or mingw). Issues from this should be rare, but there are some places where pointers are returned by the CUDA or HIP runtimes and freed elsewhere, causing a crash. In a future change the same runtime should be used in both cases to avoid crashes.
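
A minimal cgo sketch of that same-allocator rule (illustrative only, not code from this package): a buffer allocated by one C runtime's malloc is released through a free function compiled against that same runtime, rather than by whichever allocator the caller happens to link.

```go
package main

/*
#include <stdlib.h>
#include <string.h>

// Allocate a copy of s with this compilation unit's malloc.
static char *copy_string(const char *s) {
	char *p = malloc(strlen(s) + 1);
	if (p != NULL) {
		strcpy(p, s);
	}
	return p;
}

// Release a buffer from copy_string with the matching free. Handing the
// pointer to a free() from a differently built runtime (e.g. msvc vs.
// mingw) can corrupt the heap and crash.
static void release_string(char *p) {
	free(p);
}
*/
import "C"

import (
	"fmt"
	"unsafe"
)

func main() {
	cs := C.CString("hello")
	defer C.free(unsafe.Pointer(cs)) // C.CString allocates with C's malloc

	p := C.copy_string(cs)
	defer C.release_string(p) // freed by the same runtime that allocated it
	fmt.Println(C.GoString(p))
}
```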
## Building

```shell
go build .
```
### AVX

```shell
go build -tags avx .
```
### AVX2

```shell
# go doesn't recognize `-mfma` as a valid compiler flag
# see https://github.com/golang/go/issues/17895
go env -w "CGO_CFLAGS_ALLOW=-mfma|-mf16c"
go env -w "CGO_CXXFLAGS_ALLOW=-mfma|-mf16c"
go build -tags=avx,avx2 .
```
### Linux

#### CUDA

Install the CUDA toolkit v11.3.1:

```shell
make ggml_cuda.so
go build -tags avx,cuda .
```
#### ROCm

Install ROCm 5.7.1:

```shell
make ggml_hipblas.so
go build -tags avx,rocm .
```
### Windows

Download w64devkit for a simple MinGW development environment.
#### CUDA

Install the CUDA toolkit v11.3.1, then build the CUDA code:

```shell
make ggml_cuda.dll
go build -tags avx,cuda .
```
#### ROCm

Install ROCm 5.7.1.

```shell
make ggml_hipblas.dll
go build -tags avx,rocm .
```
## Building runners

```shell
# build all runners for this platform
make -j
```
## Syncing with llama.cpp

To update this package to the latest llama.cpp code, use the `sync.sh` script:

```shell
./sync.sh ../../llama.cpp
```