llama

This package integrates the llama.cpp library as a Go package and makes it easy to build with tags for different CPU and GPU targets.

Supported:

  • CPU
  • avx, avx2
  • macOS Metal
  • Windows CUDA
  • Windows ROCm
  • Linux CUDA
  • Linux ROCm
  • Llava
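
Build tags map onto cgo compiler directives in the package's Go sources. As a rough illustration (not the package's actual source), a tag-gated cgo preamble can look like the sketch below; this is also why the AVX2 build later in this README has to add -mfma and -mf16c to Go's cgo flag allow-list.

package llama

// Illustrative sketch only: building with `go build -tags avx,avx2` compiles
// in the matching CPU flags, while a plain `go build .` leaves them out.

/*
#cgo avx CFLAGS: -mavx
#cgo avx2 CFLAGS: -mavx2 -mfma -mf16c
*/
import "C"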

Extra build steps are required for CUDA and ROCm on Windows, since nvcc and hipcc both require msvc as the host compiler. For these, the following shared libraries are created:

  • ggml_cuda.dll on Windows or ggml_cuda.so on Linux
  • ggml_hipblas.dll on Windows or ggml_hipblas.so on Linux

Note: it's important that memory is allocated and freed by the same compiler (e.g. entirely by code compiled with msvc or mingw). Issues from this should be rare, but there are some places where pointers are returned by the CUDA or HIP runtimes and freed elsewhere, causing a crash. In a future change, the same runtime should be used in both cases to avoid crashes.
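
As a minimal, generic cgo illustration of that rule (not code from this package): whatever allocates a buffer should also release it through the matching runtime.

package main

/*
#include <stdlib.h>
*/
import "C"

import "unsafe"

func main() {
    // C.CString allocates with the C runtime compiled into this binary, so it
    // must be released with the matching C.free. Freeing it through an
    // allocator from a differently built runtime (e.g. an msvc-built DLL when
    // the rest of the binary was built with mingw) risks heap corruption.
    s := C.CString("hello")
    defer C.free(unsafe.Pointer(s))
    println(C.GoString(s))
}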

Building

go build .

AVX

go build -tags avx .

AVX2

# go doesn't recognize `-mfma` as a valid compiler flag
# see https://github.com/golang/go/issues/17895
go env -w "CGO_CFLAGS_ALLOW=-mfma|-mf16c"
go env -w "CGO_CXXFLAGS_ALLOW=-mfma|-mf16c"
go build -tags=avx,avx2 .

Linux

CUDA

Install the CUDA toolkit v11.3.1:

make ggml_cuda.so
go build -tags avx,cuda .

ROCm

Install ROCm 5.7.1:

make ggml_hipblas.so
go build -tags avx,rocm .

Windows

Download w64devkit for a simple MinGW development environment.

CUDA

Install the CUDA toolkit v11.3.1, then build the CUDA code:

make ggml_cuda.dll
go build -tags avx,cuda .

ROCm

Install ROCm 5.7.1, then build the ROCm code:

make ggml_hipblas.dll
go build -tags avx,rocm .

Building runners

# build all runners for this platform
make -j

Syncing with llama.cpp

To update this package to the latest llama.cpp code, use the sync.sh script:

./sync.sh ../../llama.cpp