ollama/llama
Gabe Goodhart f2890a4494
IBM granite/granitemoe architecture support (#6760)
* fix(ext_server): Port llama.cpp sampling refactors to ext_server

This was a fairly large changeset. I closely followed the changes here:
df270ef745

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(server.cpp): Refactor server.cpp logging for llama.cpp overhaul

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Bump llama.cpp to the latest master with `granite` support

This does not yet have granite MoE support, but that can come in a
follow-up PR.

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(patches): Update all patches (except solar-pro) to work with bumped llama.cpp

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(solar): Update solar patch for llama.cpp bump

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat(llama.cpp): Bump llama.cpp for granitemoe support

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat(llama.cpp): Bump llama.cpp for granitemoe support

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(solar): Update the solar-pro patch for latest llama.cpp bump

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat(llama.cpp): Bump to the latest master of llama.cpp

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(patches): Update all patches for latest bump

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat(llama): Always run sync.sh from the right directory

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(llama/patches): Update llama patches

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat(llama)!: Rough sync with llama.cpp submodule

There are a number of changes that will need to be propagated to llama.go
before any of this works!

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(llama/patches): Add a patch and update for missing ggml-impl.h include

This include is where the ggml_cgraph struct is defined. It is included in
many of the .c files to provide the definition behind the forward
declaration in ggml.h. It seems that with the subset of code included here,
the include was somehow lost (or out of order) when building, so adding
this include to llama.cpp fixes the missing definition.

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(llama/sync): Add missing ggml-cpu-impl.h copy-over in sync.sh

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(llama): Add missing log.cpp

This was added as part of the logging overhaul done in llama.cpp

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(llama): Overhaul use of sampling module for llama.cpp changes

The changes here follow the big llama.cpp sampling refactor in
https://github.com/ggerganov/llama.cpp/pull/9294

The sampling functionality is now split into a base interface
(llama_sampler) and a generation implementation (gpt_sampler), and the code
here reflects that. Since the sampling.h/sampling.cpp code uses C++ STL
headers, the sampling_ext.[h|cpp] wrapper is maintained so that Go can
access a pure-C interface (a rough illustrative sketch follows this entry).

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
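
As a rough illustration of why a pure-C wrapper is useful here (the my_sampler names below are hypothetical stubs, not the real sampling_ext API): cgo can only bind to C declarations, so the C++ sampler implementation is kept behind an opaque C handle and plain C functions.

package main

/*
// A pure-C surface: only C types cross this boundary, so cgo can bind to it
// even though the real implementation would be C++ and use STL containers
// internally. Everything here is a hypothetical stub for illustration.
typedef struct my_sampler my_sampler;

struct my_sampler { int last_token; };

static struct my_sampler stub = {42};

static my_sampler* my_sampler_init(void) { return &stub; }
static int my_sampler_sample(my_sampler* s) { return s->last_token; }
*/
import "C"

import "fmt"

func main() {
	// Go only ever sees the opaque C handle and plain C ints.
	s := C.my_sampler_init()
	fmt.Println(int(C.my_sampler_sample(s))) // prints 42
}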

* fix(llama): Fix the impl of SampleTokenGreedy for new sampling

I don't think this method is currently used, so it could probably just be
removed so that all sampling goes through the GPT interface, but in the
interest of doing no harm, this should keep the method working as expected.

Branch: IBMGraniteArchitectureSupport

* fix(llama): Remove unused SampleTokenGreedy

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(sync): Remove bash-specific change to sync.sh

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* chore(gofumpt): Format on llama.go to pass linting

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(llm): Fix missing <thread> include in ext_server

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(llama): Remove TODO about grammar_first

This feature was not used or needed previously, so it should be fine
without plumbing it through now.

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(llama): Better naming for sampling wrapper and args

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(llama): Fix patch 05 to use new wrapper api and re-sync

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* runner: Flush pending responses before returning

If there are any pending responses (such as from potential stop
tokens), then we should send them back before ending the sequence.
Otherwise, we can be missing tokens at the end of a response. A
minimal sketch of the idea follows this entry.

Fixes #6707
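
A minimal sketch of the flush-before-return idea, using hypothetical types rather than the actual runner code: any text buffered while checking for partial stop-token matches is sent out before the sequence is closed.

package main

import (
	"fmt"
	"strings"
)

// sequence is a hypothetical stand-in for the runner's per-request state.
type sequence struct {
	pendingResponses []string      // text held back while checking for stop tokens
	responses        chan string   // streamed back to the caller
	quit             chan struct{} // closed if the request is cancelled
}

// flushPending sends any buffered text before the sequence ends so the tail
// of a response is not silently dropped when generation stops.
func flushPending(seq *sequence) bool {
	if len(seq.pendingResponses) == 0 {
		return true
	}
	content := strings.Join(seq.pendingResponses, "")
	seq.pendingResponses = nil

	select {
	case seq.responses <- content:
		return true
	case <-seq.quit:
		return false
	}
}

func main() {
	seq := &sequence{
		pendingResponses: []string{"Hel", "lo"},
		responses:        make(chan string, 1),
		quit:             make(chan struct{}),
	}
	flushPending(seq)
	fmt.Println(<-seq.responses) // prints "Hello"
}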

* fix(llama/sampling): Use gpt_sampler with a forward declaration

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(llama): Remove unnecessary patch for gguf impl header

This was caused by an earlier mistake in the embeddings patch that was
dereferencing the pointer instead of using the wrapper API.

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(llm): Remove use of deprecated --log-disable flag

Branch: IBMGraniteArchitectureSupport

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

---------

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2024-10-17 11:59:52 -07:00
ggml-cuda IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
llamafile Re-introduce the llama package (#5034) 2024-10-08 08:53:54 -07:00
make Re-introduce the llama package (#5034) 2024-10-08 08:53:54 -07:00
patches IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
runner IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
.gitignore Re-introduce the llama package (#5034) 2024-10-08 08:53:54 -07:00
base64.hpp Re-introduce the llama package (#5034) 2024-10-08 08:53:54 -07:00
build-info.cpp IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
clip.cpp IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
clip.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
common.cpp IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
common.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
Dockerfile Re-introduce the llama package (#5034) 2024-10-08 08:53:54 -07:00
ggml-aarch64.c IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-aarch64.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-alloc.c IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-alloc.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-backend-impl.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-backend.c IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-backend.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-blas.cpp IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-blas.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-common.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-cpu-impl.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-cuda.cu IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-cuda.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-impl.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-metal.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-metal.metal IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-metal_darwin_arm64.m IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-quants.c IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml-quants.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml.c IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
ggml.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
json-schema-to-grammar.cpp IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
json-schema-to-grammar.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
json.hpp Re-introduce the llama package (#5034) 2024-10-08 08:53:54 -07:00
llama-grammar.cpp IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
llama-grammar.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
llama-impl.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
llama-sampling.cpp IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
llama-sampling.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
llama-vocab.cpp IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
llama-vocab.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
llama.cpp IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
llama.go IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
llama.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
llama_darwin.c Re-introduce the llama package (#5034) 2024-10-08 08:53:54 -07:00
llama_darwin.go Re-introduce the llama package (#5034) 2024-10-08 08:53:54 -07:00
llama_test.go Re-introduce the llama package (#5034) 2024-10-08 08:53:54 -07:00
llava.cpp IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
llava.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
log.cpp IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
log.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
Makefile Fix build leakages (#7141) 2024-10-08 13:04:59 -07:00
README.md Re-introduce the llama package (#5034) 2024-10-08 08:53:54 -07:00
sampling.cpp IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
sampling.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
sampling_ext.cpp IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
sampling_ext.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
sgemm.cpp IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
sgemm.h Re-introduce the llama package (#5034) 2024-10-08 08:53:54 -07:00
stb_image.h Re-introduce the llama package (#5034) 2024-10-08 08:53:54 -07:00
sync.sh IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
unicode-data.cpp IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
unicode-data.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
unicode.cpp IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00
unicode.h IBM granite/granitemoe architecture support (#6760) 2024-10-17 11:59:52 -07:00

llama

This package integrates the llama.cpp library as a Go package and makes it easy to build it with tags for different CPU and GPU processors.

Supported:

  • CPU
  • avx, avx2
  • macOS Metal
  • Windows CUDA
  • Windows ROCm
  • Linux CUDA
  • Linux ROCm
  • Llava

Extra build steps are required for CUDA and ROCm on Windows since nvcc and hipcc both require using msvc as the host compiler. For these, shared libraries are created:

  • ggml_cuda.dll on Windows or ggml_cuda.so on Linux
  • ggml_hipblas.dll on Windows or ggml_hipblas.so on Linux

Note: it's important that memory is allocated and freed by the same compiler (e.g. entirely by code compiled with msvc or mingw). Issues from this should be rare, but there are some places where pointers are returned by the CUDA or HIP runtimes and freed elsewhere, causing a crash. In a future change the same runtime should be used in both cases to avoid crashes.
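
As a minimal illustration of the general principle (hypothetical example, not code from this package): memory obtained through one C runtime should be released with that same runtime's matching free; handing the pointer to an allocator built with a different compiler is the failure mode described above.

package main

/*
#include <stdlib.h>
*/
import "C"

func main() {
	// Allocate with the C runtime that this cgo unit is linked against...
	buf := C.malloc(64)
	// ...and release it with the matching free from that same runtime.
	// Handing this pointer to a free() from a differently compiled runtime
	// (e.g. msvc vs. mingw) is the kind of mismatch that can crash.
	C.free(buf)
}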

Building

go build .

AVX

go build -tags avx .

AVX2

# go doesn't recognize `-mfma` as a valid compiler flag
# see https://github.com/golang/go/issues/17895
go env -w "CGO_CFLAGS_ALLOW=-mfma|-mf16c"
go env -w "CGO_CXXFLAGS_ALLOW=-mfma|-mf16c"
go build -tags=avx,avx2 .

Linux

CUDA

Install the CUDA toolkit v11.3.1:

make ggml_cuda.so
go build -tags avx,cuda .

ROCm

Install ROCm:

make ggml_hipblas.so
go build -tags avx,rocm .

Windows

Download w64devkit for a simple MinGW development environment.

CUDA

Install the CUDA toolkit v11.3.1, then build the CUDA code:

make ggml_cuda.dll
go build -tags avx,cuda .

ROCm

Install ROCm 5.7.1.

make ggml_hipblas.dll
go build -tags avx,rocm .

Building runners

# build all runners for this platform
make -j

Syncing with llama.cpp

To update this package to the latest llama.cpp code, use the sync.sh script:

./sync.sh ../../llama.cpp