Re-introduce the llama package (#5034)
* Re-introduce the llama package

  This PR brings back the llama package, making it possible to call llama.cpp
  and ggml APIs from Go directly via CGo. This has a few advantages:

  - C APIs can be called directly from Go without needing to use the previous
    "server" REST API
  - On macOS and for CPU builds on Linux and Windows, Ollama can be built
    without a `go generate ./...` step, making it easy to get up and running
    to hack on parts of Ollama that don't require fast inference
  - Faster build times for AVX, AVX2, CUDA, and ROCm (a full build of all
    runners takes <5 min on a fast CPU)
  - No git submodule, making it easier to clone and build from source

  This is a big PR, but much of it is vendored code, except for:

  - llama.go: CGo bindings
  - example/: a simple example of running inference
  - runner/: a subprocess server designed to replace the llm/ext_server package
  - Makefile: an as-minimal-as-possible Makefile to build the runner package
    for different targets (cpu, avx, avx2, cuda, rocm)

  Co-authored-by: Jesse Gross <jesse@ollama.com>
  Co-authored-by: Daniel Hiltgen <daniel@ollama.com>

* cache: Clear old KV cache entries when evicting a slot

  When forking a cache entry, if no empty slots are available we evict the
  least recently used one and copy over the KV entries from the closest
  match. However, this copy does not overwrite existing values but only adds
  new ones, so we need to clear the old slot first.

  This change fixes two issues:

  - The KV cache fills up and runs out of space even though we think we are
    managing it correctly
  - Performance gets worse over time as we use new cache entries that are not
    hot in the processor caches

* doc: explain golang objc linker warning (#6830)

* llama: gather transitive dependencies for rocm for dist packaging (#6848)

* Refine go server makefiles to be more DRY (#6924)

  This breaks up the monolithic Makefile for the Go based runners into a set
  of utility files as well as recursive Makefiles for the runners. Files
  starting with the name "Makefile" are buildable, while files that end with
  ".make" are utilities to include in other Makefiles. This reduces the
  amount of nearly identical targets and helps set a pattern for future
  community contributions for new GPU runner architectures.

  When we are ready to switch over to the Go runners, these files should move
  to the top of the repo, and we should add targets for the main CLI, as well
  as a helper "install" target (put all the built binaries on the local
  system in a runnable state) and a "dist" target (generate the various
  tar/zip files for distribution) for local developer use.

* llama: don't create extraneous directories (#6988)

* llama: Exercise the new build in CI (#6989)

  Wire up some basic sanity testing in CI for the Go runner. GPU runners are
  not covered yet.

* llama: Refine developer docs for Go server (#6842)

  This enhances the documentation for development, focusing on the new Go
  server. After we complete the transition, further doc refinements can
  remove the "transition" discussion.

* runner.go: Allocate batches for all sequences during init

  We should tell the model that we could have full batches for all sequences.
  We already do this when we allocate the batches, but it was missed during
  initialization.

* llama.go: Don't return nil from Tokenize on zero length input

  Potentially receiving nil in a non-error condition is surprising to most
  callers; it's better to return an empty slice.

* runner.go: Remove stop tokens from cache

  If the last token is EOG then we don't return it and it isn't present in
  the cache (because it was never submitted to Decode). This works well for
  extending the cache entry with a new sequence.

  However, for multi-token stop sequences, we won't return any of the tokens,
  but all but the last one will be in the cache. This means that when the
  conversation continues, the cache will contain tokens that don't overlap
  with the new prompt. This works (we will pick up the portion where there is
  overlap), but it causes unnecessary cache thrashing because we will fork
  the original cache entry as it is not a perfect match. By trimming the
  cache to the tokens that we actually return, this issue can be avoided.

* runner.go: Simplify flushing of pending tokens

* runner.go: Update TODOs

* runner.go: Don't panic when processing sequences

  If there is an error processing a sequence, we should return a clean HTTP
  error back to Ollama rather than panicking. This will make us more
  resilient to transient failures. Panics can still occur during startup, as
  there is no way to serve requests if that fails.

  Co-authored-by: jmorganca <jmorganca@gmail.com>

* runner.go: More accurately capture timings

  Currently prompt processing time doesn't capture the time it takes to
  tokenize the input, only decoding time. We should capture the full process
  to more accurately reflect reality. This is especially true once we start
  processing images, where the initial processing can take significant time.
  This is also more consistent with the existing C++ runner.

* runner.go: Support for vision models

  In addition to bringing feature parity with the C++ runner, this also
  incorporates several improvements:

  - Cache prompting works with images, avoiding the need to re-decode
    embeddings for every message in a conversation
  - Parallelism is supported, avoiding the need to restrict to one sequence
    at a time. (Though for now Ollama will not schedule them while we might
    need to fall back to the old runner.)

  Co-authored-by: jmorganca <jmorganca@gmail.com>

* runner.go: Move Unicode checking code and add tests

* runner.go: Export external cache members

  Runner and cache are in the same package, so the change doesn't affect
  anything, but it is more internally consistent.

* runner.go: Image embedding cache

  Generating embeddings from images can take significant time (on my machine,
  between 100ms and 8s depending on the model). Although we already cache the
  result of decoding these images, the embeddings need to be regenerated
  every time. This is not necessary if we get the same image over and over
  again, for example during a conversation. This currently uses a very small
  cache with a very simple algorithm, but it is easy to improve as warranted.

* llama: catch up on patches

  Carry forward solar-pro and cli-unicode patches

* runner.go: Don't re-allocate memory for every batch

  We can reuse memory allocated from batch to batch since the batch size is
  fixed. This both saves the cost of reallocation and keeps the cache lines
  hot. This results in a roughly 1% performance improvement for token
  generation with Nvidia GPUs on Linux.

* runner.go: Default to classic input cache policy

  The input cache that is part of the Go runner implemented a cache policy
  that aims to maximize hit rate in both single- and multi-user scenarios.
  When there is a cache hit, the response is very fast. However, performance
  is actually slower when there is an input cache miss, due to worse GPU VRAM
  locality.

  This means that performance is generally better overall for multi-user
  scenarios (better input cache hit rate; locality was relatively poor
  already), but worse for single users (input cache hit rate is about the
  same; locality is now worse).

  This defaults the policy back to the old one to avoid a regression, but
  keeps the new one available through the environment variable
  OLLAMA_MULTIUSER_CACHE. This is left undocumented, as the goal is to
  improve this in the future to get the best of both worlds without user
  configuration.

  For inputs that result in cache misses, on Nvidia/Linux this change
  improves performance by 31% for prompt processing and 13% for token
  generation.

* runner.go: Increase size of response channel

  Generally the CPU can easily keep up with handling responses that are
  generated, but there's no reason not to let generation continue and handle
  things in larger batches if needed.

* llama: Add CI to verify all vendored changes have patches (#7066)

  Make sure we don't accidentally merge changes in the vendored code that
  aren't also reflected in the patches.

* llama: adjust clip patch for mingw utf-16 (#7065)

  * llama: adjust clip patch for mingw utf-16
  * llama: ensure static linking of runtime libs

  Avoid runtime dependencies on non-standard libraries

* runner.go: Enable llamafile (all platforms) and BLAS (Mac OS)

  These are two features shown in llama.cpp's system info that are currently
  different between the two runners. On my test systems the performance
  difference is very small to negligible, but it is probably still good to
  equalize the features.

* llm: Don't add BOS/EOS for tokenize requests

  This is consistent with what server.cpp currently does. It affects things
  like token processing counts for embedding requests.

* runner.go: Don't cache prompts for embeddings

  Our integration with server.cpp implicitly disables prompt caching because
  it is not part of the JSON object being parsed; this makes the Go runner
  behave similarly.

  Prompt caching has been seen to affect the results of text completions on
  certain hardware. The results are not wrong either way, but they are
  non-deterministic. However, embeddings seem to be affected even on hardware
  that does not show this behavior for completions. For now, it is best to
  maintain consistency with the existing behavior.

* runner.go: Adjust debug log levels

  Add system info printed at startup and quiet down noisier logging.

* llama: fix compiler flag differences (#7082)

  Adjust the flags for the new Go server to more closely match the generate
  flow.

* llama: refine developer docs (#7121)

* llama: doc and example clean up (#7122)

  * llama: doc and example clean up
  * llama: Move new dockerfile into llama dir

  Temporary home until we fully transition to the Go server

  * llama: runner doc cleanup

* llama.go: Add description for Tokenize error case

---------

Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
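As a rough illustration of the Tokenize convention described above (returning an empty slice rather than nil on zero-length input), here is a caller-side sketch in Go. The function name and body are hypothetical stand-ins, not the actual llama package bindings:

```go
package main

import "fmt"

// tokenize is a hypothetical stand-in for the llama package's Tokenize
// binding; it only illustrates the return convention described above.
func tokenize(text string) ([]int, error) {
	if text == "" {
		// Return an empty slice rather than nil so callers can treat
		// zero-length input like any other successful result.
		return []int{}, nil
	}
	// A real implementation would call into llama.cpp via CGo here.
	return []int{1, 2, 3}, nil
}

func main() {
	toks, err := tokenize("")
	if err != nil {
		panic(err)
	}
	// len() and range work uniformly; no nil check is needed.
	fmt.Println("token count:", len(toks))
}
```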
parent de982616f1
commit 96efd9052f

289 changed files with 166141 additions and 164 deletions
.gitattributes (vendored, 2 changes)

@@ -1,3 +1,5 @@
 llm/ext_server/* linguist-vendored
+llama/**/*.{cpp,hpp,h,c,cu,cuh,m} linguist-vendored
+
 * text=auto
 *.go text eol=lf
.github/workflows/test.yaml (vendored, 54 changes)

@@ -24,6 +24,7 @@ jobs:
       GENERATE: ${{ steps.changes.outputs.GENERATE }}
       GENERATE_CUDA: ${{ steps.changes.outputs.GENERATE_CUDA }}
       GENERATE_ROCM: ${{ steps.changes.outputs.GENERATE_ROCM }}
+      RUNNERS: ${{ steps.changes.outputs.RUNNERS }}
     steps:
       - uses: actions/checkout@v4
         with:

@@ -41,6 +42,7 @@ jobs:
           echo GENERATE=$(changed 'llm/llama.cpp' 'llm/patches/**' 'llm/ext_server/**' 'llm/generate/**')
           echo GENERATE_CUDA=$(changed 'llm/llama.cpp' 'llm/patches/**' 'llm/ext_server/**' 'llm/generate/**')
           echo GENERATE_ROCM=$(changed 'llm/llama.cpp' 'llm/patches/**' 'llm/ext_server/**' 'llm/generate/**')
+          echo RUNNERS=$(changed 'llama/**')
         } >>$GITHUB_OUTPUT

   generate:

@@ -213,6 +215,46 @@ jobs:
     env:
       OLLAMA_SKIP_CPU_GENERATE: '1'

+  runners:
+    needs: [changes]
+    if: ${{ needs.changes.outputs.RUNNERS == 'True' }}
+    strategy:
+      matrix:
+        os: [ubuntu-latest, macos-latest, windows-2019]
+        arch: [amd64, arm64]
+        exclude:
+          - os: ubuntu-latest
+            arch: arm64
+          - os: windows-2019
+            arch: arm64
+    runs-on: ${{ matrix.os }}
+    env:
+      GOARCH: ${{ matrix.arch }}
+      ARCH: ${{ matrix.arch }}
+      CGO_ENABLED: '1'
+    steps:
+      - uses: actions/checkout@v4
+      - uses: actions/setup-go@v5
+        with:
+          go-version-file: go.mod
+          cache: true
+      - run: go get ./...
+      - name: 'Build Windows Go Runners'
+        if: ${{ startsWith(matrix.os, 'windows-') }}
+        run: |
+          $gopath=(get-command go).source | split-path -parent
+          $gccpath=(get-command gcc).source | split-path -parent
+          & "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\Launch-VsDevShell.ps1"
+          cd $env:GITHUB_WORKSPACE
+          $env:CMAKE_SYSTEM_VERSION="10.0.22621.0"
+          $env:PATH="$gopath;$gccpath;$env:PATH"
+          echo $env:PATH
+          make -C llama -j 4
+      - name: 'Build Unix Go Runners'
+        if: ${{ ! startsWith(matrix.os, 'windows-') }}
+        run: make -C llama -j 4
+      - run: go build .
+
   lint:
     strategy:
       matrix:

@@ -280,3 +322,15 @@ jobs:
       - run: go generate ./...
       - run: go build
       - run: go test -v ./...
+
+  patches:
+    needs: [changes]
+    if: ${{ needs.changes.outputs.RUNNERS == 'True' }}
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+        with:
+          submodules: recursive
+      - name: Verify patches carry all the changes
+        run: |
+          cd llama && ./sync.sh && git diff --compact-summary --exit-code .
.gitignore (vendored, 1 change)

@@ -5,7 +5,6 @@
 .swp
 dist
 ollama
-ggml-metal.metal
 .cache
 *.exe
 .idea
Dockerfile (12 changes)

@@ -110,9 +110,6 @@ ARG CGO_CFLAGS
 ENV GOARCH=amd64
 WORKDIR /go/src/github.com/ollama/ollama/llm/generate

-FROM --platform=linux/amd64 cpu-builder-amd64 AS static-build-amd64
-RUN --mount=type=cache,target=/root/.ccache \
-    OLLAMA_CPU_TARGET="static" bash gen_linux.sh
 FROM --platform=linux/amd64 cpu-builder-amd64 AS cpu-build-amd64
 RUN --mount=type=cache,target=/root/.ccache \
     OLLAMA_SKIP_STATIC_GENERATE=1 OLLAMA_CPU_TARGET="cpu" bash gen_linux.sh

@@ -135,9 +132,6 @@ ARG CGO_CFLAGS
 ENV GOARCH=arm64
 WORKDIR /go/src/github.com/ollama/ollama/llm/generate

-FROM --platform=linux/arm64 cpu-builder-arm64 AS static-build-arm64
-RUN --mount=type=cache,target=/root/.ccache \
-    OLLAMA_CPU_TARGET="static" bash gen_linux.sh
 FROM --platform=linux/arm64 cpu-builder-arm64 AS cpu-build-arm64
 RUN --mount=type=cache,target=/root/.ccache \
     OLLAMA_SKIP_STATIC_GENERATE=1 OLLAMA_CPU_TARGET="cpu" bash gen_linux.sh

@@ -148,7 +142,6 @@ FROM --platform=linux/amd64 cpu-build-amd64 AS build-amd64
 ENV CGO_ENABLED=1
 WORKDIR /go/src/github.com/ollama/ollama
 COPY . .
-COPY --from=static-build-amd64 /go/src/github.com/ollama/ollama/llm/build/ llm/build/
 COPY --from=cpu_avx-build-amd64 /go/src/github.com/ollama/ollama/build/ build/
 COPY --from=cpu_avx2-build-amd64 /go/src/github.com/ollama/ollama/build/ build/
 COPY --from=cuda-11-build-amd64 /go/src/github.com/ollama/ollama/dist/ dist/

@@ -171,7 +164,6 @@ ENV CGO_ENABLED=1
 ARG GOLANG_VERSION
 WORKDIR /go/src/github.com/ollama/ollama
 COPY . .
-COPY --from=static-build-arm64 /go/src/github.com/ollama/ollama/llm/build/ llm/build/
 COPY --from=cuda-11-build-runner-arm64 /go/src/github.com/ollama/ollama/dist/ dist/
 COPY --from=cuda-11-build-runner-arm64 /go/src/github.com/ollama/ollama/build/ build/
 COPY --from=cuda-12-build-runner-arm64 /go/src/github.com/ollama/ollama/dist/ dist/

@@ -191,7 +183,7 @@ FROM dist-$TARGETARCH as dist


 # Optimized container images do not cary nested payloads
-FROM --platform=linux/amd64 static-build-amd64 AS container-build-amd64
+FROM --platform=linux/amd64 cpu-builder-amd64 AS container-build-amd64
 WORKDIR /go/src/github.com/ollama/ollama
 COPY . .
 ARG GOFLAGS

@@ -199,7 +191,7 @@ ARG CGO_CFLAGS
 RUN --mount=type=cache,target=/root/.ccache \
     go build -trimpath -o dist/linux-amd64/bin/ollama .

-FROM --platform=linux/arm64 static-build-arm64 AS container-build-arm64
+FROM --platform=linux/arm64 cpu-builder-arm64 AS container-build-arm64
 WORKDIR /go/src/github.com/ollama/ollama
 COPY . .
 ARG GOFLAGS
@@ -1,5 +1,8 @@
 # Development

+> [!IMPORTANT]
+> The `llm` package that loads and runs models is being updated to use a new [Go runner](#transition-to-go-runner): this should only impact a small set of PRs however it does change how the project is built.
+
 Install required tools:

 - cmake version 3.24 or higher

@@ -166,4 +169,182 @@ Follow the instructions at https://www.msys2.org/wiki/arm64/ to set up an arm64
 pacman -S mingw-w64-clang-aarch64-clang mingw-w64-clang-aarch64-gcc-compat mingw-w64-clang-aarch64-make make
 ```

 You will need to ensure your PATH includes go, cmake, gcc and clang mingw32-make to build ollama from source. (typically `C:\msys64\clangarm64\bin\`)
+
+
+## Transition to Go runner
+
+The Ollama team is working on moving to a new Go based runner that loads and runs models in a subprocess to replace the previous code under `ext_server`. During this transition period, this new Go runner is "opt in" at build time, and requires using a different approach to build.
+
+After the transition to use the Go server exclusively, both `make` and `go generate` will build the Go runner.
+
+Install required tools:
+
+- go version 1.22 or higher
+- gcc version 11.4.0 or higher
+
+### MacOS
+
+[Download Go](https://go.dev/dl/)
+
+Optionally enable debugging and more verbose logging:
+
+```bash
+# At build time
+export CGO_CFLAGS="-g"
+
+# At runtime
+export OLLAMA_DEBUG=1
+```
+
+Get the required libraries and build the native LLM code: (Adjust the job count based on your number of processors for a faster build)
+
+```bash
+make -C llama -j 5
+```
+
+Then build ollama:
+
+```bash
+go build .
+```
+
+Now you can run `ollama`:
+
+```bash
+./ollama
+```
+
+#### Xcode 15 warnings
+
+If you are using Xcode newer than version 14, you may see a warning during `go build` about `ld: warning: ignoring duplicate libraries: '-lobjc'` due to Golang issue https://github.com/golang/go/issues/67799 which can be safely ignored. You can suppress the warning with `export CGO_LDFLAGS="-Wl,-no_warn_duplicate_libraries"`
+
+### Linux
+
+#### Linux CUDA (NVIDIA)
+
+_Your operating system distribution may already have packages for NVIDIA CUDA. Distro packages are often preferable, but instructions are distro-specific. Please consult distro-specific docs for dependencies if available!_
+
+Install `make`, `gcc` and `golang` as well as [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads)
+development and runtime packages.
+
+Typically the build scripts will auto-detect CUDA, however, if your Linux distro
+or installation approach uses unusual paths, you can specify the location by
+specifying an environment variable `CUDA_LIB_DIR` to the location of the shared
+libraries, and `CUDACXX` to the location of the nvcc compiler. You can customize
+a set of target CUDA architectures by setting `CMAKE_CUDA_ARCHITECTURES` (e.g. "50;60;70")
+
+Then generate dependencies: (Adjust the job count based on your number of processors for a faster build)
+
+```
+make -C llama -j 5
+```
+
+Then build the binary:
+
+```
+go build .
+```
+
+#### Linux ROCm (AMD)
+
+_Your operating system distribution may already have packages for AMD ROCm and CLBlast. Distro packages are often preferable, but instructions are distro-specific. Please consult distro-specific docs for dependencies if available!_
+
+Install [CLBlast](https://github.com/CNugteren/CLBlast/blob/master/doc/installation.md) and [ROCm](https://rocm.docs.amd.com/en/latest/) development packages first, as well as `make`, `gcc`, and `golang`.
+
+Typically the build scripts will auto-detect ROCm, however, if your Linux distro
+or installation approach uses unusual paths, you can specify the location by
+specifying an environment variable `ROCM_PATH` to the location of the ROCm
+install (typically `/opt/rocm`), and `CLBlast_DIR` to the location of the
+CLBlast install (typically `/usr/lib/cmake/CLBlast`). You can also customize
+the AMD GPU targets by setting AMDGPU_TARGETS (e.g. `AMDGPU_TARGETS="gfx1101;gfx1102"`)
+
+Then generate dependencies: (Adjust the job count based on your number of processors for a faster build)
+
+```
+make -C llama -j 5
+```
+
+Then build the binary:
+
+```
+go build .
+```
+
+ROCm requires elevated privileges to access the GPU at runtime. On most distros you can add your user account to the `render` group, or run as root.
+
+#### Advanced CPU Settings
+
+By default, running `make` will compile a few different variations
+of the LLM library based on common CPU families and vector math capabilities,
+including a lowest-common-denominator which should run on almost any 64 bit CPU
+somewhat slowly. At runtime, Ollama will auto-detect the optimal variation to
+load.
+
+Custom CPU settings are not currently supported in the new Go server build but will be added back after we complete the transition.
+
+#### Containerized Linux Build
+
+If you have Docker available, you can build linux binaries with `OLLAMA_NEW_RUNNERS=1 ./scripts/build_linux.sh` which has the CUDA and ROCm dependencies included. The resulting binary is placed in `./dist`
+
+### Windows
+
+The following tools are required as a minimal development environment to build CPU inference support.
+
+- Go version 1.22 or higher
+  - https://go.dev/dl/
+- Git
+  - https://git-scm.com/download/win
+- GCC and Make. There are multiple options on how to go about installing these tools on Windows. We have verified the following, but others may work as well:
+  - [MSYS2](https://www.msys2.org/)
+    - After installing, from an MSYS2 terminal, run `pacman -S mingw-w64-ucrt-x86_64-gcc make` to install the required tools
+  - Assuming you used the default install prefix for msys2 above, add `c:\msys64\ucrt64\bin` and `c:\msys64\usr\bin` to your environment variable `PATH` where you will perform the build steps below (e.g. system-wide, account-level, powershell, cmd, etc.)
+
+Then, build the `ollama` binary:
+
+```powershell
+$env:CGO_ENABLED="1"
+make -C llama -j 8
+go build .
+```
+
+#### GPU Support
+
+The GPU tools require the Microsoft native build tools. To build either CUDA or ROCm, you must first install MSVC via Visual Studio:
+
+- Make sure to select `Desktop development with C++` as a Workload during the Visual Studio install
+- You must complete the Visual Studio install and run it once **BEFORE** installing CUDA or ROCm for the tools to properly register
+- Add the location of the **64 bit (x64)** compiler (`cl.exe`) to your `PATH`
+- Note: the default Developer Shell may configure the 32 bit (x86) compiler which will lead to build failures. Ollama requires a 64 bit toolchain.
+
+#### Windows CUDA (NVIDIA)
+
+In addition to the common Windows development tools and MSVC described above:
+
+- [NVIDIA CUDA](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html)
+
+#### Windows ROCm (AMD Radeon)
+
+In addition to the common Windows development tools and MSVC described above:
+
+- [AMD HIP](https://www.amd.com/en/developer/resources/rocm-hub/hip-sdk.html)
+
+#### Windows arm64
+
+The default `Developer PowerShell for VS 2022` may default to x86 which is not what you want. To ensure you get an arm64 development environment, start a plain PowerShell terminal and run:
+
+```powershell
+import-module 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\Common7\\Tools\\Microsoft.VisualStudio.DevShell.dll'
+Enter-VsDevShell -Arch arm64 -vsinstallpath 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Community' -skipautomaticlocation
+```
+
+You can confirm with `write-host $env:VSCMD_ARG_TGT_ARCH`
+
+Follow the instructions at https://www.msys2.org/wiki/arm64/ to set up an arm64 msys2 environment. Ollama requires gcc and mingw32-make to compile, which is not currently available on Windows arm64, but a gcc compatibility adapter is available via `mingw-w64-clang-aarch64-gcc-compat`. At a minimum you will need to install the following:
+
+```
+pacman -S mingw-w64-clang-aarch64-clang mingw-w64-clang-aarch64-gcc-compat mingw-w64-clang-aarch64-make make
+```
+
+You will need to ensure your PATH includes go, cmake, gcc and clang mingw32-make to build ollama from source. (typically `C:\msys64\clangarm64\bin\`)
@@ -160,6 +160,8 @@ var (
 	SchedSpread = Bool("OLLAMA_SCHED_SPREAD")
 	// IntelGPU enables experimental Intel GPU detection.
 	IntelGPU = Bool("OLLAMA_INTEL_GPU")
+	// MultiUserCache optimizes prompt caching for multi-user scenarios
+	MultiUserCache = Bool("OLLAMA_MULTIUSER_CACHE")
 )

 func String(s string) func() string {

@@ -245,6 +247,7 @@ func AsMap() map[string]EnvVar {
 		"OLLAMA_ORIGINS":          {"OLLAMA_ORIGINS", Origins(), "A comma separated list of allowed origins"},
 		"OLLAMA_SCHED_SPREAD":     {"OLLAMA_SCHED_SPREAD", SchedSpread(), "Always schedule model across all GPUs"},
 		"OLLAMA_TMPDIR":           {"OLLAMA_TMPDIR", TmpDir(), "Location for temporary files"},
+		"OLLAMA_MULTIUSER_CACHE":  {"OLLAMA_MULTIUSER_CACHE", MultiUserCache(), "Optimize prompt caching for multi-user scenarios"},

 		// Informational
 		"HTTP_PROXY": {"HTTP_PROXY", String("HTTP_PROXY")(), "HTTP proxy"},
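For context, here is a minimal Go sketch of how a `Bool(...)` env-var helper like the one used above might behave. This is an illustrative assumption, not the actual envconfig implementation:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// Bool returns a getter reporting whether the named environment variable is
// set to a truthy value. It mirrors the MultiUserCache = Bool("OLLAMA_MULTIUSER_CACHE")
// pattern in the diff above, but is a simplified sketch of that idea.
func Bool(name string) func() bool {
	return func() bool {
		v, err := strconv.ParseBool(os.Getenv(name))
		return err == nil && v
	}
}

var MultiUserCache = Bool("OLLAMA_MULTIUSER_CACHE")

func main() {
	// OLLAMA_MULTIUSER_CACHE=1 ./app would print true here.
	fmt.Println("multi-user cache enabled:", MultiUserCache())
}
```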
@@ -42,7 +42,7 @@ func TestMultiModelConcurrency(t *testing.T) {
 		}
 		resp = [2][]string{
 			{"sunlight"},
-			{"england", "english", "massachusetts", "pilgrims", "british"},
+			{"england", "english", "massachusetts", "pilgrims", "british", "festival"},
 		}
 	)
 	var wg sync.WaitGroup

@@ -275,7 +275,7 @@ func DoGenerate(ctx context.Context, t *testing.T, client *api.Client, genReq ap
 				break
 			}
 		}
-		require.True(t, atLeastOne, "none of %v found in %s", anyResp, response)
+		require.True(t, atLeastOne, "%s: none of %v found in %s", genReq.Model, anyResp, response)
 		slog.Info("test pass", "model", genReq.Model, "prompt", genReq.Prompt, "contains", anyResp, "response", response)
 	case <-ctx.Done():
 		t.Error("outer test context done while waiting for generate")
llama/.gitignore (vendored, new file, 3 lines)

*.bin
*.gguf
build/
llama/Dockerfile (new file, 221 lines)

# Note: once we have fully transitioned to the Go server, this will replace the old Dockerfile at the top of the tree
ARG GOLANG_VERSION=1.22.5
ARG CMAKE_VERSION=3.22.1
ARG CUDA_VERSION_11=11.3.1
ARG CUDA_V11_ARCHITECTURES="50;52;53;60;61;62;70;72;75;80;86"
ARG CUDA_VERSION_12=12.4.0
ARG CUDA_V12_ARCHITECTURES="60;61;62;70;72;75;80;86;87;89;90;90a"
ARG ROCM_VERSION=6.1.2

### To create a local image for building linux binaries on mac or windows with efficient incremental builds
#
#   docker build --platform linux/amd64 -t builder-amd64 -f Dockerfile.new --target unified-builder-amd64 .
#   docker run --platform linux/amd64 --rm -it -v $(pwd):/go/src/github.com/ollama/ollama/ builder-amd64
#
### Then incremental builds will be much faster in this container
#
#   make -C llama -j 10 && go build -trimpath -o dist/linux-amd64/ollama .
#
FROM --platform=linux/amd64 rocm/dev-centos-7:${ROCM_VERSION}-complete AS unified-builder-amd64
ARG CMAKE_VERSION
ARG GOLANG_VERSION
ARG CUDA_VERSION_11
ARG CUDA_VERSION_12
COPY ./scripts/rh_linux_deps.sh /
ENV PATH /opt/rh/devtoolset-10/root/usr/bin:/usr/local/cuda/bin:$PATH
ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/lib64
ENV LIBRARY_PATH=/usr/local/cuda/lib64/stubs:/opt/amdgpu/lib64
RUN CMAKE_VERSION=${CMAKE_VERSION} GOLANG_VERSION=${GOLANG_VERSION} sh /rh_linux_deps.sh
RUN yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-rhel7.repo && \
    dnf clean all && \
    dnf install -y \
    zsh \
    cuda-$(echo ${CUDA_VERSION_11} | cut -f1-2 -d. | sed -e "s/\./-/g") \
    cuda-$(echo ${CUDA_VERSION_12} | cut -f1-2 -d. | sed -e "s/\./-/g")
# TODO intel oneapi goes here...
ENV GOARCH amd64
ENV CGO_ENABLED 1
WORKDIR /go/src/github.com/ollama/ollama/
ENTRYPOINT [ "zsh" ]

### To create a local image for building linux binaries on mac or linux/arm64 with efficient incremental builds
# Note: this does not contain jetson variants
#
#   docker build --platform linux/arm64 -t builder-arm64 -f Dockerfile.new --target unified-builder-arm64 .
#   docker run --platform linux/arm64 --rm -it -v $(pwd):/go/src/github.com/ollama/ollama/ builder-arm64
#
FROM --platform=linux/arm64 rockylinux:8 AS unified-builder-arm64
ARG CMAKE_VERSION
ARG GOLANG_VERSION
ARG CUDA_VERSION_11
ARG CUDA_VERSION_12
COPY ./scripts/rh_linux_deps.sh /
RUN CMAKE_VERSION=${CMAKE_VERSION} GOLANG_VERSION=${GOLANG_VERSION} sh /rh_linux_deps.sh
RUN yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/sbsa/cuda-rhel8.repo && \
    dnf config-manager --set-enabled appstream && \
    dnf clean all && \
    dnf install -y \
    zsh \
    cuda-toolkit-$(echo ${CUDA_VERSION_11} | cut -f1-2 -d. | sed -e "s/\./-/g") \
    cuda-toolkit-$(echo ${CUDA_VERSION_12} | cut -f1-2 -d. | sed -e "s/\./-/g")
ENV PATH /opt/rh/gcc-toolset-10/root/usr/bin:$PATH:/usr/local/cuda/bin
ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/lib64
ENV LIBRARY_PATH=/usr/local/cuda/lib64/stubs:/opt/amdgpu/lib64
ENV GOARCH amd64
ENV CGO_ENABLED 1
WORKDIR /go/src/github.com/ollama/ollama/
ENTRYPOINT [ "zsh" ]

FROM --platform=linux/amd64 unified-builder-amd64 AS runners-amd64
COPY . .
ARG OLLAMA_SKIP_CUDA_GENERATE
ARG OLLAMA_SKIP_CUDA_11_GENERATE
ARG OLLAMA_SKIP_CUDA_12_GENERATE
ARG OLLAMA_SKIP_ROCM_GENERATE
ARG CUDA_V11_ARCHITECTURES
ARG CUDA_V12_ARCHITECTURES
ARG OLLAMA_FAST_BUILD
RUN --mount=type=cache,target=/root/.ccache \
    if grep "^flags" /proc/cpuinfo|grep avx>/dev/null; then \
        make -C llama -j $(expr $(nproc) / 2 ) ; \
    else \
        make -C llama -j 5 ; \
    fi

FROM --platform=linux/arm64 unified-builder-arm64 AS runners-arm64
COPY . .
ARG OLLAMA_SKIP_CUDA_GENERATE
ARG OLLAMA_SKIP_CUDA_11_GENERATE
ARG OLLAMA_SKIP_CUDA_12_GENERATE
ARG CUDA_V11_ARCHITECTURES
ARG CUDA_V12_ARCHITECTURES
ARG OLLAMA_FAST_BUILD
RUN --mount=type=cache,target=/root/.ccache \
    make -C llama -j 8


# Intermediate stages used for ./scripts/build_linux.sh
FROM --platform=linux/amd64 centos:7 AS builder-amd64
ARG CMAKE_VERSION
ARG GOLANG_VERSION
COPY ./scripts/rh_linux_deps.sh /
RUN CMAKE_VERSION=${CMAKE_VERSION} GOLANG_VERSION=${GOLANG_VERSION} sh /rh_linux_deps.sh
ENV PATH /opt/rh/devtoolset-10/root/usr/bin:$PATH
ENV CGO_ENABLED 1
ENV GOARCH amd64
WORKDIR /go/src/github.com/ollama/ollama

FROM --platform=linux/amd64 builder-amd64 AS build-amd64
COPY . .
COPY --from=runners-amd64 /go/src/github.com/ollama/ollama/dist/ dist/
COPY --from=runners-amd64 /go/src/github.com/ollama/ollama/build/ build/
ARG GOFLAGS
ARG CGO_CFLAGS
ARG OLLAMA_SKIP_ROCM_GENERATE
RUN --mount=type=cache,target=/root/.ccache \
    go build -trimpath -o dist/linux-amd64/bin/ollama .
RUN cd dist/linux-$GOARCH && \
    tar --exclude runners -cf - . | pigz --best > ../ollama-linux-$GOARCH.tgz
RUN if [ -z ${OLLAMA_SKIP_ROCM_GENERATE} ] ; then \
    cd dist/linux-$GOARCH-rocm && \
    tar -cf - . | pigz --best > ../ollama-linux-$GOARCH-rocm.tgz ;\
    fi

FROM --platform=linux/arm64 rockylinux:8 AS builder-arm64
ARG CMAKE_VERSION
ARG GOLANG_VERSION
COPY ./scripts/rh_linux_deps.sh /
RUN CMAKE_VERSION=${CMAKE_VERSION} GOLANG_VERSION=${GOLANG_VERSION} sh /rh_linux_deps.sh
ENV PATH /opt/rh/gcc-toolset-10/root/usr/bin:$PATH
ENV CGO_ENABLED 1
ENV GOARCH arm64
WORKDIR /go/src/github.com/ollama/ollama

FROM --platform=linux/arm64 builder-arm64 AS build-arm64
COPY . .
COPY --from=runners-arm64 /go/src/github.com/ollama/ollama/dist/ dist/
COPY --from=runners-arm64 /go/src/github.com/ollama/ollama/build/ build/
ARG GOFLAGS
ARG CGO_CFLAGS
RUN --mount=type=cache,target=/root/.ccache \
    go build -trimpath -o dist/linux-arm64/bin/ollama .
RUN cd dist/linux-$GOARCH && \
    tar --exclude runners -cf - . | pigz --best > ../ollama-linux-$GOARCH.tgz

FROM --platform=linux/amd64 scratch AS dist-amd64
COPY --from=build-amd64 /go/src/github.com/ollama/ollama/dist/ollama-linux-*.tgz /
FROM --platform=linux/arm64 scratch AS dist-arm64
COPY --from=build-arm64 /go/src/github.com/ollama/ollama/dist/ollama-linux-*.tgz /
FROM dist-$TARGETARCH AS dist


# Optimized container images do not cary nested payloads
FROM --platform=linux/amd64 builder-amd64 AS container-build-amd64
WORKDIR /go/src/github.com/ollama/ollama
COPY . .
ARG GOFLAGS
ARG CGO_CFLAGS
RUN --mount=type=cache,target=/root/.ccache \
    go build -trimpath -o dist/linux-amd64/bin/ollama .

FROM --platform=linux/arm64 builder-arm64 AS container-build-arm64
WORKDIR /go/src/github.com/ollama/ollama
COPY . .
ARG GOFLAGS
ARG CGO_CFLAGS
RUN --mount=type=cache,target=/root/.ccache \
    go build -trimpath -o dist/linux-arm64/bin/ollama .

# For amd64 container images, filter out cuda/rocm to minimize size
FROM runners-amd64 AS runners-cuda-amd64
RUN rm -rf \
    ./dist/linux-amd64/lib/ollama/libggml_hipblas.so \
    ./dist/linux-amd64/lib/ollama/runners/rocm*

FROM runners-amd64 AS runners-rocm-amd64
RUN rm -rf \
    ./dist/linux-amd64/lib/ollama/libggml_cuda*.so \
    ./dist/linux-amd64/lib/ollama/libcu*.so* \
    ./dist/linux-amd64/lib/ollama/runners/cuda*

FROM --platform=linux/amd64 ubuntu:22.04 AS runtime-amd64
RUN apt-get update && \
    apt-get install -y ca-certificates && \
    rm -rf /var/lib/apt/lists/*
COPY --from=container-build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/bin/ /bin/
COPY --from=runners-cuda-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ /lib/

FROM --platform=linux/arm64 ubuntu:22.04 AS runtime-arm64
RUN apt-get update && \
    apt-get install -y ca-certificates && \
    rm -rf /var/lib/apt/lists/*
COPY --from=container-build-arm64 /go/src/github.com/ollama/ollama/dist/linux-arm64/bin/ /bin/
COPY --from=runners-arm64 /go/src/github.com/ollama/ollama/dist/linux-arm64/lib/ /lib/

# ROCm libraries larger so we keep it distinct from the CPU/CUDA image
FROM --platform=linux/amd64 ubuntu:22.04 AS runtime-rocm
# Frontload the rocm libraries which are large, and rarely change to increase chance of a common layer
# across releases
COPY --from=build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64-rocm/lib/ /lib/
RUN apt-get update && \
    apt-get install -y ca-certificates && \
    rm -rf /var/lib/apt/lists/*
COPY --from=container-build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/bin/ /bin/
COPY --from=runners-rocm-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ /lib/

EXPOSE 11434
ENV OLLAMA_HOST 0.0.0.0

ENTRYPOINT ["/bin/ollama"]
CMD ["serve"]

FROM runtime-$TARGETARCH
EXPOSE 11434
ENV OLLAMA_HOST 0.0.0.0
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
ENV NVIDIA_VISIBLE_DEVICES=all

ENTRYPOINT ["/bin/ollama"]
CMD ["serve"]
llama/Makefile (new file, 54 lines)

# top level makefile for Go server
include make/common-defs.make

RUNNER_TARGETS := default

# Determine which if any GPU runners we should build
ifeq ($(OS),windows)
CUDA_PATH?=$(shell cygpath -m -s "C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\" 2>/dev/null)unknown
CUDA_BASE_DIR := $(dir $(shell cygpath -m -s "$(CUDA_PATH)\\.." 2>/dev/null))
CUDA_11:=$(shell ls -d $(CUDA_BASE_DIR)/v11.? 2>/dev/null)
CUDA_12:=$(shell ls -d $(CUDA_BASE_DIR)/v12.? 2>/dev/null)
HIP_PATH_83 := $(shell cygpath -m -s "$(subst \,/,$(HIP_PATH))" 2>/dev/null)
HIP_LIB_DIR := $(shell ls -d $(HIP_PATH_83)/lib 2>/dev/null)
else ifeq ($(OS),linux)
HIP_PATH?=/opt/rocm
HIP_LIB_DIR := $(shell ls -d $(HIP_PATH)/lib 2>/dev/null)
CUDA_PATH?=/usr/local/cuda
CUDA_11:=$(shell ls -d $(CUDA_PATH)-11 2>/dev/null)
CUDA_12:=$(shell ls -d $(CUDA_PATH)-12 2>/dev/null)
endif

ifeq ($(OLLAMA_SKIP_CUDA_GENERATE),)
ifneq ($(CUDA_11),)
RUNNER_TARGETS += cuda_v11
endif
ifneq ($(CUDA_12),)
RUNNER_TARGETS += cuda_v12
endif
endif
ifeq ($(OLLAMA_SKIP_ROCM_GENERATE),)
ifneq ($(HIP_LIB_DIR),)
RUNNER_TARGETS += rocm
endif
endif


all: clean-payload .WAIT runners

runners: $(RUNNER_TARGETS)

$(RUNNER_TARGETS):
	$(MAKE) -f make/Makefile.$@

clean:
	rm -rf $(BUILD_DIR) $(DIST_RUNNERS) $(PAYLOAD_RUNNERS) $(RUNNERS_PAYLOAD_DIR)

clean-payload:
	rm -rf $(addprefix $(RUNNERS_PAYLOAD_DIR)/, $(RUNNER_TARGETS) metal cpu cpu_avx cpu_avx2)

.PHONY: all runners clean clean-payload $(RUNNER_TARGETS) .WAIT

# Handy debugging for make variables
print-%:
	@echo '$*=$($*)'
llama/README.md (new file, 100 lines)

# `llama`

This package integrates the [llama.cpp](https://github.com/ggerganov/llama.cpp) library as a Go package and makes it easy to build it with tags for different CPU and GPU processors.

Supported:

- [x] CPU
- [x] avx, avx2
- [x] macOS Metal
- [x] Windows CUDA
- [x] Windows ROCm
- [x] Linux CUDA
- [x] Linux ROCm
- [x] Llava

Extra build steps are required for CUDA and ROCm on Windows since `nvcc` and `hipcc` both require using msvc as the host compiler. For these shared libraries are created:

- `ggml_cuda.dll` on Windows or `ggml_cuda.so` on Linux
- `ggml_hipblas.dll` on Windows or `ggml_hipblas.so` on Linux

> Note: it's important that memory is allocated and freed by the same compiler (e.g. entirely by code compiled with msvc or mingw). Issues from this should be rare, but there are some places where pointers are returned by the CUDA or HIP runtimes and freed elsewhere, causing a a crash. In a future change the same runtime should be used in both cases to avoid crashes.

## Building

```
go build .
```

### AVX

```shell
go build -tags avx .
```

### AVX2

```shell
# go doesn't recognize `-mfma` as a valid compiler flag
# see https://github.com/golang/go/issues/17895
go env -w "CGO_CFLAGS_ALLOW=-mfma|-mf16c"
go env -w "CGO_CXXFLAGS_ALLOW=-mfma|-mf16c"
go build -tags=avx,avx2 .
```

## Linux

### CUDA

Install the [CUDA toolkit v11.3.1](https://developer.nvidia.com/cuda-11-3-1-download-archive):

```shell
make ggml_cuda.so
go build -tags avx,cuda .
```

### ROCm

Install the [CUDA toolkit v11.3.1](https://developer.nvidia.com/cuda-11-3-1-download-archive):

```shell
make ggml_hipblas.so
go build -tags avx,rocm .
```

## Windows

Download [w64devkit](https://github.com/skeeto/w64devkit/releases/latest) for a simple MinGW development environment.

### CUDA

Install the [CUDA toolkit v11.3.1](https://developer.nvidia.com/cuda-11-3-1-download-archive) then build the cuda code:

```shell
make ggml_cuda.dll
go build -tags avx,cuda .
```

### ROCm

Install [ROCm 5.7.1](https://rocm.docs.amd.com/en/docs-5.7.1/).

```shell
make ggml_hipblas.dll
go build -tags avx,rocm .
```

## Building runners

```shell
# build all runners for this platform
make -j
```

## Syncing with llama.cpp

To update this package to the latest llama.cpp code, use the `sync.sh` script:

```
./sync.sh ../../llama.cpp
```
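To make the `-tags` builds described in the README above concrete, here is a rough Go sketch of how a build tag can gate CGo compiler flags. The file name and exact flags are illustrative assumptions, not necessarily what the llama package actually uses:

```go
//go:build avx

package llama

// This file is only compiled when `go build -tags avx .` is used, so the
// AVX-specific CGo flags below are included only for that variant.
// Flags shown are assumptions for illustration.

/*
#cgo CFLAGS: -mavx
#cgo CXXFLAGS: -mavx
*/
import "C"
```

Omitting the tag leaves the file (and its flags) out of the build, which is the general mechanism behind selecting CPU variants with build tags.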
llama/base64.hpp (new file, 392 lines; listing truncated below)

/*
This is free and unencumbered software released into the public domain.

Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.

In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.

For more information, please refer to <http://unlicense.org>
*/

#ifndef PUBLIC_DOMAIN_BASE64_HPP_
#define PUBLIC_DOMAIN_BASE64_HPP_

#include <cstdint>
#include <iterator>
#include <stdexcept>
#include <string>

class base64_error : public std::runtime_error
{
public:
    using std::runtime_error::runtime_error;
};

class base64
{
public:
    enum class alphabet
    {
        /** the alphabet is detected automatically */
        auto_,
        /** the standard base64 alphabet is used */
        standard,
        /** like `standard` except that the characters `+` and `/` are replaced by `-` and `_` respectively*/
        url_filename_safe
    };

    enum class decoding_behavior
    {
        /** if the input is not padded, the remaining bits are ignored */
        moderate,
        /** if a padding character is encounter decoding is finished */
        loose
    };

    /**
     Encodes all the elements from `in_begin` to `in_end` to `out`.

     @warning The source and destination cannot overlap. The destination must be able to hold at least
     `required_encode_size(std::distance(in_begin, in_end))`, otherwise the behavior depends on the output iterator.

     @tparam Input_iterator the source; the returned elements are cast to `std::uint8_t` and should not be greater than
     8 bits
     @tparam Output_iterator the destination; the elements written to it are from the type `char`
     @param in_begin the beginning of the source
     @param in_end the ending of the source
     @param out the destination iterator
     @param alphabet which alphabet should be used
     @returns the iterator to the next element past the last element copied
     @throws see `Input_iterator` and `Output_iterator`
    */
    template<typename Input_iterator, typename Output_iterator>
    static Output_iterator encode(Input_iterator in_begin, Input_iterator in_end, Output_iterator out,
                                  alphabet alphabet = alphabet::standard)
    {
        constexpr auto pad = '=';
        const char* alpha = alphabet == alphabet::url_filename_safe
                                ? "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"
                                : "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

        while (in_begin != in_end) {
            std::uint8_t i0 = 0, i1 = 0, i2 = 0;

            // first character
            i0 = static_cast<std::uint8_t>(*in_begin);
            ++in_begin;

            *out = alpha[i0 >> 2 & 0x3f];
            ++out;

            // part of first character and second
            if (in_begin != in_end) {
                i1 = static_cast<std::uint8_t>(*in_begin);
                ++in_begin;

                *out = alpha[((i0 & 0x3) << 4) | (i1 >> 4 & 0x0f)];
                ++out;
            } else {
                *out = alpha[(i0 & 0x3) << 4];
                ++out;

                // last padding
                *out = pad;
                ++out;

                // last padding
                *out = pad;
                ++out;

                break;
            }

            // part of second character and third
            if (in_begin != in_end) {
                i2 = static_cast<std::uint8_t>(*in_begin);
                ++in_begin;

                *out = alpha[((i1 & 0xf) << 2) | (i2 >> 6 & 0x03)];
                ++out;
            } else {
                *out = alpha[(i1 & 0xf) << 2];
                ++out;

                // last padding
                *out = pad;
                ++out;

                break;
            }

            // rest of third
            *out = alpha[i2 & 0x3f];
            ++out;
        }

        return out;
    }
    /**
     Encodes a string.

     @param str the string that should be encoded
     @param alphabet which alphabet should be used
     @returns the encoded base64 string
     @throws see base64::encode()
    */
    static std::string encode(const std::string& str, alphabet alphabet = alphabet::standard)
    {
        std::string result;

        result.reserve(required_encode_size(str.length()) + 1);

        encode(str.begin(), str.end(), std::back_inserter(result), alphabet);

        return result;
    }
    /**
     Encodes a char array.

     @param buffer the char array
     @param size the size of the array
     @param alphabet which alphabet should be used
     @returns the encoded string
    */
    static std::string encode(const char* buffer, std::size_t size, alphabet alphabet = alphabet::standard)
    {
        std::string result;

        result.reserve(required_encode_size(size) + 1);

        encode(buffer, buffer + size, std::back_inserter(result), alphabet);

        return result;
    }
    /**
     Decodes all the elements from `in_begin` to `in_end` to `out`. `in_begin` may point to the same location as `out`,
     in other words: inplace decoding is possible.

     @warning The destination must be able to hold at least `required_decode_size(std::distance(in_begin, in_end))`,
     otherwise the behavior depends on the output iterator.

     @tparam Input_iterator the source; the returned elements are cast to `char`
     @tparam Output_iterator the destination; the elements written to it are from the type `std::uint8_t`
     @param in_begin the beginning of the source
     @param in_end the ending of the source
     @param out the destination iterator
     @param alphabet which alphabet should be used
     @param behavior the behavior when an error was detected
     @returns the iterator to the next element past the last element copied
     @throws base64_error depending on the set behavior
     @throws see `Input_iterator` and `Output_iterator`
    */
    template<typename Input_iterator, typename Output_iterator>
    static Output_iterator decode(Input_iterator in_begin, Input_iterator in_end, Output_iterator out,
                                  alphabet alphabet = alphabet::auto_,
                                  decoding_behavior behavior = decoding_behavior::moderate)
    {
        //constexpr auto pad = '=';
        std::uint8_t last = 0;
        auto bits = 0;

        while (in_begin != in_end) {
            auto c = *in_begin;
            ++in_begin;

            if (c == '=') {
                break;
            }

            auto part = _base64_value(alphabet, c);

            // enough bits for one byte
            if (bits + 6 >= 8) {
                *out = (last << (8 - bits)) | (part >> (bits - 2));
                ++out;

                bits -= 2;
            } else {
                bits += 6;
            }

            last = part;
        }

        // check padding
        if (behavior != decoding_behavior::loose) {
            while (in_begin != in_end) {
                auto c = *in_begin;
                ++in_begin;

                if (c != '=') {
                    throw base64_error("invalid base64 character.");
                }
            }
        }

        return out;
    }
    /**
     Decodes a string.

     @param str the base64 encoded string
     @param alphabet which alphabet should be used
     @param behavior the behavior when an error was detected
     @returns the decoded string
     @throws see base64::decode()
    */
    static std::string decode(const std::string& str, alphabet alphabet = alphabet::auto_,
                              decoding_behavior behavior = decoding_behavior::moderate)
    {
        std::string result;

        result.reserve(max_decode_size(str.length()));

        decode(str.begin(), str.end(), std::back_inserter(result), alphabet, behavior);

        return result;
    }
    /**
     Decodes a string.

     @param buffer the base64 encoded buffer
     @param size the size of the buffer
     @param alphabet which alphabet should be used
     @param behavior the behavior when an error was detected
     @returns the decoded string
     @throws see base64::decode()
    */
    static std::string decode(const char* buffer, std::size_t size, alphabet alphabet = alphabet::auto_,
                              decoding_behavior behavior = decoding_behavior::moderate)
    {
        std::string result;

        result.reserve(max_decode_size(size));

        decode(buffer, buffer + size, std::back_inserter(result), alphabet, behavior);

        return result;
    }
    /**
     Decodes a string inplace.

     @param[in,out] str the base64 encoded string
     @param alphabet which alphabet should be used
     @param behavior the behavior when an error was detected
     @throws base64::decode_inplace()
    */
    static void decode_inplace(std::string& str, alphabet alphabet = alphabet::auto_,
                               decoding_behavior behavior = decoding_behavior::moderate)
    {
        str.resize(decode(str.begin(), str.end(), str.begin(), alphabet, behavior) - str.begin());
    }
    /**
     Decodes a char array inplace.

     @param[in,out] str the string array
     @param size the length of the array
|
||||||
|
@param alphabet which alphabet should be used
|
||||||
|
@param behavior the behavior when an error was detected
|
||||||
|
@returns the pointer to the next element past the last element decoded
|
||||||
|
@throws base64::decode_inplace()
|
||||||
|
*/
|
||||||
|
static char* decode_inplace(char* str, std::size_t size, alphabet alphabet = alphabet::auto_,
|
||||||
|
decoding_behavior behavior = decoding_behavior::moderate)
|
||||||
|
{
|
||||||
|
return decode(str, str + size, str, alphabet, behavior);
|
||||||
|
}
|
||||||
|
	/**
	 Returns the required decoding size for a given size. The value is calculated with the following formula:

	 $$
	 \lceil \frac{size}{4} \rceil \cdot 3
	 $$

	 @param size the size of the encoded input
	 @returns the size of the resulting decoded buffer; this is the absolute maximum
	*/
	static std::size_t max_decode_size(std::size_t size) noexcept
	{
		return (size / 4 + (size % 4 ? 1 : 0)) * 3;
	}
	/**
	 Returns the required encoding size for a given size. The value is calculated with the following formula:

	 $$
	 \lceil \frac{size}{3} \rceil \cdot 4
	 $$

	 @param size the size of the decoded input
	 @returns the size of the resulting encoded buffer
	*/
	static std::size_t required_encode_size(std::size_t size) noexcept
	{
		return (size / 3 + (size % 3 ? 1 : 0)) * 4;
	}

private:
	static std::uint8_t _base64_value(alphabet& alphabet, char c)
	{
		if (c >= 'A' && c <= 'Z') {
			return c - 'A';
		} else if (c >= 'a' && c <= 'z') {
			return c - 'a' + 26;
		} else if (c >= '0' && c <= '9') {
			return c - '0' + 52;
		}

		// comes down to alphabet
		if (alphabet == alphabet::standard) {
			if (c == '+') {
				return 62;
			} else if (c == '/') {
				return 63;
			}
		} else if (alphabet == alphabet::url_filename_safe) {
			if (c == '-') {
				return 62;
			} else if (c == '_') {
				return 63;
			}
		} // auto detect
		else {
			if (c == '+') {
				alphabet = alphabet::standard;

				return 62;
			} else if (c == '/') {
				alphabet = alphabet::standard;

				return 63;
			} else if (c == '-') {
				alphabet = alphabet::url_filename_safe;

				return 62;
			} else if (c == '_') {
				alphabet = alphabet::url_filename_safe;

				return 63;
			}
		}

		throw base64_error("invalid base64 character.");
	}
};

#endif // !PUBLIC_DOMAIN_BASE64_HPP_
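The class above exposes everything through static members, so callers only need the header. Below is a minimal sketch (not part of the PR) showing how the encoder and decoder round-trip a payload; the include name is an assumption that should match however the vendored header is named in the llama/ directory.

// Hedged sketch: round-trip a payload through the vendored base64 class.
#include "base64.hpp" // assumed include name for the header above

#include <iostream>
#include <string>

int main() {
    const std::string payload = "hello llama";

    // encode() reserves required_encode_size() internally, so no manual sizing is needed
    std::string encoded = base64::encode(payload);

    // decode() auto-detects the standard vs. URL-safe alphabet by default (alphabet::auto_)
    std::string decoded = base64::decode(encoded);

    std::cout << encoded << " -> " << decoded << std::endl;
    return decoded == payload ? 0 : 1;
}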
llama/build-info.cpp (new file, 30 lines)
@@ -0,0 +1,30 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License
 *
 * Copyright (c) 2023-2024 The ggml authors
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

int LLAMA_BUILD_NUMBER = 0;
char const *LLAMA_COMMIT = "";
char const *LLAMA_COMPILER = "";
char const *LLAMA_BUILD_TARGET = "";
llama/clip.cpp (new file, 2689 lines; diff suppressed because it is too large)
llama/clip.h (new file, 120 lines)
@@ -0,0 +1,120 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 * (MIT License header, identical to the one in llama/build-info.cpp above)
 */

#ifndef CLIP_H
#define CLIP_H

#include <stddef.h>
#include <stdint.h>

#ifdef LLAMA_SHARED
#    if defined(_WIN32) && !defined(__MINGW32__)
#        ifdef LLAMA_BUILD
#            define CLIP_API __declspec(dllexport)
#        else
#            define CLIP_API __declspec(dllimport)
#        endif
#    else
#        define CLIP_API __attribute__ ((visibility ("default")))
#    endif
#else
#    define CLIP_API
#endif

#ifdef __cplusplus
extern "C" {
#endif

struct clip_ctx;

struct clip_image_size {
    int width;
    int height;
};

struct clip_image_u8_batch {
    struct clip_image_u8 * data;
    size_t size;
};

struct clip_image_f32_batch {
    struct clip_image_f32 * data;
    size_t size;
};

CLIP_API struct clip_ctx * clip_model_load    (const char * fname, int verbosity);
CLIP_API struct clip_ctx * clip_model_load_cpu(const char * fname, int verbosity);

CLIP_API void clip_free(struct clip_ctx * ctx);

CLIP_API size_t clip_embd_nbytes(const struct clip_ctx * ctx);

CLIP_API int32_t clip_image_size (const struct clip_ctx * ctx);
CLIP_API int32_t clip_patch_size (const struct clip_ctx * ctx);
CLIP_API int32_t clip_hidden_size(const struct clip_ctx * ctx);

// TODO: should be enum, not string
CLIP_API const char * clip_patch_merge_type(const struct clip_ctx * ctx);

CLIP_API const int32_t * clip_image_grid(const struct clip_ctx * ctx);

CLIP_API int clip_n_patches    (const struct clip_ctx * ctx);
CLIP_API int clip_n_mmproj_embd(const struct clip_ctx * ctx);

CLIP_API int clip_uhd_num_image_embeds_col(struct clip_ctx * ctx_clip);
CLIP_API void clip_add_load_image_size(struct clip_ctx * ctx_clip, struct clip_image_size * load_image_size);

CLIP_API struct clip_image_size * clip_image_size_init();
CLIP_API struct clip_image_u8  * clip_image_u8_init ();
CLIP_API struct clip_image_f32 * clip_image_f32_init();

CLIP_API void clip_image_u8_free (struct clip_image_u8  * img);
CLIP_API void clip_image_f32_free(struct clip_image_f32 * img);
CLIP_API void clip_image_u8_batch_free (struct clip_image_u8_batch  * batch);
CLIP_API void clip_image_f32_batch_free(struct clip_image_f32_batch * batch);

CLIP_API bool clip_image_load_from_file(const char * fname, struct clip_image_u8 * img);

/** interpret bytes as an image file with length bytes_length, and use the result to populate img */
CLIP_API bool clip_image_load_from_bytes(const unsigned char * bytes, size_t bytes_length, struct clip_image_u8 * img);

/** preprocess img and store the result in res_imgs, pad_to_square may be overridden to false depending on model configuration */
CLIP_API bool clip_image_preprocess(struct clip_ctx * ctx, const struct clip_image_u8 * img, struct clip_image_f32_batch * res_imgs);

CLIP_API struct ggml_tensor * clip_get_newline_tensor(const struct clip_ctx * ctx);

CLIP_API bool clip_image_encode      (struct clip_ctx * ctx, int n_threads, struct clip_image_f32 * img, float * vec);
CLIP_API bool clip_image_batch_encode(struct clip_ctx * ctx, int n_threads, const struct clip_image_f32_batch * imgs, float * vec);

CLIP_API bool clip_model_quantize(const char * fname_inp, const char * fname_out, int itype);

CLIP_API int clip_is_minicpmv(const struct clip_ctx * ctx);

#ifdef __cplusplus
}
#endif

#endif // CLIP_H
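For context, here is a hedged sketch of how the declarations above fit together when embedding a single image for a vision model: load a projector, preprocess, then batch-encode into a float buffer sized from clip_embd_nbytes(). The paths, thread count, and error handling are placeholders, and the exact ownership rules for the preprocessed batch follow llama/clip.cpp, so treat this as illustrative rather than code from this PR.

// Hedged sketch of the clip.h API above (compiled as C++).
#include "clip.h"

#include <cstdio>
#include <vector>

int embed_image(const char * mmproj_path, const char * image_path, int n_threads) {
    struct clip_ctx * ctx = clip_model_load(mmproj_path, /*verbosity=*/1);
    if (ctx == nullptr) return 1;

    struct clip_image_u8 * img = clip_image_u8_init();
    if (!clip_image_load_from_file(image_path, img)) { clip_free(ctx); return 1; }

    // preprocessing may produce several tiles for high-resolution inputs
    struct clip_image_f32_batch batch = {};
    if (!clip_image_preprocess(ctx, img, &batch)) { clip_image_u8_free(img); clip_free(ctx); return 1; }

    // one embedding buffer per preprocessed tile
    std::vector<float> embd(clip_embd_nbytes(ctx) / sizeof(float) * batch.size);
    clip_image_batch_encode(ctx, n_threads, &batch, embd.data());

    printf("encoded %zu tile(s), %d patch embeddings each\n", batch.size, clip_n_patches(ctx));

    clip_image_f32_batch_free(&batch);
    clip_image_u8_free(img);
    clip_free(ctx);
    return 0;
}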
llama/common.cpp (new file, 3688 lines; diff suppressed because it is too large)
llama/common.h (new file, 514 lines)
@@ -0,0 +1,514 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 * (MIT License header, identical to the one in llama/build-info.cpp above)
 */

// Various helper functions and utilities

#pragma once

#include "llama.h"

#include "sampling.h"

#define LOG_NO_FILE_LINE_FUNCTION
#include "log.h"

#include <cmath>
#include <string>
#include <vector>
#include <random>
#include <thread>
#include <unordered_map>
#include <tuple>

#ifdef _WIN32
#define DIRECTORY_SEPARATOR '\\'
#else
#define DIRECTORY_SEPARATOR '/'
#endif // _WIN32

#define die(msg)          do { fputs("error: " msg "\n", stderr);                exit(1); } while (0)
#define die_fmt(fmt, ...) do { fprintf(stderr, "error: " fmt "\n", __VA_ARGS__); exit(1); } while (0)

#define print_build_info() do {                                                                   \
    fprintf(stderr, "%s: build = %d (%s)\n",      __func__, LLAMA_BUILD_NUMBER, LLAMA_COMMIT);    \
    fprintf(stderr, "%s: built with %s for %s\n", __func__, LLAMA_COMPILER, LLAMA_BUILD_TARGET);  \
} while(0)

#define DEFAULT_MODEL_PATH "models/7B/ggml-model-f16.gguf"

struct llama_lora_adapter_info {
    std::string path;
    float scale;
};

struct llama_lora_adapter_container : llama_lora_adapter_info {
    struct llama_lora_adapter * adapter;
};

// build info
extern int LLAMA_BUILD_NUMBER;
extern char const * LLAMA_COMMIT;
extern char const * LLAMA_COMPILER;
extern char const * LLAMA_BUILD_TARGET;

struct llama_control_vector_load_info;

//
// CPU utils
//

int32_t cpu_get_num_physical_cores();
int32_t cpu_get_num_math();

//
// CLI argument parsing
//

// dimensionality reduction methods, used by cvector-generator
enum dimre_method {
    DIMRE_METHOD_PCA,
    DIMRE_METHOD_MEAN,
};

struct cpu_params {
    int      n_threads = -1;
    bool     cpumask[GGML_MAX_N_THREADS] = {false}; // CPU affinity mask.
    bool     mask_valid = false;   // Default: any CPU
    enum ggml_sched_priority priority = GGML_SCHED_PRIO_NORMAL; // Scheduling prio : (0 - normal, 1 - medium, 2 - high, 3 - realtime)
    bool     strict_cpu = false;   // Use strict CPU placement
    uint32_t poll       = 50;      // Polling (busywait) level (0 - no polling, 100 - mostly polling)
};

struct gpt_params {
    uint32_t seed = LLAMA_DEFAULT_SEED; // RNG seed

    int32_t n_predict          =    -1; // new tokens to predict
    int32_t n_ctx              =     0; // context size
    int32_t n_batch            =  2048; // logical batch size for prompt processing (must be >=32 to use BLAS)
    int32_t n_ubatch           =   512; // physical batch size for prompt processing (must be >=32 to use BLAS)
    int32_t n_keep             =     0; // number of tokens to keep from initial prompt
    int32_t n_draft            =     5; // number of tokens to draft during speculative decoding
    int32_t n_chunks           =    -1; // max number of chunks to process (-1 = unlimited)
    int32_t n_parallel         =     1; // number of parallel sequences to decode
    int32_t n_sequences        =     1; // number of sequences to decode
    float   p_split            =  0.1f; // speculative decoding split probability
    int32_t n_gpu_layers       =    -1; // number of layers to store in VRAM (-1 - use default)
    int32_t n_gpu_layers_draft =    -1; // number of layers to store in VRAM for the draft model (-1 - use default)
    int32_t main_gpu           =     0; // the GPU that is used for scratch and small tensors
    float   tensor_split[128]  =   {0}; // how split tensors should be distributed across GPUs
    int32_t grp_attn_n         =     1; // group-attention factor
    int32_t grp_attn_w         =   512; // group-attention width
    int32_t n_print            =    -1; // print token count every n tokens (-1 = disabled)
    float   rope_freq_base     =  0.0f; // RoPE base frequency
    float   rope_freq_scale    =  0.0f; // RoPE frequency scaling factor
    float   yarn_ext_factor    = -1.0f; // YaRN extrapolation mix factor
    float   yarn_attn_factor   =  1.0f; // YaRN magnitude scaling factor
    float   yarn_beta_fast     = 32.0f; // YaRN low correction dim
    float   yarn_beta_slow     =  1.0f; // YaRN high correction dim
    int32_t yarn_orig_ctx      =     0; // YaRN original context length
    float   defrag_thold       = -1.0f; // KV cache defragmentation threshold

    struct cpu_params cpuparams;
    struct cpu_params cpuparams_batch;
    struct cpu_params draft_cpuparams;
    struct cpu_params draft_cpuparams_batch;

    ggml_backend_sched_eval_callback cb_eval = nullptr;
    void * cb_eval_user_data                 = nullptr;

    ggml_numa_strategy numa = GGML_NUMA_STRATEGY_DISABLED;

    enum llama_split_mode        split_mode        = LLAMA_SPLIT_MODE_LAYER; // how to split the model across GPUs
    enum llama_rope_scaling_type rope_scaling_type = LLAMA_ROPE_SCALING_TYPE_UNSPECIFIED;
    enum llama_pooling_type      pooling_type      = LLAMA_POOLING_TYPE_UNSPECIFIED; // pooling type for embeddings
    enum llama_attention_type    attention_type    = LLAMA_ATTENTION_TYPE_UNSPECIFIED; // attention type for embeddings

    // // sampling parameters
    struct llama_sampling_params sparams;

    std::string model                = ""; // model path
    std::string model_draft          = ""; // draft model for speculative decoding
    std::string model_alias          = "unknown"; // model alias
    std::string model_url            = ""; // model url to download
    std::string hf_token             = ""; // HF token
    std::string hf_repo              = ""; // HF repo
    std::string hf_file              = ""; // HF file
    std::string prompt               = "";
    std::string prompt_file          = ""; // store the external prompt file name
    std::string path_prompt_cache    = ""; // path to file for saving/loading prompt eval state
    std::string input_prefix         = ""; // string to prefix user inputs with
    std::string input_suffix         = ""; // string to suffix user inputs with
    std::string logdir               = ""; // directory in which to save YAML log files
    std::string lookup_cache_static  = ""; // path of static ngram cache file for lookup decoding
    std::string lookup_cache_dynamic = ""; // path of dynamic ngram cache file for lookup decoding
    std::string logits_file          = ""; // file for saving *all* logits
    std::string rpc_servers          = ""; // comma separated list of RPC servers

    std::vector<std::string> in_files;   // all input files
    std::vector<std::string> antiprompt; // strings upon which more user input is prompted (a.k.a. reverse prompts)
    std::vector<llama_model_kv_override> kv_overrides;

    bool lora_init_without_apply = false; // only load lora to memory, but do not apply it to ctx (user can manually apply lora later using llama_lora_adapter_apply)
    std::vector<llama_lora_adapter_info> lora_adapters; // lora adapter path with user defined scale

    std::vector<llama_control_vector_load_info> control_vectors; // control vector with user defined scale

    int32_t verbosity                  = 0;
    int32_t control_vector_layer_start = -1; // layer range for control vector
    int32_t control_vector_layer_end   = -1; // layer range for control vector

    int32_t ppl_stride      = 0; // stride for perplexity calculations. If left at 0, the pre-existing approach will be used.
    int32_t ppl_output_type = 0; // = 0 -> ppl output is as usual, = 1 -> ppl output is num_tokens, ppl, one per line
                                 // (which is more convenient to use for plotting)
    //
    bool   hellaswag       = false; // compute HellaSwag score over random tasks from datafile supplied in prompt
    size_t hellaswag_tasks = 400;   // number of tasks to use when computing the HellaSwag score

    bool   winogrande       = false; // compute Winogrande score over random tasks from datafile supplied in prompt
    size_t winogrande_tasks = 0;     // number of tasks to use when computing the Winogrande score. If 0, all tasks will be computed

    bool   multiple_choice       = false; // compute TruthfulQA score over random tasks from datafile supplied in prompt
    size_t multiple_choice_tasks = 0;     // number of tasks to use when computing the TruthfulQA score. If 0, all tasks will be computed

    bool   kl_divergence = false; // compute KL divergence

    bool usage             = false; // print usage
    bool use_color         = false; // use color to distinguish generations and inputs
    bool special           = false; // enable special token output
    bool interactive       = false; // interactive mode
    bool interactive_first = false; // wait for user input immediately
    bool conversation      = false; // conversation mode (does not print special tokens and suffix/prefix)
    bool prompt_cache_all  = false; // save user input and generations to prompt cache
    bool prompt_cache_ro   = false; // open the prompt cache read-only and do not update it

    bool escape            = true;  // escape "\n", "\r", "\t", "\'", "\"", and "\\"
    bool multiline_input   = false; // reverse the usage of `\`
    bool simple_io         = false; // improves compatibility with subprocesses and limited consoles
    bool cont_batching     = true;  // insert new sequences for decoding on-the-fly
    bool flash_attn        = false; // flash attention

    bool input_prefix_bos  = false; // prefix BOS to user inputs, preceding input_prefix
    bool ignore_eos        = false; // ignore generated EOS tokens
    bool logits_all        = false; // return logits for all tokens in the batch
    bool use_mmap          = true;  // use mmap for faster loads
    bool use_mlock         = false; // use mlock to keep model in memory
    bool verbose_prompt    = false; // print prompt tokens before generation
    bool display_prompt    = true;  // print prompt before generation
    bool infill            = false; // use infill mode
    bool dump_kv_cache     = false; // dump the KV cache contents for debugging purposes
    bool no_kv_offload     = false; // disable KV offloading
    bool warmup            = true;  // warmup run
    bool check_tensors     = false; // validate tensor data

    std::string cache_type_k = "f16"; // KV cache data type for the K
    std::string cache_type_v = "f16"; // KV cache data type for the V

    // multimodal models (see examples/llava)
    std::string mmproj = "";        // path to multimodal projector
    std::vector<std::string> image; // path to image file(s)

    // embedding
    bool embedding         = false; // get only sentence embedding
    int32_t embd_normalize = 2;     // normalisation for embendings (-1=none, 0=max absolute int16, 1=taxicab, 2=euclidean, >2=p-norm)
    std::string embd_out   = "";    // empty = default, "array" = [[],[]...], "json" = openai style, "json+" = same "json" + cosine similarity matrix
    std::string embd_sep   = "\n";  // separator of embendings

    // server params
    int32_t port           = 8080;         // server listens on this network port
    int32_t timeout_read   = 600;          // http read timeout in seconds
    int32_t timeout_write  = timeout_read; // http write timeout in seconds
    int     n_threads_http = -1;           // number of threads to process HTTP requests (TODO: support threadpool)

    std::string hostname      = "127.0.0.1";
    std::string public_path   = "";
    std::string chat_template = "";
    std::string system_prompt = "";
    bool enable_chat_template = true;

    std::vector<std::string> api_keys;

    std::string ssl_file_key  = "";
    std::string ssl_file_cert = "";

    bool endpoint_slots   = true;
    bool endpoint_metrics = false;

    bool log_json = false;

    std::string slot_save_path;

    float slot_prompt_similarity = 0.5f;

    // batched-bench params
    bool is_pp_shared = false;

    std::vector<int32_t> n_pp;
    std::vector<int32_t> n_tg;
    std::vector<int32_t> n_pl;

    // retrieval params
    std::vector<std::string> context_files; // context files to embed

    int32_t chunk_size = 64; // chunk size for context embedding

    std::string chunk_separator = "\n"; // chunk separator for context embedding

    // passkey params
    int32_t n_junk = 250; // number of times to repeat the junk text
    int32_t i_pos  = -1;  // position of the passkey in the junk text

    // imatrix params
    std::string out_file = "imatrix.dat"; // save the resulting imatrix to this file

    int32_t n_out_freq  = 10; // output the imatrix every n_out_freq iterations
    int32_t n_save_freq =  0; // save the imatrix every n_save_freq iterations
    int32_t i_chunk     =  0; // start processing from this chunk

    bool process_output = false; // collect data for the output tensor
    bool compute_ppl    = true;  // whether to compute perplexity

    // cvector-generator params
    int n_pca_batch = 100;
    int n_pca_iterations = 1000;
    dimre_method cvector_dimre_method = DIMRE_METHOD_PCA;
    std::string cvector_outfile       = "control_vector.gguf";
    std::string cvector_positive_file = "examples/cvector-generator/positive.txt";
    std::string cvector_negative_file = "examples/cvector-generator/negative.txt";

    bool spm_infill = false; // suffix/prefix/middle pattern for infill

    std::string lora_outfile = "ggml-lora-merged-f16.gguf";
};

void gpt_params_parse_from_env(gpt_params & params);
void gpt_params_handle_model_default(gpt_params & params);

bool gpt_params_parse_ex   (int argc, char ** argv, gpt_params & params);
bool gpt_params_parse      (int argc, char ** argv, gpt_params & params);
bool gpt_params_find_arg   (int argc, char ** argv, const std::string & arg, gpt_params & params, int & i, bool & invalid_param);
void gpt_params_print_usage(int argc, char ** argv, const gpt_params & params);

std::string gpt_params_get_system_info(const gpt_params & params);

bool parse_cpu_range(const std::string& range, bool(&boolmask)[GGML_MAX_N_THREADS]);
bool parse_cpu_mask(const std::string& mask, bool(&boolmask)[GGML_MAX_N_THREADS]);
void postprocess_cpu_params(cpu_params& cpuparams, const cpu_params* role_model = nullptr);
bool set_process_priority(enum ggml_sched_priority prio);

//
// String utils
//

std::vector<std::string> string_split(std::string input, char separator);

std::string string_strip(const std::string & str);
std::string string_get_sortable_timestamp();

void string_replace_all(std::string & s, const std::string & search, const std::string & replace);

template<class T>
static std::vector<T> string_split(const std::string & str, char delim) {
    std::vector<T> values;
    std::istringstream str_stream(str);
    std::string token;
    while (std::getline(str_stream, token, delim)) {
        T value;
        std::istringstream token_stream(token);
        token_stream >> value;
        values.push_back(value);
    }
    return values;
}

bool string_parse_kv_override(const char * data, std::vector<llama_model_kv_override> & overrides);
void string_process_escapes(std::string & input);

//
// Filesystem utils
//

bool fs_validate_filename(const std::string & filename);
bool fs_create_directory_with_parents(const std::string & path);

std::string fs_get_cache_directory();
std::string fs_get_cache_file(const std::string & filename);

//
// Model utils
//

struct llama_init_result {
    struct llama_model   * model   = nullptr;
    struct llama_context * context = nullptr;
    std::vector<llama_lora_adapter_container> lora_adapters;
};

struct llama_init_result llama_init_from_gpt_params(gpt_params & params);

struct llama_model_params     llama_model_params_from_gpt_params    (const gpt_params & params);
struct llama_context_params   llama_context_params_from_gpt_params  (const gpt_params & params);
struct ggml_threadpool_params ggml_threadpool_params_from_cpu_params(const cpu_params & params);

struct llama_model * llama_load_model_from_url(const char * model_url, const char * path_model, const char * hf_token, const struct llama_model_params & params);
struct llama_model * llama_load_model_from_hf(const char * repo, const char * file, const char * path_model, const char * hf_token, const struct llama_model_params & params);

// clear LoRA adapters from context, then apply new list of adapters
void llama_lora_adapters_apply(struct llama_context * ctx, std::vector<llama_lora_adapter_container> & lora_adapters);

// Batch utils

void llama_batch_clear(struct llama_batch & batch);

void llama_batch_add(
                 struct llama_batch & batch,
                        llama_token   id,
                          llama_pos   pos,
    const std::vector<llama_seq_id> & seq_ids,
                               bool   logits);

//
// Vocab utils
//

// tokenizes a string into a vector of tokens
// should work similar to Python's `tokenizer.encode`
std::vector<llama_token> llama_tokenize(
  const struct llama_context * ctx,
           const std::string & text,
                        bool   add_special,
                        bool   parse_special = false);

std::vector<llama_token> llama_tokenize(
    const struct llama_model * model,
           const std::string & text,
                        bool   add_special,
                        bool   parse_special = false);

// tokenizes a token into a piece, optionally renders special/control tokens
// should work similar to Python's `tokenizer.id_to_piece`
std::string llama_token_to_piece(
        const struct llama_context * ctx,
                       llama_token   token,
                              bool   special = true);

// detokenizes a vector of tokens into a string
// should work similar to Python's `tokenizer.decode`
// optionally renders special/control tokens
std::string llama_detokenize(
                         llama_context * ctx,
        const std::vector<llama_token> & tokens,
                                  bool   special = true);

//
// Chat template utils
//

// same with llama_chat_message, but uses std::string
struct llama_chat_msg {
    std::string role;
    std::string content;
};

// Check if the template supplied via "--chat-template" is supported or not. Returns true if it's valid
bool llama_chat_verify_template(const std::string & tmpl);

// CPP wrapper for llama_chat_apply_template
// If the built-in template is not supported, we default to chatml
// If the custom "tmpl" is not supported, we throw an error
std::string llama_chat_apply_template(const struct llama_model * model,
        const std::string & tmpl,
        const std::vector<llama_chat_msg> & chat,
        bool add_ass);

// Format single message, while taking into account the position of that message in chat history
std::string llama_chat_format_single(const struct llama_model * model,
        const std::string & tmpl,
        const std::vector<llama_chat_msg> & past_msg,
        const llama_chat_msg & new_msg,
        bool add_ass);

// Returns an example of formatted chat
std::string llama_chat_format_example(const struct llama_model * model,
        const std::string & tmpl);

//
// KV cache utils
//

// Dump the KV cache view with the number of sequences per cell.
void llama_kv_cache_dump_view(const llama_kv_cache_view & view, int row_size = 80);

// Dump the KV cache view showing individual sequences in each cell (long output).
void llama_kv_cache_dump_view_seqs(const llama_kv_cache_view & view, int row_size = 40);

//
// Embedding utils
//

void llama_embd_normalize(const float * inp, float * out, int n, int embd_norm = 2);

float llama_embd_similarity_cos(const float * embd1, const float * embd2, int n);

//
// Control vector utils
//

struct llama_control_vector_data {
    int n_embd;

    // stores data for layers [1, n_layer] where n_layer = data.size() / n_embd
    std::vector<float> data;
};

struct llama_control_vector_load_info {
    float strength;

    std::string fname;
};

// Load control vectors, scale each by strength, and add them together.
// On error, returns {-1, empty}
llama_control_vector_data llama_control_vector_load(const std::vector<llama_control_vector_load_info> & load_infos);

//
// Split utils
//

static const char * const LLM_KV_SPLIT_NO            = "split.no";
static const char * const LLM_KV_SPLIT_COUNT         = "split.count";
static const char * const LLM_KV_SPLIT_TENSORS_COUNT = "split.tensors.count";

//
// YAML utils
//

void yaml_dump_vector_float    (FILE * stream, const char * prop_name, const std::vector<float> & data);
void yaml_dump_vector_int      (FILE * stream, const char * prop_name, const std::vector<int> & data);
void yaml_dump_string_multiline(FILE * stream, const char * prop_name, const char * data);

void yaml_dump_non_result_info(
    FILE * stream, const gpt_params & params, const llama_context * lctx,
    const std::string & timestamp, const std::vector<int> & prompt_tokens, const char * model_desc);
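A hedged sketch of how the helpers above are typically combined: initialize a model and context from gpt_params, tokenize the prompt, and stage it in a llama_batch. llama_batch_init, llama_decode, and llama_batch_free come from the included llama.h rather than from common.h; the model path is a placeholder and cleanup is kept minimal.

// Hedged sketch, assuming the vendored llama.h at this commit.
#include "common.h"

int main() {
    gpt_params params;
    params.model  = "model.gguf"; // hypothetical path
    params.prompt = "Why is the sky blue?";

    llama_init_result init = llama_init_from_gpt_params(params);
    if (init.model == nullptr || init.context == nullptr) return 1;

    // add_special=true lets the tokenizer prepend BOS when the model expects it
    std::vector<llama_token> tokens = llama_tokenize(init.context, params.prompt, /*add_special=*/true);

    llama_batch batch = llama_batch_init(params.n_batch, 0, 1);
    llama_batch_clear(batch);
    for (size_t i = 0; i < tokens.size(); ++i) {
        // request logits only for the last prompt token
        llama_batch_add(batch, tokens[i], (llama_pos) i, {0}, i == tokens.size() - 1);
    }

    int rc = llama_decode(init.context, batch);

    llama_batch_free(batch);
    return rc == 0 ? 0 : 1;
}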
llama/ggml-aarch64.c (new file, 2206 lines; diff suppressed because it is too large)
llama/ggml-aarch64.h (new file, 65 lines)
@@ -0,0 +1,65 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 * (MIT License header, identical to the one in llama/build-info.cpp above)
 */

// SPDX-FileCopyrightText: Copyright 2024 Arm Ltd.
#pragma once

#define GGML_COMMON_DECL_C
#include "ggml-common.h"

#include "ggml.h"

// GGML internal header

#ifdef __cplusplus
extern "C" {
#endif

// Quantization
void quantize_q8_0_4x4(const float * GGML_RESTRICT x, void * GGML_RESTRICT y, int64_t k);
void quantize_q8_0_4x8(const float * GGML_RESTRICT x, void * GGML_RESTRICT y, int64_t k);

void quantize_mat_q8_0(const float * GGML_RESTRICT x, void * GGML_RESTRICT y, int64_t nrows, int64_t n_per_row, int64_t blck_size_interleave);

// Quantization utilizing an importance matrix (a.k.a. "Activation aWare Quantization")
size_t quantize_q4_0_4x4(const float * GGML_RESTRICT src, void * GGML_RESTRICT dst, int64_t nrows, int64_t n_per_row, const float * imatrix);
size_t quantize_q4_0_4x8(const float * GGML_RESTRICT src, void * GGML_RESTRICT dst, int64_t nrows, int64_t n_per_row, const float * imatrix);
size_t quantize_q4_0_8x8(const float * GGML_RESTRICT src, void * GGML_RESTRICT dst, int64_t nrows, int64_t n_per_row, const float * imatrix);

// GEMV
void ggml_gemv_q4_0_4x4_q8_0(int n, float * GGML_RESTRICT s, size_t bs, const void * GGML_RESTRICT vx, const void * GGML_RESTRICT vy, int nr, int nc);
void ggml_gemv_q4_0_4x8_q8_0(int n, float * GGML_RESTRICT s, size_t bs, const void * GGML_RESTRICT vx, const void * GGML_RESTRICT vy, int nr, int nc);
void ggml_gemv_q4_0_8x8_q8_0(int n, float * GGML_RESTRICT s, size_t bs, const void * GGML_RESTRICT vx, const void * GGML_RESTRICT vy, int nr, int nc);

// GEMM
void ggml_gemm_q4_0_4x4_q8_0(int n, float * GGML_RESTRICT s, size_t bs, const void * GGML_RESTRICT vx, const void * GGML_RESTRICT vy, int nr, int nc);
void ggml_gemm_q4_0_4x8_q8_0(int n, float * GGML_RESTRICT s, size_t bs, const void * GGML_RESTRICT vx, const void * GGML_RESTRICT vy, int nr, int nc);
void ggml_gemm_q4_0_8x8_q8_0(int n, float * GGML_RESTRICT s, size_t bs, const void * GGML_RESTRICT vx, const void * GGML_RESTRICT vy, int nr, int nc);

#ifdef __cplusplus
}
#endif

llama/ggml-alloc.c (new file, 1062 lines; diff suppressed because it is too large)
llama/ggml-alloc.h (new file, 102 lines)
@@ -0,0 +1,102 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 * (MIT License header, identical to the one in llama/build-info.cpp above)
 */

#pragma once

#include "ggml.h"

#ifdef __cplusplus
extern "C" {
#endif

typedef struct ggml_backend_buffer_type * ggml_backend_buffer_type_t;
typedef struct ggml_backend_buffer * ggml_backend_buffer_t;
typedef struct ggml_backend * ggml_backend_t;

// Tensor allocator
struct ggml_tallocr {
    ggml_backend_buffer_t buffer;
    void * base;
    size_t alignment;
    size_t offset;
};

GGML_API struct ggml_tallocr ggml_tallocr_new(ggml_backend_buffer_t buffer);
GGML_API void                ggml_tallocr_alloc(struct ggml_tallocr * talloc, struct ggml_tensor * tensor);

// Graph allocator
/*
  Example usage:
    ggml_gallocr_t galloc = ggml_gallocr_new(ggml_bacckend_cpu_buffer_type());

    // optional: create a worst-case graph and reserve the buffers to avoid reallocations
    ggml_gallocr_reserve(galloc, build_graph(max_batch));

    // allocate the graph
    struct ggml_cgraph * graph = build_graph(batch);
    ggml_gallocr_alloc_graph(galloc, graph);

    printf("compute buffer size: %zu bytes\n", ggml_gallocr_get_buffer_size(galloc, 0));

    // evaluate the graph
    ggml_backend_graph_compute(backend, graph);
*/

// special tensor flags for use with the graph allocator:
//   ggml_set_input(): all input tensors are allocated at the beginning of the graph in non-overlapping addresses
//   ggml_set_output(): output tensors are never freed and never overwritten

typedef struct ggml_gallocr * ggml_gallocr_t;

GGML_API ggml_gallocr_t ggml_gallocr_new(ggml_backend_buffer_type_t buft);
GGML_API ggml_gallocr_t ggml_gallocr_new_n(ggml_backend_buffer_type_t * bufts, int n_bufs);
GGML_API void           ggml_gallocr_free(ggml_gallocr_t galloc);

// pre-allocate buffers from a measure graph - does not allocate or modify the graph
// call with a worst-case graph to avoid buffer reallocations
// not strictly required for single buffer usage: ggml_gallocr_alloc_graph will reallocate the buffers automatically if needed
// returns false if the buffer allocation failed
GGML_API bool ggml_gallocr_reserve(ggml_gallocr_t galloc, struct ggml_cgraph * graph);
GGML_API bool ggml_gallocr_reserve_n(
    ggml_gallocr_t galloc,
    struct ggml_cgraph * graph,
    const int * node_buffer_ids,
    const int * leaf_buffer_ids);

// automatic reallocation if the topology changes when using a single buffer
// returns false if using multiple buffers and a re-allocation is needed (call ggml_gallocr_reserve_n first to set the node buffers)
GGML_API bool ggml_gallocr_alloc_graph(ggml_gallocr_t galloc, struct ggml_cgraph * graph);

GGML_API size_t ggml_gallocr_get_buffer_size(ggml_gallocr_t galloc, int buffer_id);

// Utils
// Create a buffer and allocate all the tensors in a ggml_context
GGML_API struct ggml_backend_buffer * ggml_backend_alloc_ctx_tensors_from_buft(struct ggml_context * ctx, ggml_backend_buffer_type_t buft);
GGML_API struct ggml_backend_buffer * ggml_backend_alloc_ctx_tensors(struct ggml_context * ctx, ggml_backend_t backend);

#ifdef __cplusplus
}
#endif
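The example-usage comment above can be expanded into a small self-contained graph. The following is a hedged sketch assuming the CPU backend and the ggml.h / ggml-backend.h APIs at this vendored commit; it is illustrative, not code from this PR.

// Hedged sketch: add two F32 tensors on the CPU backend, with the compute
// buffer managed by the graph allocator declared above.
#include "ggml.h"
#include "ggml-alloc.h"
#include "ggml-backend.h"

int main() {
    // context that only holds tensor/graph metadata; data lives in backend buffers
    struct ggml_init_params ip = { ggml_tensor_overhead() * 8 + ggml_graph_overhead(), nullptr, /*no_alloc=*/true };
    struct ggml_context * ctx = ggml_init(ip);

    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * c = ggml_add(ctx, a, b);

    // mark inputs/outputs so the allocator keeps them addressable (see the flags note above)
    ggml_set_input(a);
    ggml_set_input(b);
    ggml_set_output(c);

    struct ggml_cgraph * graph = ggml_new_graph(ctx);
    ggml_build_forward_expand(graph, c);

    ggml_backend_t backend = ggml_backend_cpu_init();
    ggml_gallocr_t galloc  = ggml_gallocr_new(ggml_backend_cpu_buffer_type());
    ggml_gallocr_alloc_graph(galloc, graph); // allocates a, b, c in one compute buffer

    float av[4] = {1, 2, 3, 4}, bv[4] = {10, 20, 30, 40}, cv[4];
    ggml_backend_tensor_set(a, av, 0, sizeof(av));
    ggml_backend_tensor_set(b, bv, 0, sizeof(bv));

    ggml_backend_graph_compute(backend, graph);
    ggml_backend_tensor_get(c, cv, 0, sizeof(cv)); // cv = {11, 22, 33, 44}

    ggml_gallocr_free(galloc);
    ggml_backend_free(backend);
    ggml_free(ctx);
    return 0;
}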
llama/ggml-backend-impl.h (new file, 179 lines)
@@ -0,0 +1,179 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 * (MIT License header, identical to the one in llama/build-info.cpp above)
 */

#pragma once
|
||||||
|
|
||||||
|
// ggml-backend internal header
|
||||||
|
|
||||||
|
#include "ggml-backend.h"
|
||||||
|
|
||||||
|
#ifdef __cplusplus
|
||||||
|
extern "C" {
|
||||||
|
#endif
|
||||||
|
|
||||||
|
//
|
||||||
|
// Backend buffer
|
||||||
|
//
|
||||||
|
|
||||||
|
// buffer type
|
||||||
|
typedef void * ggml_backend_buffer_type_context_t;
|
||||||
|
|
||||||
|
struct ggml_backend_buffer_type_i {
|
||||||
|
const char * (*GGML_CALL get_name) (ggml_backend_buffer_type_t buft);
|
||||||
|
// allocate a buffer of this type
|
||||||
|
ggml_backend_buffer_t (*GGML_CALL alloc_buffer) (ggml_backend_buffer_type_t buft, size_t size);
|
||||||
|
// tensor alignment
|
||||||
|
size_t (*GGML_CALL get_alignment) (ggml_backend_buffer_type_t buft);
|
||||||
|
// max buffer size that can be allocated
|
||||||
|
size_t (*GGML_CALL get_max_size) (ggml_backend_buffer_type_t buft);
|
||||||
|
// data size needed to allocate the tensor, including padding
|
||||||
|
size_t (*GGML_CALL get_alloc_size) (ggml_backend_buffer_type_t buft, const struct ggml_tensor * tensor);
|
||||||
|
// check if tensor data is in host memory
|
||||||
|
bool (*GGML_CALL is_host) (ggml_backend_buffer_type_t buft);
|
||||||
|
};
|
||||||
|
|
||||||
|
struct ggml_backend_buffer_type {
|
||||||
|
struct ggml_backend_buffer_type_i iface;
|
||||||
|
ggml_backend_buffer_type_context_t context;
|
||||||
|
};
|
||||||
|
|
||||||
|
// buffer
|
||||||
|
typedef void * ggml_backend_buffer_context_t;
|
||||||
|
|
||||||
|
struct ggml_backend_buffer_i {
|
||||||
|
const char * (*GGML_CALL get_name) (ggml_backend_buffer_t buffer);
|
||||||
|
void (*GGML_CALL free_buffer)(ggml_backend_buffer_t buffer);
|
||||||
|
void * (*GGML_CALL get_base) (ggml_backend_buffer_t buffer);
|
||||||
|
void (*GGML_CALL init_tensor)(ggml_backend_buffer_t buffer, struct ggml_tensor * tensor);
|
||||||
|
void (*GGML_CALL set_tensor) (ggml_backend_buffer_t buffer, struct ggml_tensor * tensor, const void * data, size_t offset, size_t size);
|
||||||
|
void (*GGML_CALL get_tensor) (ggml_backend_buffer_t buffer, const struct ggml_tensor * tensor, void * data, size_t offset, size_t size);
|
||||||
|
bool (*GGML_CALL cpy_tensor) (ggml_backend_buffer_t buffer, const struct ggml_tensor * src, struct ggml_tensor * dst); // dst is in the buffer, src may be in any buffer
|
||||||
|
void (*GGML_CALL clear) (ggml_backend_buffer_t buffer, uint8_t value);
|
||||||
|
        void (*GGML_CALL reset)      (ggml_backend_buffer_t buffer); // reset any internal state due to tensor initialization, such as tensor extras
    };

    struct ggml_backend_buffer {
        struct ggml_backend_buffer_i  iface;
        ggml_backend_buffer_type_t    buft;
        ggml_backend_buffer_context_t context;
        size_t size;
        enum ggml_backend_buffer_usage usage;
    };

    GGML_CALL ggml_backend_buffer_t ggml_backend_buffer_init(
                   ggml_backend_buffer_type_t      buft,
            struct ggml_backend_buffer_i           iface,
                   ggml_backend_buffer_context_t   context,
                   size_t                          size);

    // do not use directly, use ggml_backend_tensor_copy instead
    bool ggml_backend_buffer_copy_tensor(const struct ggml_tensor * src, struct ggml_tensor * dst);

    // buffer that contains a collection of buffers
    GGML_CALL ggml_backend_buffer_t ggml_backend_multi_buffer_alloc_buffer(ggml_backend_buffer_t * buffers, size_t n_buffers);
    GGML_CALL bool                  ggml_backend_buffer_is_multi_buffer(ggml_backend_buffer_t buffer);
    GGML_CALL void                  ggml_backend_multi_buffer_set_usage(ggml_backend_buffer_t buffer, enum ggml_backend_buffer_usage usage);

    //
    // Backend
    //

    typedef void * ggml_backend_context_t;

    struct ggml_backend_i {
        const char * (*GGML_CALL get_name)(ggml_backend_t backend);

        void (*GGML_CALL free)(ggml_backend_t backend);

        // buffer allocation
        ggml_backend_buffer_type_t (*GGML_CALL get_default_buffer_type)(ggml_backend_t backend);

        // (optional) asynchronous tensor data access
        void (*GGML_CALL set_tensor_async)(ggml_backend_t backend, struct ggml_tensor * tensor, const void * data, size_t offset, size_t size);
        void (*GGML_CALL get_tensor_async)(ggml_backend_t backend, const struct ggml_tensor * tensor, void * data, size_t offset, size_t size);
        bool (*GGML_CALL cpy_tensor_async)(ggml_backend_t backend_src, ggml_backend_t backend_dst, const struct ggml_tensor * src, struct ggml_tensor * dst);

        // (optional) complete all pending operations
        void (*GGML_CALL synchronize)(ggml_backend_t backend);

        // compute graph with a plan (not used currently)
        // create a new plan for a graph
        ggml_backend_graph_plan_t (*GGML_CALL graph_plan_create) (ggml_backend_t backend, const struct ggml_cgraph * cgraph);
        void (*GGML_CALL graph_plan_free) (ggml_backend_t backend, ggml_backend_graph_plan_t plan);
        // update the plan with a new graph - this should be faster than creating a new plan when the graph has the same topology
        void (*GGML_CALL graph_plan_update) (ggml_backend_t backend, ggml_backend_graph_plan_t plan, const struct ggml_cgraph * cgraph);
        // compute the graph with the plan
        enum ggml_status (*GGML_CALL graph_plan_compute)(ggml_backend_t backend, ggml_backend_graph_plan_t plan);

        // compute graph without a plan (async)
        enum ggml_status (*GGML_CALL graph_compute) (ggml_backend_t backend, struct ggml_cgraph * cgraph);

        // check if the backend can compute an operation
        bool (*GGML_CALL supports_op)(ggml_backend_t backend, const struct ggml_tensor * op);

        // check if the backend can use tensors allocated in a buffer type
        bool (*GGML_CALL supports_buft)(ggml_backend_t backend, ggml_backend_buffer_type_t buft);

        // check if the backend wants to run an operation, even if the weights are allocated in a CPU buffer
        // these should be expensive operations with large batch sizes that may benefit from running on this backend
        // even if the weight has to be copied from the CPU temporarily
        bool (*GGML_CALL offload_op)(ggml_backend_t backend, const struct ggml_tensor * op);

        // (optional) event synchronization
        // create a new event that can record events on this backend instance
        ggml_backend_event_t (*GGML_CALL event_new) (ggml_backend_t backend);
        void (*GGML_CALL event_free) (ggml_backend_event_t event);
        // record an event on the backend instance that created it
        void (*GGML_CALL event_record) (ggml_backend_event_t event);
        // wait for an event on a different backend instance
        void (*GGML_CALL event_wait) (ggml_backend_t backend, ggml_backend_event_t event);
        // block until an event is recorded
        void (*GGML_CALL event_synchronize) (ggml_backend_event_t event);
    };

    struct ggml_backend {
        ggml_guid_t guid;

        struct ggml_backend_i iface;
        ggml_backend_context_t context;
    };

    struct ggml_backend_event {
        ggml_backend_t backend;
        void * context;
    };

    //
    // Backend registry
    //

    typedef ggml_backend_t (*GGML_CALL ggml_backend_init_fn)(const char * params, void * user_data);

    GGML_CALL void ggml_backend_register(const char * name, ggml_backend_init_fn init_fn, ggml_backend_buffer_type_t default_buffer_type, void * user_data);

#ifdef __cplusplus
}
#endif
llama/ggml-backend.c (new file, 2288 lines)
File diff suppressed because it is too large

llama/ggml-backend.h (new file, 266 lines)
@@ -0,0 +1,266 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License
 *
 * Copyright (c) 2023-2024 The ggml authors
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#pragma once

#include "ggml.h"
#include "ggml-alloc.h"

#ifdef __cplusplus
extern "C" {
#endif

    typedef struct ggml_backend_buffer_type * ggml_backend_buffer_type_t;
    typedef struct ggml_backend_buffer * ggml_backend_buffer_t;
    typedef struct ggml_backend_event * ggml_backend_event_t;
    typedef struct ggml_backend * ggml_backend_t;
    typedef void * ggml_backend_graph_plan_t;

    //
    // Backend buffer
    //

    // buffer type
    GGML_API const char * ggml_backend_buft_name (ggml_backend_buffer_type_t buft);
    GGML_API GGML_CALL ggml_backend_buffer_t ggml_backend_buft_alloc_buffer (ggml_backend_buffer_type_t buft, size_t size);
    GGML_API size_t ggml_backend_buft_get_alignment (ggml_backend_buffer_type_t buft);
    GGML_API size_t ggml_backend_buft_get_max_size (ggml_backend_buffer_type_t buft);
    GGML_API GGML_CALL size_t ggml_backend_buft_get_alloc_size (ggml_backend_buffer_type_t buft, struct ggml_tensor * tensor);
    GGML_API bool ggml_backend_buft_is_host (ggml_backend_buffer_type_t buft);

    // buffer
    enum ggml_backend_buffer_usage {
        GGML_BACKEND_BUFFER_USAGE_ANY = 0,
        GGML_BACKEND_BUFFER_USAGE_WEIGHTS = 1,
        GGML_BACKEND_BUFFER_USAGE_COMPUTE = 2,
    };

    GGML_API const char * ggml_backend_buffer_name (ggml_backend_buffer_t buffer);
    GGML_API void ggml_backend_buffer_free (ggml_backend_buffer_t buffer);
    GGML_API void * ggml_backend_buffer_get_base (ggml_backend_buffer_t buffer);
    GGML_API size_t ggml_backend_buffer_get_size (ggml_backend_buffer_t buffer);
    GGML_API GGML_CALL void ggml_backend_buffer_init_tensor (ggml_backend_buffer_t buffer, struct ggml_tensor * tensor);
    GGML_API size_t ggml_backend_buffer_get_alignment (ggml_backend_buffer_t buffer);
    GGML_API size_t ggml_backend_buffer_get_max_size (ggml_backend_buffer_t buffer);
    GGML_API size_t ggml_backend_buffer_get_alloc_size(ggml_backend_buffer_t buffer, struct ggml_tensor * tensor);
    GGML_API void ggml_backend_buffer_clear (ggml_backend_buffer_t buffer, uint8_t value);
    GGML_API bool ggml_backend_buffer_is_host (ggml_backend_buffer_t buffer);
    GGML_API void ggml_backend_buffer_set_usage (ggml_backend_buffer_t buffer, enum ggml_backend_buffer_usage usage);
    GGML_API enum ggml_backend_buffer_usage ggml_backend_buffer_get_usage (ggml_backend_buffer_t buffer);
    GGML_API ggml_backend_buffer_type_t ggml_backend_buffer_get_type (ggml_backend_buffer_t buffer);
    GGML_API void ggml_backend_buffer_reset (ggml_backend_buffer_t buffer);

    //
    // Backend
    //

    GGML_API ggml_guid_t ggml_backend_guid(ggml_backend_t backend);
    GGML_API const char * ggml_backend_name(ggml_backend_t backend);
    GGML_API void ggml_backend_free(ggml_backend_t backend);

    GGML_API ggml_backend_buffer_type_t ggml_backend_get_default_buffer_type(ggml_backend_t backend);
    GGML_API ggml_backend_buffer_t ggml_backend_alloc_buffer(ggml_backend_t backend, size_t size);
    GGML_API size_t ggml_backend_get_alignment(ggml_backend_t backend);
    GGML_API size_t ggml_backend_get_max_size(ggml_backend_t backend);

    GGML_API void ggml_backend_tensor_set_async(ggml_backend_t backend, struct ggml_tensor * tensor, const void * data, size_t offset, size_t size);
    GGML_API void ggml_backend_tensor_get_async(ggml_backend_t backend, const struct ggml_tensor * tensor, void * data, size_t offset, size_t size);

    // "offset" refers to the offset of the tensor data for setting/getting data
    GGML_API GGML_CALL void ggml_backend_tensor_set( struct ggml_tensor * tensor, const void * data, size_t offset, size_t size);
    GGML_API GGML_CALL void ggml_backend_tensor_get(const struct ggml_tensor * tensor, void * data, size_t offset, size_t size);

    GGML_API void ggml_backend_synchronize(ggml_backend_t backend);

    GGML_API ggml_backend_graph_plan_t ggml_backend_graph_plan_create(ggml_backend_t backend, struct ggml_cgraph * cgraph);
    GGML_API void ggml_backend_graph_plan_free (ggml_backend_t backend, ggml_backend_graph_plan_t plan);

    GGML_API enum ggml_status ggml_backend_graph_plan_compute (ggml_backend_t backend, ggml_backend_graph_plan_t plan);
    GGML_API enum ggml_status ggml_backend_graph_compute (ggml_backend_t backend, struct ggml_cgraph * cgraph);
    GGML_API enum ggml_status ggml_backend_graph_compute_async(ggml_backend_t backend, struct ggml_cgraph * cgraph);
    GGML_API bool ggml_backend_supports_op(ggml_backend_t backend, const struct ggml_tensor * op);
    GGML_API bool ggml_backend_supports_buft(ggml_backend_t backend, ggml_backend_buffer_type_t buft);
    GGML_API bool ggml_backend_offload_op(ggml_backend_t backend, const struct ggml_tensor * op);

    // tensor copy between different backends
    GGML_API void ggml_backend_tensor_copy(struct ggml_tensor * src, struct ggml_tensor * dst);

    // asynchronous copy
    // the copy is performed after all the currently queued operations in backend_src
    // backend_dst will wait for the copy to complete before performing other operations
    // automatic fallback to sync copy if async is not supported
    GGML_API void ggml_backend_tensor_copy_async(ggml_backend_t backend_src, ggml_backend_t backend_dst, struct ggml_tensor * src, struct ggml_tensor * dst);

    // events
    GGML_API ggml_backend_event_t ggml_backend_event_new (ggml_backend_t backend);
    GGML_API void ggml_backend_event_free (ggml_backend_event_t event);
    GGML_API void ggml_backend_event_record (ggml_backend_event_t event);
    GGML_API void ggml_backend_event_synchronize(ggml_backend_event_t event);
    GGML_API void ggml_backend_event_wait (ggml_backend_t backend, ggml_backend_event_t event);

    //
    // CPU backend
    //

    GGML_API ggml_backend_t ggml_backend_cpu_init(void);

    GGML_API GGML_CALL bool ggml_backend_is_cpu (ggml_backend_t backend);
    GGML_API void ggml_backend_cpu_set_n_threads (ggml_backend_t backend_cpu, int n_threads);
    GGML_API void ggml_backend_cpu_set_threadpool (ggml_backend_t backend_cpu, ggml_threadpool_t threadpool);
    GGML_API void ggml_backend_cpu_set_abort_callback(ggml_backend_t backend_cpu, ggml_abort_callback abort_callback, void * abort_callback_data);

    // Create a backend buffer from an existing pointer
    GGML_API GGML_CALL ggml_backend_buffer_t ggml_backend_cpu_buffer_from_ptr(void * ptr, size_t size);

    GGML_API GGML_CALL ggml_backend_buffer_type_t ggml_backend_cpu_buffer_type(void);

#ifdef GGML_USE_CPU_HBM
    GGML_API ggml_backend_buffer_type_t ggml_backend_cpu_hbm_buffer_type(void);
#endif

    //
    // Backend registry
    //

    // The backend registry is a registry of all the available backends, and allows initializing backends in a generic way

    GGML_API size_t ggml_backend_reg_get_count(void);
    GGML_API size_t ggml_backend_reg_find_by_name(const char * name);
    GGML_API ggml_backend_t ggml_backend_reg_init_backend_from_str(const char * backend_str); // str is backend_name:params (params is optional)
    GGML_API const char * ggml_backend_reg_get_name(size_t i);
    GGML_API ggml_backend_t ggml_backend_reg_init_backend(size_t i, const char * params); // params is backend-specific
    GGML_API ggml_backend_buffer_type_t ggml_backend_reg_get_default_buffer_type(size_t i);
    GGML_API ggml_backend_buffer_t ggml_backend_reg_alloc_buffer(size_t i, size_t size);

    //
    // Backend scheduler
    //

    // The backend scheduler allows for multiple backends to be used together
    // Handles compute buffer allocation, assignment of tensors to backends, and copying of tensors between backends
    // The backends are selected based on:
    // - the backend that supports the operation
    // - the location of the pre-allocated tensors (e.g. the weights)
    /*
      Example usage:

        // operations that use tensors allocated in a buffer with USAGE_WEIGHTS will be assigned
        // preferably to run on the same backend as the buffer
        ggml_backend_buffer_set_usage(buf_weights, GGML_BACKEND_BUFFER_USAGE_WEIGHTS);

        sched = ggml_backend_sched_new({backend_gpu, backend_gpu2, backend_cpu}, NULL, num_backends, GGML_DEFAULT_GRAPH_SIZE, false);

        // initialize buffers from a max size graph (optional)
        reserve_graph = build_graph(sched, max_batch_size);

        // manually assign nodes to a backend (optional, should not be needed in most cases)
        struct ggml_tensor * node = ggml_mul_mat(ctx, ...);
        ggml_backend_sched_set_tensor_backend(sched, node, backend_gpu);

        ggml_backend_sched_reserve(sched, reserve_graph);

        // compute
        graph = build_graph(sched);
        ggml_backend_sched_graph_compute(sched, graph);

        // if there are graph inputs:
        ggml_backend_sched_reset(sched);
        ggml_backend_sched_alloc_graph(sched, graph);
        ggml_backend_tensor_set(input_tensor, ...);
        ggml_backend_sched_graph_compute(sched, graph);
    }
    */

    struct ggml_backend_sched;
    typedef struct ggml_backend_sched * ggml_backend_sched_t;

    // when ask == true, the scheduler wants to know if the user wants to observe this node
    // this allows the scheduler to batch nodes together in order to evaluate them in a single call
    //
    // when ask == false, the scheduler is passing the node tensor to the user for observation
    // if the user returns false, the scheduler will cancel the graph compute
    //
    typedef bool (*ggml_backend_sched_eval_callback)(struct ggml_tensor * t, bool ask, void * user_data);

    // Initialize a backend scheduler
    GGML_API ggml_backend_sched_t ggml_backend_sched_new(ggml_backend_t * backends, ggml_backend_buffer_type_t * bufts, int n_backends, size_t graph_size, bool parallel);
    GGML_API void ggml_backend_sched_free(ggml_backend_sched_t sched);

    // Initialize backend buffers from a measure graph
    GGML_API bool ggml_backend_sched_reserve(ggml_backend_sched_t sched, struct ggml_cgraph * measure_graph);

    GGML_API int ggml_backend_sched_get_n_backends(ggml_backend_sched_t sched);
    GGML_API ggml_backend_t ggml_backend_sched_get_backend(ggml_backend_sched_t sched, int i);

    // Get the number of splits of the last graph
    GGML_API int ggml_backend_sched_get_n_splits(ggml_backend_sched_t sched);
    GGML_API int ggml_backend_sched_get_n_copies(ggml_backend_sched_t sched);

    GGML_API size_t ggml_backend_sched_get_buffer_size(ggml_backend_sched_t sched, ggml_backend_t backend);

    GGML_API void ggml_backend_sched_set_tensor_backend(ggml_backend_sched_t sched, struct ggml_tensor * node, ggml_backend_t backend);
    GGML_API ggml_backend_t ggml_backend_sched_get_tensor_backend(ggml_backend_sched_t sched, struct ggml_tensor * node);

    // Allocate and compute graph on the backend scheduler
    GGML_API bool ggml_backend_sched_alloc_graph(ggml_backend_sched_t sched, struct ggml_cgraph * graph);
    GGML_API enum ggml_status ggml_backend_sched_graph_compute(ggml_backend_sched_t sched, struct ggml_cgraph * graph);
    GGML_API enum ggml_status ggml_backend_sched_graph_compute_async(ggml_backend_sched_t sched, struct ggml_cgraph * graph);
    GGML_API void ggml_backend_sched_synchronize(ggml_backend_sched_t sched);

    // Reset all assignments and allocators - must be called before changing the node backends
    GGML_API void ggml_backend_sched_reset(ggml_backend_sched_t sched);

    // Set a callback to be called for each resulting node during graph compute
    GGML_API void ggml_backend_sched_set_eval_callback(ggml_backend_sched_t sched, ggml_backend_sched_eval_callback callback, void * user_data);

    //
    // Utils
    //

    struct ggml_backend_graph_copy {
        ggml_backend_buffer_t buffer;
        struct ggml_context * ctx_allocated;
        struct ggml_context * ctx_unallocated;
        struct ggml_cgraph * graph;
    };

    // Copy a graph to a different backend
    GGML_API struct ggml_backend_graph_copy ggml_backend_graph_copy(ggml_backend_t backend, struct ggml_cgraph * graph);
    GGML_API void ggml_backend_graph_copy_free(struct ggml_backend_graph_copy copy);

    typedef bool (*GGML_CALL ggml_backend_eval_callback)(int node_index, struct ggml_tensor * t1, struct ggml_tensor * t2, void * user_data);

    // Compare the output of two backends
    GGML_API bool ggml_backend_compare_graph_backend(ggml_backend_t backend1, ggml_backend_t backend2, struct ggml_cgraph * graph, ggml_backend_eval_callback callback, void * user_data);

    // Tensor initialization
    GGML_API void ggml_backend_tensor_alloc(ggml_backend_buffer_t buffer, struct ggml_tensor * tensor, void * addr);
    GGML_API void ggml_backend_view_init(struct ggml_tensor * tensor);


#ifdef __cplusplus
}
#endif
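For orientation, here is a minimal sketch (not part of the vendored sources or this PR) of how the registry and CPU backend API declared above can be called from C; the thread count of 4 is an arbitrary example:

// sketch.c: assumes a build that links ggml with the CPU backend
#include "ggml-backend.h"
#include <stdio.h>

int main(void) {
    // enumerate whatever backends were registered at build time
    for (size_t i = 0; i < ggml_backend_reg_get_count(); i++) {
        printf("backend %zu: %s\n", i, ggml_backend_reg_get_name(i));
    }

    // initialize the CPU backend directly and configure it
    ggml_backend_t backend = ggml_backend_cpu_init();
    ggml_backend_cpu_set_n_threads(backend, 4); // 4 threads is an arbitrary choice

    printf("using %s, buffer alignment %zu\n",
           ggml_backend_name(backend), ggml_backend_get_alignment(backend));

    ggml_backend_free(backend);
    return 0;
}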
llama/ggml-blas.cpp (new file, 397 lines)
@@ -0,0 +1,397 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License, Copyright (c) 2023-2024 The ggml authors
 * (full license text identical to the header reproduced in llama/ggml-backend.h above)
 */

#ifdef GGML_USE_BLAS

#include "ggml-blas.h"
#include "ggml-backend-impl.h"

#include <future>
#include <vector>

#if defined(GGML_USE_ACCELERATE)
#   include <Accelerate/Accelerate.h>
#elif defined(GGML_BLAS_USE_MKL)
#   include <mkl.h>
#elif defined(GGML_BLAS_USE_BLIS)
#   include <blis.h>
#elif defined(GGML_BLAS_USE_NVPL)
#   include <nvpl_blas.h>
#else
#   include <cblas.h>
#endif

struct ggml_backend_blas_context {
    int n_threads = GGML_DEFAULT_N_THREADS;
    std::unique_ptr<char[]> work_data;
    size_t work_size = 0;
#ifndef GGML_USE_OPENMP
    std::vector<std::future<void>> tasks;
#endif
};

// helper function to determine if it is better to use BLAS or not
// for large matrices, BLAS is faster
static bool ggml_backend_blas_use_blas(const struct ggml_tensor * dst) {
    const struct ggml_tensor * src0 = dst->src[0];
    const struct ggml_tensor * src1 = dst->src[1];

    const int64_t ne10 = src1->ne[0];

    const int64_t ne0 = dst->ne[0];
    const int64_t ne1 = dst->ne[1];

    // TODO: find the optimal values for these
    if (ggml_is_contiguous(src0) &&
        ggml_is_contiguous(src1) &&
        src1->type == GGML_TYPE_F32 &&
        (ne0 >= 32 && ne1 >= 32 && ne10 >= 32)) {

        /*printf("BLAS: %d %d %d %d %d\n", ne0, ne1, ne10, ne00, ne01);*/
        return true;
    }

    return false;
}

static void ggml_backend_blas_mul_mat(ggml_backend_blas_context * ctx, struct ggml_tensor * dst) {
    const struct ggml_tensor * src0 = dst->src[0];
    const struct ggml_tensor * src1 = dst->src[1];

    GGML_TENSOR_BINARY_OP_LOCALS

    const enum ggml_type type = src0->type;

    GGML_ASSERT(ne0 == ne01);
    GGML_ASSERT(ne1 == ne11);
    GGML_ASSERT(ne2 == ne12);
    GGML_ASSERT(ne3 == ne13);

    // we don't support permuted src0 or src1
    GGML_ASSERT(nb00 == ggml_type_size(type));
    GGML_ASSERT(nb10 == ggml_type_size(src1->type));

    // dst cannot be transposed or permuted
    GGML_ASSERT(nb0 == sizeof(float));
    GGML_ASSERT(nb0 <= nb1);
    GGML_ASSERT(nb1 <= nb2);
    GGML_ASSERT(nb2 <= nb3);

    // broadcast factors
    const int64_t r2 = ne12/ne02;
    const int64_t r3 = ne13/ne03;

    const int64_t ne_plane = ne01*ne00;
    const size_t desired_wsize = type == GGML_TYPE_F32 ? 0 : ne03*ne02*ne_plane*sizeof(float);

    if (ctx->work_size < desired_wsize) {
        ctx->work_data.reset(new char[desired_wsize]);
        ctx->work_size = desired_wsize;
    }
    void * wdata = ctx->work_data.get();

    // convert src0 to float
    if (type != GGML_TYPE_F32) {
        ggml_type_traits_t type_traits = ggml_internal_get_type_traits(type);
        ggml_to_float_t const to_float = type_traits.to_float;

        for (int64_t i03 = 0; i03 < ne03; i03++) {
            for (int64_t i02 = 0; i02 < ne02; i02++) {
                const void * x = (char *) src0->data + i02*nb02 + i03*nb03;
                float * const wplane = (float *) wdata + i02*ne_plane + i03*ne02*ne_plane;

                const int min_cols_per_thread = 4096;
                const int min_rows_per_thread = std::max((int)(min_cols_per_thread/ne00), 1);
                const int n_threads = std::max(std::min(ctx->n_threads, (int)(ne01/min_rows_per_thread)), 1);

#ifdef GGML_USE_OPENMP
                #pragma omp parallel for num_threads(n_threads)
                for (int64_t i01 = 0; i01 < ne01; i01++) {
                    to_float((const char *) x + i01*nb01, wplane + i01*ne00, ne00);
                }
#else
                for (int i = 1; i < n_threads; i++) {
                    const int64_t start = i*ne01/n_threads;
                    const int64_t end = (i + 1)*ne01/n_threads;
                    if (start < end) {
                        ctx->tasks.push_back(std::async(std::launch::async, [=]() {
                            for (int64_t i01 = start; i01 < end; i01++) {
                                to_float((const char *) x + i01*nb01, wplane + i01*ne00, ne00);
                            }
                        }));
                    }
                }
                {
                    // reuse the current thread for the first task
                    const int64_t start = 0;
                    const int64_t end = ne01/n_threads;
                    for (int64_t i01 = start; i01 < end; i01++) {
                        to_float((const char *) x + i01*nb01, wplane + i01*ne00, ne00);
                    }
                }
#endif
            }
        }

#ifndef GGML_USE_OPENMP
        // wait for all tasks to finish
        for (auto & task : ctx->tasks) {
            task.get();
        }
        ctx->tasks.clear();
#endif
    }

#if defined(OPENBLAS_VERSION)
    openblas_set_num_threads(ctx->n_threads);
#endif

#if defined(GGML_BLAS_USE_BLIS)
    bli_thread_set_num_threads(ctx->n_threads);
#endif

#if defined(GGML_BLAS_USE_NVPL)
    nvpl_blas_set_num_threads(ctx->n_threads);
#endif

    for (int64_t i13 = 0; i13 < ne13; i13++) {
        for (int64_t i12 = 0; i12 < ne12; i12++) {
            const int64_t i03 = i13/r3;
            const int64_t i02 = i12/r2;

            const float * x = (float *) ((char *) src0->data + i02*nb02 + i03*nb03);
            const float * y = (float *) ((char *) src1->data + i12*nb12 + i13*nb13);
            float * d = (float *) ((char *) dst->data + i12*nb2 + i13*nb3);

            if (type != GGML_TYPE_F32) {
                x = (float *) wdata + i02*ne_plane + i03*ne02*ne_plane;
            }

            cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasTrans,
                        ne1, ne01, ne10,
                        1.0f, y, ne10,
                              x, ne00,
                        0.0f, d, ne01);
        }
    }
}

static void ggml_backend_blas_out_prod(ggml_backend_blas_context * ctx, struct ggml_tensor * dst) {
    const struct ggml_tensor * src0 = dst->src[0];
    const struct ggml_tensor * src1 = dst->src[1];

    GGML_TENSOR_BINARY_OP_LOCALS

    GGML_ASSERT(ne0 == ne00);
    GGML_ASSERT(ne1 == ne10);
    GGML_ASSERT(ne2 == ne02);
    GGML_ASSERT(ne02 == ne12);
    GGML_ASSERT(ne3 == ne13);
    GGML_ASSERT(ne03 == ne13);

    // we don't support permuted src0 or src1
    GGML_ASSERT(nb00 == sizeof(float));

    // dst cannot be transposed or permuted
    GGML_ASSERT(nb0 == sizeof(float));
    // GGML_ASSERT(nb0 <= nb1);
    // GGML_ASSERT(nb1 <= nb2);
    // GGML_ASSERT(nb2 <= nb3);

    // Arguments to ggml_compute_forward_out_prod (expressed as major,minor)
    // src0: (k,n)
    // src1: (k,m)
    // dst:  (m,n)
    //
    // Arguments to sgemm (see https://github.com/Reference-LAPACK/lapack/blob/master/BLAS/SRC/sgemm.f)
    // Also expressed as (major,minor)
    // a: (m,k): so src1 transposed
    // b: (k,n): so src0
    // c: (m,n)
    //
    // However, if ggml_is_transposed(src1) is true, then
    // src1->data already contains a transposed version, so sgemm mustn't
    // transpose it further.

    int n = src0->ne[0];
    int k = src0->ne[1];
    int m = src1->ne[0];

    CBLAS_TRANSPOSE transposeA;
    int lda;

    if (!ggml_is_transposed(src1)) {
        transposeA = CblasTrans;
        lda = m;
    } else {
        transposeA = CblasNoTrans;
        lda = k;
    }

    float * a = (float *) ((char *) src1->data);
    float * b = (float *) ((char *) src0->data);
    float * c = (float *) ((char *) dst->data);

    cblas_sgemm(CblasRowMajor, transposeA, CblasNoTrans, m, n, k, 1.0, a, lda, b, n, 0.0, c, n);

    GGML_UNUSED(ctx);
}

// backend interface

GGML_CALL static const char * ggml_backend_blas_name(ggml_backend_t backend) {
    return "BLAS";

    GGML_UNUSED(backend);
}

GGML_CALL static void ggml_backend_blas_free(ggml_backend_t backend) {
    ggml_backend_blas_context * ctx = (ggml_backend_blas_context *)backend->context;
    delete ctx;
    delete backend;
}

GGML_CALL static ggml_backend_buffer_type_t ggml_backend_blas_get_default_buffer_type(ggml_backend_t backend) {
    return ggml_backend_cpu_buffer_type();

    GGML_UNUSED(backend);
}

GGML_CALL static enum ggml_status ggml_backend_blas_graph_compute(ggml_backend_t backend, struct ggml_cgraph * cgraph) {
    ggml_backend_blas_context * ctx = (ggml_backend_blas_context *)backend->context;

    for (int i = 0; i < cgraph->n_nodes; i++) {
        struct ggml_tensor * node = cgraph->nodes[i];

        switch (node->op) {
            case GGML_OP_MUL_MAT:
                ggml_backend_blas_mul_mat(ctx, node);
                break;

            case GGML_OP_OUT_PROD:
                ggml_backend_blas_out_prod(ctx, node);
                break;

            case GGML_OP_NONE:
            case GGML_OP_RESHAPE:
            case GGML_OP_VIEW:
            case GGML_OP_PERMUTE:
            case GGML_OP_TRANSPOSE:
                break;

            default:
                GGML_ABORT("%s: unsupported op %s\n", __func__, ggml_op_desc(node));
        }
    }

    return GGML_STATUS_SUCCESS;

    GGML_UNUSED(backend);
}

GGML_CALL static bool ggml_backend_blas_supports_op(ggml_backend_t backend, const struct ggml_tensor * op) {
    const struct ggml_tensor * src0 = op->src[0];
    const struct ggml_tensor * src1 = op->src[1];

    return (op->op == GGML_OP_MUL_MAT && ggml_backend_blas_use_blas(op)) ||
           (op->op == GGML_OP_OUT_PROD && op->src[0]->type == GGML_TYPE_F32 &&
            op->src[1]->type == GGML_TYPE_F32 &&
            ggml_is_matrix(src0) &&
            ggml_is_matrix(src1) &&
            ggml_is_contiguous(src0) &&
            (ggml_is_contiguous(src1) || ggml_is_transposed(src1)));

    GGML_UNUSED(backend);
}

GGML_CALL static bool ggml_backend_blas_supports_buft(ggml_backend_t backend, ggml_backend_buffer_type_t buft) {
    return ggml_backend_buft_is_host(buft);

    GGML_UNUSED(backend);
}

static struct ggml_backend_i blas_backend_i = {
    /* .get_name                = */ ggml_backend_blas_name,
    /* .free                    = */ ggml_backend_blas_free,
    /* .get_default_buffer_type = */ ggml_backend_blas_get_default_buffer_type,
    /* .set_tensor_async        = */ NULL,
    /* .get_tensor_async        = */ NULL,
    /* .cpy_tensor_async        = */ NULL,
    /* .synchronize             = */ NULL,
    /* .graph_plan_create       = */ NULL,
    /* .graph_plan_free         = */ NULL,
    /* .graph_plan_update       = */ NULL,
    /* .graph_plan_compute      = */ NULL,
    /* .graph_compute           = */ ggml_backend_blas_graph_compute,
    /* .supports_op             = */ ggml_backend_blas_supports_op,
    /* .supports_buft           = */ ggml_backend_blas_supports_buft,
    /* .offload_op              = */ NULL,
    /* .event_new               = */ NULL,
    /* .event_free              = */ NULL,
    /* .event_record            = */ NULL,
    /* .event_wait              = */ NULL,
    /* .event_synchronize       = */ NULL,
};

static ggml_guid_t ggml_backend_blas_guid(void) {
    static ggml_guid guid = { 0x12, 0xa8, 0xae, 0xf4, 0xc0, 0x1e, 0x61, 0x97, 0x8f, 0xeb, 0x33, 0x04, 0xa1, 0x33, 0x51, 0x2d };
    return &guid;
}

ggml_backend_t ggml_backend_blas_init(void) {
    ggml_backend_blas_context * ctx = new ggml_backend_blas_context;

    ggml_backend_t backend = new ggml_backend {
        /* .guid      = */ ggml_backend_blas_guid(),
        /* .interface = */ blas_backend_i,
        /* .context   = */ ctx,
    };

#if !defined(NDEBUG) && defined(OPENBLAS_VERSION) && defined(GGML_USE_OPENMP)
    if (openblas_get_parallel() != OPENBLAS_OPENMP) {
        fprintf(stderr, "%s: warning: ggml is using OpenMP, but OpenBLAS was compiled without OpenMP support\n", __func__);
    }
#endif

#if !defined(NDEBUG) && defined(BLIS_ENABLE_CBLAS) && defined(GGML_USE_OPENMP) && !defined(BLIS_ENABLE_OPENMP)
    fprintf(stderr, "%s: warning: ggml is using OpenMP, but BLIS was compiled without OpenMP support\n", __func__);
#endif

    return backend;
}

GGML_CALL bool ggml_backend_is_blas(ggml_backend_t backend) {
    return backend != NULL && ggml_guid_matches(backend->guid, ggml_backend_blas_guid());
}

void ggml_backend_blas_set_n_threads(ggml_backend_t backend_blas, int n_threads) {
    GGML_ASSERT(ggml_backend_is_blas(backend_blas));

    ggml_backend_blas_context * ctx = (ggml_backend_blas_context *)backend_blas->context;
    ctx->n_threads = n_threads;
}

#endif
llama/ggml-blas.h (new file, 49 lines)
@@ -0,0 +1,49 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License, Copyright (c) 2023-2024 The ggml authors
 * (full license text identical to the header reproduced in llama/ggml-backend.h above)
 */

#pragma once

#include "ggml.h"
#include "ggml-backend.h"


#ifdef __cplusplus
extern "C" {
#endif

// backend API
GGML_API GGML_CALL ggml_backend_t ggml_backend_blas_init(void);

GGML_API GGML_CALL bool ggml_backend_is_blas(ggml_backend_t backend);

// number of threads used for conversion to float
// for openblas and blis, this will also set the number of threads used for blas operations
GGML_API GGML_CALL void ggml_backend_blas_set_n_threads(ggml_backend_t backend_blas, int n_threads);


#ifdef __cplusplus
}
#endif
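A small sketch (not part of the vendored sources) of how the BLAS backend API declared above can be used; it only does something useful in builds compiled with GGML_USE_BLAS, and the helper name init_blas_backend is made up for illustration:

#include "ggml-blas.h"
#include <stddef.h>

// hypothetical helper: initialize the BLAS backend and set its thread count
ggml_backend_t init_blas_backend(int n_threads) {
    ggml_backend_t backend = ggml_backend_blas_init();
    if (backend != NULL && ggml_backend_is_blas(backend)) {
        // per the comment above, this also sets the OpenBLAS/BLIS thread count
        ggml_backend_blas_set_n_threads(backend, n_threads);
    }
    return backend;
}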
llama/ggml-common.h (new file, 1859 lines)
File diff suppressed because it is too large

llama/ggml-cuda.cu (new file, 3147 lines)
File diff suppressed because it is too large

llama/ggml-cuda.h (new file, 75 lines)
@@ -0,0 +1,75 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License, Copyright (c) 2023-2024 The ggml authors
 * (full license text identical to the header reproduced in llama/ggml-backend.h above)
 */

#pragma once

#include "ggml.h"
#include "ggml-backend.h"

#ifdef GGML_USE_HIPBLAS
#define GGML_CUDA_NAME "ROCm"
#define GGML_CUBLAS_NAME "hipBLAS"
#elif defined(GGML_USE_MUSA)
#define GGML_CUDA_NAME "MUSA"
#define GGML_CUBLAS_NAME "muBLAS"
#else
#define GGML_CUDA_NAME "CUDA"
#define GGML_CUBLAS_NAME "cuBLAS"
#endif

#ifdef __cplusplus
extern "C" {
#endif

#define GGML_CUDA_MAX_DEVICES 16

// backend API
GGML_API GGML_CALL ggml_backend_t ggml_backend_cuda_init(int device);

GGML_API GGML_CALL bool ggml_backend_is_cuda(ggml_backend_t backend);

// device buffer
GGML_API GGML_CALL ggml_backend_buffer_type_t ggml_backend_cuda_buffer_type(int device);

// split tensor buffer that splits matrices by rows across multiple devices
GGML_API GGML_CALL ggml_backend_buffer_type_t ggml_backend_cuda_split_buffer_type(const float * tensor_split);

// pinned host buffer for use with the CPU backend for faster copies between CPU and GPU
GGML_API GGML_CALL ggml_backend_buffer_type_t ggml_backend_cuda_host_buffer_type(void);

GGML_API GGML_CALL int ggml_backend_cuda_reg_devices();

GGML_API GGML_CALL int ggml_backend_cuda_get_device_count(void);
GGML_API GGML_CALL void ggml_backend_cuda_get_device_description(int device, char * description, size_t description_size);
GGML_API GGML_CALL void ggml_backend_cuda_get_device_memory(int device, size_t * free, size_t * total);

GGML_API GGML_CALL bool ggml_backend_cuda_register_host_buffer(void * buffer, size_t size);
GGML_API GGML_CALL void ggml_backend_cuda_unregister_host_buffer(void * buffer);

GGML_API void ggml_backend_cuda_log_set_callback(ggml_log_callback log_callback, void * user_data);
#ifdef __cplusplus
}
#endif
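Again as a sketch (not from the vendored sources), the CUDA backend API above can be exercised like this; device 0 and the helper name init_cuda_backend are arbitrary examples:

#include "ggml-cuda.h"
#include <stdio.h>

// hypothetical helper: report memory on device 0 and create a backend for it
ggml_backend_t init_cuda_backend(void) {
    if (ggml_backend_cuda_get_device_count() == 0) {
        return NULL; // no usable CUDA (or ROCm/MUSA) device
    }

    size_t free_mem = 0, total_mem = 0;
    ggml_backend_cuda_get_device_memory(0, &free_mem, &total_mem);
    printf("device 0: %zu of %zu bytes free\n", free_mem, total_mem);

    return ggml_backend_cuda_init(0);
}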
llama/ggml-cuda/acc.cu (new file, 73 lines)
@@ -0,0 +1,73 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License, Copyright (c) 2023-2024 The ggml authors
 * (full license text identical to the header reproduced in llama/ggml-backend.h above)
 */

#include "acc.cuh"

static __global__ void acc_f32(const float * x, const float * y, float * dst, const int ne,
                               const int ne10, const int ne11, const int ne12,
                               const int nb1, const int nb2, int offset) {
    const int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i >= ne) {
        return;
    }
    int src1_idx = i - offset;
    int oz = src1_idx / nb2;
    int oy = (src1_idx - (oz * nb2)) / nb1;
    int ox = src1_idx % nb1;
    if (src1_idx >= 0 && ox < ne10 && oy < ne11 && oz < ne12) {
        dst[i] = x[i] + y[ox + oy * ne10 + oz * ne10 * ne11];
    } else {
        dst[i] = x[i];
    }
}

static void acc_f32_cuda(const float * x, const float * y, float * dst, const int n_elements,
                         const int ne10, const int ne11, const int ne12,
                         const int nb1, const int nb2, const int offset, cudaStream_t stream) {
    int num_blocks = (n_elements + CUDA_ACC_BLOCK_SIZE - 1) / CUDA_ACC_BLOCK_SIZE;
    acc_f32<<<num_blocks, CUDA_ACC_BLOCK_SIZE, 0, stream>>>(x, y, dst, n_elements, ne10, ne11, ne12, nb1, nb2, offset);
}

void ggml_cuda_op_acc(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * src0 = dst->src[0];
    const ggml_tensor * src1 = dst->src[1];
    const float * src0_d = (const float *)src0->data;
    const float * src1_d = (const float *)src1->data;
    float * dst_d = (float *)dst->data;
    cudaStream_t stream = ctx.stream();

    GGML_ASSERT(src0->type == GGML_TYPE_F32);
    GGML_ASSERT(src1->type == GGML_TYPE_F32);
    GGML_ASSERT( dst->type == GGML_TYPE_F32);
    GGML_ASSERT(dst->ne[3] == 1); // just 3D tensors supported

    int nb1 = dst->op_params[0] / 4; // 4 bytes of float32
    int nb2 = dst->op_params[1] / 4; // 4 bytes of float32
    // int nb3 = dst->op_params[2] / 4; // 4 bytes of float32 - unused
    int offset = dst->op_params[3] / 4; // offset in bytes

    acc_f32_cuda(src0_d, src1_d, dst_d, ggml_nelements(dst), src1->ne[0], src1->ne[1], src1->ne[2], nb1, nb2, offset, stream);
}
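To make the index math in acc_f32 easier to follow, here is an equivalent host-side loop (an illustrative sketch, not part of the file); nb1, nb2 and offset are element counts, i.e. the op_params values already divided by 4 as above:

// hypothetical reference implementation mirroring acc_f32, one element of dst per iteration
static void acc_f32_ref(const float * x, const float * y, float * dst, int ne,
                        int ne10, int ne11, int ne12, int nb1, int nb2, int offset) {
    for (int i = 0; i < ne; i++) {
        int src1_idx = i - offset;            // position of dst element i inside the accumulated view
        int oz = src1_idx / nb2;              // which plane of y
        int oy = (src1_idx - oz * nb2) / nb1; // which row of y
        int ox = src1_idx % nb1;              // which column of y
        if (src1_idx >= 0 && ox < ne10 && oy < ne11 && oz < ne12) {
            dst[i] = x[i] + y[ox + oy * ne10 + oz * ne10 * ne11];
        } else {
            dst[i] = x[i]; // outside the view: copy x through unchanged
        }
    }
}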
llama/ggml-cuda/acc.cuh (new file, 31 lines)
@@ -0,0 +1,31 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License, Copyright (c) 2023-2024 The ggml authors
 * (full license text identical to the header reproduced in llama/ggml-backend.h above)
 */

#include "common.cuh"

#define CUDA_ACC_BLOCK_SIZE 256

void ggml_cuda_op_acc(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
llama/ggml-cuda/arange.cu (new file, 60 lines)
@@ -0,0 +1,60 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License, Copyright (c) 2023-2024 The ggml authors
 * (full license text identical to the header reproduced in llama/ggml-backend.h above)
 */

#include "arange.cuh"

static __global__ void arange_f32(float * dst, const int ne0, const float start, const float step) {
    // blockIDx.x: idx of ne0 / BLOCK_SIZE
    int nidx = threadIdx.x + blockIdx.x * blockDim.x;
    if (nidx >= ne0) {
        return;
    }
    dst[nidx] = start + step * nidx;
}

static void arange_f32_cuda(float * dst, const int ne0, const float start, const float step, cudaStream_t stream) {
    int num_blocks = (ne0 + CUDA_ARANGE_BLOCK_SIZE - 1) / CUDA_ARANGE_BLOCK_SIZE;
    arange_f32<<<num_blocks, CUDA_ARANGE_BLOCK_SIZE, 0, stream>>>(dst, ne0, start, step);
}

void ggml_cuda_op_arange(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    float * dst_d = (float *)dst->data;
    cudaStream_t stream = ctx.stream();

    GGML_ASSERT(dst->type == GGML_TYPE_F32);

    float start;
    float stop;
    float step;
    memcpy(&start, (float *)dst->op_params + 0, sizeof(float));
    memcpy(&stop,  (float *)dst->op_params + 1, sizeof(float));
    memcpy(&step,  (float *)dst->op_params + 2, sizeof(float));

    int64_t steps = (int64_t)ceil((stop - start) / step);
    GGML_ASSERT(ggml_nelements(dst) == steps);

    arange_f32_cuda(dst_d, dst->ne[0], start, step, stream);
}
llama/ggml-cuda/arange.cuh (new file, 31 lines)
@@ -0,0 +1,31 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License, Copyright (c) 2023-2024 The ggml authors
 * (full license text identical to the header reproduced in llama/ggml-backend.h above)
 */

#include "common.cuh"

#define CUDA_ARANGE_BLOCK_SIZE 256

void ggml_cuda_op_arange(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
llama/ggml-cuda/argsort.cu (new file, 130 lines)
@@ -0,0 +1,130 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License, Copyright (c) 2023-2024 The ggml authors
 * (full license text identical to the header reproduced in llama/ggml-backend.h above)
 */

#include "argsort.cuh"

template<typename T>
static inline __device__ void ggml_cuda_swap(T & a, T & b) {
    T tmp = a;
    a = b;
    b = tmp;
}

template<ggml_sort_order order>
static __global__ void k_argsort_f32_i32(const float * x, int * dst, const int ncols, int ncols_pad) {
    // bitonic sort
    int col = threadIdx.x;
    int row = blockIdx.y;

    if (col >= ncols_pad) {
        return;
    }

    const float * x_row = x + row * ncols;
    extern __shared__ int dst_row[];

    // initialize indices
    dst_row[col] = col;

    __syncthreads();

    for (int k = 2; k <= ncols_pad; k *= 2) {
        for (int j = k / 2; j > 0; j /= 2) {
            int ixj = col ^ j;
            if (ixj > col) {
                if ((col & k) == 0) {
                    if (dst_row[col] >= ncols ||
                        (dst_row[ixj] < ncols && (order == GGML_SORT_ORDER_ASC ?
                            x_row[dst_row[col]] > x_row[dst_row[ixj]] :
                            x_row[dst_row[col]] < x_row[dst_row[ixj]]))
                    ) {
                        ggml_cuda_swap(dst_row[col], dst_row[ixj]);
                    }
                } else {
                    if (dst_row[ixj] >= ncols ||
                        (dst_row[col] < ncols && (order == GGML_SORT_ORDER_ASC ?
                            x_row[dst_row[col]] < x_row[dst_row[ixj]] :
                            x_row[dst_row[col]] > x_row[dst_row[ixj]]))
                    ) {
                        ggml_cuda_swap(dst_row[col], dst_row[ixj]);
                    }
                }
            }
            __syncthreads();
        }
    }

    // copy the result to dst without the padding
    if (col < ncols) {
        dst[row * ncols + col] = dst_row[col];
    }
}

static int next_power_of_2(int x) {
    int n = 1;
    while (n < x) {
        n *= 2;
    }
    return n;
}

static void argsort_f32_i32_cuda(const float * x, int * dst, const int ncols, const int nrows, ggml_sort_order order, cudaStream_t stream) {
    // bitonic sort requires ncols to be power of 2
    const int ncols_pad = next_power_of_2(ncols);

    const dim3 block_dims(ncols_pad, 1, 1);
    const dim3 block_nums(1, nrows, 1);
    const size_t shared_mem = ncols_pad * sizeof(int);

    // FIXME: this limit could be raised by ~2-4x on Ampere or newer
    GGML_ASSERT(shared_mem <= ggml_cuda_info().devices[ggml_cuda_get_device()].smpb);

    if (order == GGML_SORT_ORDER_ASC) {
        k_argsort_f32_i32<GGML_SORT_ORDER_ASC><<<block_nums, block_dims, shared_mem, stream>>>(x, dst, ncols, ncols_pad);
    } else if (order == GGML_SORT_ORDER_DESC) {
        k_argsort_f32_i32<GGML_SORT_ORDER_DESC><<<block_nums, block_dims, shared_mem, stream>>>(x, dst, ncols, ncols_pad);
    } else {
        GGML_ABORT("fatal error");
    }
}

void ggml_cuda_op_argsort(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * src0 = dst->src[0];
    const float * src0_d = (const float *)src0->data;
    float * dst_d = (float *)dst->data;
    cudaStream_t stream = ctx.stream();

    GGML_ASSERT(src0->type == GGML_TYPE_F32);
    GGML_ASSERT( dst->type == GGML_TYPE_I32);
    GGML_ASSERT(ggml_is_contiguous(src0));

    const int64_t ncols = src0->ne[0];
    const int64_t nrows = ggml_nrows(src0);

    enum ggml_sort_order order = (enum ggml_sort_order) dst->op_params[0];

    argsort_f32_i32_cuda(src0_d, (int *)dst_d, ncols, nrows, order, stream);
}
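The kernel above implements a bitonic sorting network over the padded index array. As a reference (an illustrative sketch, not part of the file), the same network for one row in ascending order can be written as a sequential host loop, since the (col, col ^ j) compare-exchange pairs within each step are disjoint:

#include <stdlib.h>

// hypothetical host-side mirror of k_argsort_f32_i32 for one row, ascending order only;
// indices >= ncols act as padding that compares as +infinity and sorts to the end
static void bitonic_argsort_ref(const float * x, int * dst, int ncols) {
    int ncols_pad = 1;
    while (ncols_pad < ncols) {
        ncols_pad *= 2;
    }

    int * tmp = malloc(ncols_pad * sizeof(int));
    for (int col = 0; col < ncols_pad; col++) {
        tmp[col] = col;
    }

    for (int k = 2; k <= ncols_pad; k *= 2) {
        for (int j = k / 2; j > 0; j /= 2) {
            for (int col = 0; col < ncols_pad; col++) {
                int ixj = col ^ j;
                if (ixj <= col) {
                    continue; // only the lower index of each pair performs the exchange
                }
                int swap;
                if ((col & k) == 0) { // ascending sub-sequence
                    swap = tmp[col] >= ncols ||
                          (tmp[ixj] < ncols && x[tmp[col]] > x[tmp[ixj]]);
                } else {              // descending sub-sequence
                    swap = tmp[ixj] >= ncols ||
                          (tmp[col] < ncols && x[tmp[col]] < x[tmp[ixj]]);
                }
                if (swap) {
                    int t = tmp[col]; tmp[col] = tmp[ixj]; tmp[ixj] = t;
                }
            }
        }
    }

    for (int col = 0; col < ncols; col++) {
        dst[col] = tmp[col]; // drop the padding, as the kernel does
    }
    free(tmp);
}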
llama/ggml-cuda/argsort.cuh (new file, 29 lines)
@@ -0,0 +1,29 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License, Copyright (c) 2023-2024 The ggml authors
 * (full license text identical to the header reproduced in llama/ggml-backend.h above)
 */
|
||||||
|
#include "common.cuh"
|
||||||
|
|
||||||
|
void ggml_cuda_op_argsort(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
|
314
llama/ggml-cuda/binbcast.cu
Normal file
314
llama/ggml-cuda/binbcast.cu
Normal file
|
@ -0,0 +1,314 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "binbcast.cuh"
|
||||||
|
|
||||||
|
static __device__ __forceinline__ float op_repeat(const float a, const float b) {
|
||||||
|
return b;
|
||||||
|
GGML_UNUSED(a);
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ float op_add(const float a, const float b) {
|
||||||
|
return a + b;
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ float op_sub(const float a, const float b) {
|
||||||
|
return a - b;
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ float op_mul(const float a, const float b) {
|
||||||
|
return a * b;
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ float op_div(const float a, const float b) {
|
||||||
|
return a / b;
|
||||||
|
}
|
||||||
|
|
||||||
|
template<float (*bin_op)(const float, const float), typename src0_t, typename src1_t, typename dst_t>
|
||||||
|
static __global__ void k_bin_bcast(const src0_t * src0, const src1_t * src1, dst_t * dst,
|
||||||
|
int ne0, int ne1, int ne2, int ne3,
|
||||||
|
int ne10, int ne11, int ne12, int ne13,
|
||||||
|
/*int s0, */ int s1, int s2, int s3,
|
||||||
|
/*int s00,*/ int s01, int s02, int s03,
|
||||||
|
/*int s10,*/ int s11, int s12, int s13) {
|
||||||
|
const int i0s = blockDim.x*blockIdx.x + threadIdx.x;
|
||||||
|
const int i1 = (blockDim.y*blockIdx.y + threadIdx.y);
|
||||||
|
const int i2 = (blockDim.z*blockIdx.z + threadIdx.z) / ne3;
|
||||||
|
const int i3 = (blockDim.z*blockIdx.z + threadIdx.z) % ne3;
|
||||||
|
|
||||||
|
if (i0s >= ne0 || i1 >= ne1 || i2 >= ne2 || i3 >= ne3) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const int i11 = i1 % ne11;
|
||||||
|
const int i12 = i2 % ne12;
|
||||||
|
const int i13 = i3 % ne13;
|
||||||
|
|
||||||
|
const size_t i_src0 = i3*s03 + i2*s02 + i1*s01;
|
||||||
|
const size_t i_src1 = i13*s13 + i12*s12 + i11*s11;
|
||||||
|
const size_t i_dst = i3*s3 + i2*s2 + i1*s1;
|
||||||
|
|
||||||
|
const src0_t * src0_row = src0 + i_src0;
|
||||||
|
const src1_t * src1_row = src1 + i_src1;
|
||||||
|
dst_t * dst_row = dst + i_dst;
|
||||||
|
|
||||||
|
for (int i0 = i0s; i0 < ne0; i0 += blockDim.x*gridDim.x) {
|
||||||
|
const int i10 = i0 % ne10;
|
||||||
|
dst_row[i0] = (dst_t)bin_op(src0 ? (float)src0_row[i0] : 0.0f, (float)src1_row[i10]);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
template<float (*bin_op)(const float, const float), typename src0_t, typename src1_t, typename dst_t>
|
||||||
|
static __global__ void k_bin_bcast_unravel(const src0_t * src0, const src1_t * src1, dst_t * dst,
|
||||||
|
int ne0, int ne1, int ne2, int ne3,
|
||||||
|
int ne10, int ne11, int ne12, int ne13,
|
||||||
|
/*int s0, */ int s1, int s2, int s3,
|
||||||
|
/*int s00,*/ int s01, int s02, int s03,
|
||||||
|
/*int s10,*/ int s11, int s12, int s13) {
|
||||||
|
|
||||||
|
const int i = blockDim.x*blockIdx.x + threadIdx.x;
|
||||||
|
|
||||||
|
const int i3 = i/(ne2*ne1*ne0);
|
||||||
|
const int i2 = (i/(ne1*ne0)) % ne2;
|
||||||
|
const int i1 = (i/ne0) % ne1;
|
||||||
|
const int i0 = i % ne0;
|
||||||
|
|
||||||
|
if (i0 >= ne0 || i1 >= ne1 || i2 >= ne2 || i3 >= ne3) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const int i11 = i1 % ne11;
|
||||||
|
const int i12 = i2 % ne12;
|
||||||
|
const int i13 = i3 % ne13;
|
||||||
|
|
||||||
|
const size_t i_src0 = i3*s03 + i2*s02 + i1*s01;
|
||||||
|
const size_t i_src1 = i13*s13 + i12*s12 + i11*s11;
|
||||||
|
const size_t i_dst = i3*s3 + i2*s2 + i1*s1;
|
||||||
|
|
||||||
|
const src0_t * src0_row = src0 + i_src0;
|
||||||
|
const src1_t * src1_row = src1 + i_src1;
|
||||||
|
dst_t * dst_row = dst + i_dst;
|
||||||
|
|
||||||
|
const int i10 = i0 % ne10;
|
||||||
|
dst_row[i0] = (dst_t)bin_op(src0 ? (float)src0_row[i0] : 0.0f, (float)src1_row[i10]);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<float (*bin_op)(const float, const float)>
|
||||||
|
struct bin_bcast_cuda {
|
||||||
|
template<typename src0_t, typename src1_t, typename dst_t>
|
||||||
|
void operator()(const struct ggml_tensor * src0, const struct ggml_tensor * src1, struct ggml_tensor * dst,
|
||||||
|
const src0_t * src0_dd, const src1_t * src1_dd, dst_t * dst_dd,
|
||||||
|
cudaStream_t stream) {
|
||||||
|
|
||||||
|
GGML_TENSOR_BINARY_OP_LOCALS
|
||||||
|
|
||||||
|
int nr0 = ne10/ne0;
|
||||||
|
int nr1 = ne11/ne1;
|
||||||
|
int nr2 = ne12/ne2;
|
||||||
|
int nr3 = ne13/ne3;
|
||||||
|
|
||||||
|
int nr[4] = { nr0, nr1, nr2, nr3 };
|
||||||
|
|
||||||
|
// collapse dimensions until first broadcast dimension
|
||||||
|
int64_t cne[] = {ne0, ne1, ne2, ne3};
|
||||||
|
int64_t cne0[] = {ne00, ne01, ne02, ne03};
|
||||||
|
int64_t cne1[] = {ne10, ne11, ne12, ne13};
|
||||||
|
|
||||||
|
size_t cnb[] = {nb0, nb1, nb2, nb3};
|
||||||
|
size_t cnb0[] = {nb00, nb01, nb02, nb03};
|
||||||
|
size_t cnb1[] = {nb10, nb11, nb12, nb13};
|
||||||
|
|
||||||
|
auto collapse = [](int64_t cne[]) {
|
||||||
|
cne[0] *= cne[1];
|
||||||
|
cne[1] = cne[2];
|
||||||
|
cne[2] = cne[3];
|
||||||
|
cne[3] = 1;
|
||||||
|
};
|
||||||
|
|
||||||
|
auto collapse_nb = [](size_t cnb[], const int64_t cne[]) {
|
||||||
|
cnb[1] *= cne[1];
|
||||||
|
cnb[2] *= cne[2];
|
||||||
|
cnb[3] *= cne[3];
|
||||||
|
};
|
||||||
|
|
||||||
|
if (ggml_is_contiguous(src0) && ggml_is_contiguous(src1) && ggml_is_contiguous(dst)) {
|
||||||
|
for (int i = 0; i < 4; i++) {
|
||||||
|
if (nr[i] != 1) {
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
if (i > 0) {
|
||||||
|
collapse_nb(cnb, cne);
|
||||||
|
collapse_nb(cnb0, cne0);
|
||||||
|
collapse_nb(cnb1, cne1);
|
||||||
|
collapse(cne);
|
||||||
|
collapse(cne0);
|
||||||
|
collapse(cne1);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
int64_t ne0 = cne[0];
|
||||||
|
int64_t ne1 = cne[1];
|
||||||
|
int64_t ne2 = cne[2];
|
||||||
|
int64_t ne3 = cne[3];
|
||||||
|
|
||||||
|
//int64_t ne00 = cne0[0]; GGML_UNUSED(ne00);
|
||||||
|
//int64_t ne01 = cne0[1]; GGML_UNUSED(ne01);
|
||||||
|
//int64_t ne02 = cne0[2]; GGML_UNUSED(ne02);
|
||||||
|
//int64_t ne03 = cne0[3]; GGML_UNUSED(ne03);
|
||||||
|
|
||||||
|
int64_t ne10 = cne1[0];
|
||||||
|
int64_t ne11 = cne1[1];
|
||||||
|
int64_t ne12 = cne1[2];
|
||||||
|
int64_t ne13 = cne1[3];
|
||||||
|
|
||||||
|
size_t nb0 = cnb[0];
|
||||||
|
size_t nb1 = cnb[1];
|
||||||
|
size_t nb2 = cnb[2];
|
||||||
|
size_t nb3 = cnb[3];
|
||||||
|
|
||||||
|
size_t nb00 = cnb0[0];
|
||||||
|
size_t nb01 = cnb0[1];
|
||||||
|
size_t nb02 = cnb0[2];
|
||||||
|
size_t nb03 = cnb0[3];
|
||||||
|
|
||||||
|
size_t nb10 = cnb1[0];
|
||||||
|
size_t nb11 = cnb1[1];
|
||||||
|
size_t nb12 = cnb1[2];
|
||||||
|
size_t nb13 = cnb1[3];
|
||||||
|
|
||||||
|
size_t s0 = nb0 / sizeof(dst_t);
|
||||||
|
size_t s1 = nb1 / sizeof(dst_t);
|
||||||
|
size_t s2 = nb2 / sizeof(dst_t);
|
||||||
|
size_t s3 = nb3 / sizeof(dst_t);
|
||||||
|
|
||||||
|
size_t s10 = nb10 / sizeof(src1_t);
|
||||||
|
size_t s11 = nb11 / sizeof(src1_t);
|
||||||
|
size_t s12 = nb12 / sizeof(src1_t);
|
||||||
|
size_t s13 = nb13 / sizeof(src1_t);
|
||||||
|
|
||||||
|
size_t s00 = nb00 / sizeof(src0_t);
|
||||||
|
size_t s01 = nb01 / sizeof(src0_t);
|
||||||
|
size_t s02 = nb02 / sizeof(src0_t);
|
||||||
|
size_t s03 = nb03 / sizeof(src0_t);
|
||||||
|
|
||||||
|
GGML_ASSERT(nb0 % sizeof(dst_t) == 0);
|
||||||
|
GGML_ASSERT(nb1 % sizeof(dst_t) == 0);
|
||||||
|
GGML_ASSERT(nb2 % sizeof(dst_t) == 0);
|
||||||
|
GGML_ASSERT(nb3 % sizeof(dst_t) == 0);
|
||||||
|
|
||||||
|
GGML_ASSERT(nb00 % sizeof(src0_t) == 0);
|
||||||
|
GGML_ASSERT(nb01 % sizeof(src0_t) == 0);
|
||||||
|
GGML_ASSERT(nb02 % sizeof(src0_t) == 0);
|
||||||
|
GGML_ASSERT(nb03 % sizeof(src0_t) == 0);
|
||||||
|
|
||||||
|
GGML_ASSERT(nb10 % sizeof(src1_t) == 0);
|
||||||
|
GGML_ASSERT(nb11 % sizeof(src1_t) == 0);
|
||||||
|
GGML_ASSERT(nb12 % sizeof(src1_t) == 0);
|
||||||
|
GGML_ASSERT(nb13 % sizeof(src1_t) == 0);
|
||||||
|
|
||||||
|
GGML_ASSERT(s0 == 1);
|
||||||
|
GGML_ASSERT(s00 == 1);
|
||||||
|
GGML_ASSERT(s10 == 1);
|
||||||
|
|
||||||
|
const int block_size = 128;
|
||||||
|
|
||||||
|
int64_t hne0 = std::max(ne0/2LL, 1LL);
|
||||||
|
|
||||||
|
dim3 block_dims;
|
||||||
|
block_dims.x = std::min<unsigned int>(hne0, block_size);
|
||||||
|
block_dims.y = std::min<unsigned int>(ne1, block_size / block_dims.x);
|
||||||
|
block_dims.z = std::min(std::min<unsigned int>(ne2*ne3, block_size / block_dims.x / block_dims.y), 64U);
|
||||||
|
|
||||||
|
dim3 block_nums(
|
||||||
|
(hne0 + block_dims.x - 1) / block_dims.x,
|
||||||
|
(ne1 + block_dims.y - 1) / block_dims.y,
|
||||||
|
(ne2*ne3 + block_dims.z - 1) / block_dims.z
|
||||||
|
);
|
||||||
|
|
||||||
|
if (block_nums.z > 65535) {
|
||||||
|
// this is the maximum number of blocks in z dimension, fallback to 1D grid kernel
|
||||||
|
int block_num = (ne0*ne1*ne2*ne3 + block_size - 1) / block_size;
|
||||||
|
k_bin_bcast_unravel<bin_op><<<block_num, block_size, 0, stream>>>(
|
||||||
|
src0_dd, src1_dd, dst_dd,
|
||||||
|
ne0, ne1, ne2, ne3,
|
||||||
|
ne10, ne11, ne12, ne13,
|
||||||
|
/* s0, */ s1, s2, s3,
|
||||||
|
/* s00, */ s01, s02, s03,
|
||||||
|
/* s10, */ s11, s12, s13);
|
||||||
|
} else {
|
||||||
|
k_bin_bcast<bin_op><<<block_nums, block_dims, 0, stream>>>(
|
||||||
|
src0_dd, src1_dd, dst_dd,
|
||||||
|
ne0, ne1, ne2, ne3,
|
||||||
|
ne10, ne11, ne12, ne13,
|
||||||
|
/* s0, */ s1, s2, s3,
|
||||||
|
/* s00, */ s01, s02, s03,
|
||||||
|
/* s10, */ s11, s12, s13);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
template<class op>
|
||||||
|
static void ggml_cuda_op_bin_bcast(
|
||||||
|
const ggml_tensor * src0, const ggml_tensor * src1, ggml_tensor * dst,
|
||||||
|
const void * src0_dd, const void * src1_dd, void * dst_dd, cudaStream_t stream) {
|
||||||
|
|
||||||
|
GGML_ASSERT(src1->type == GGML_TYPE_F32);
|
||||||
|
|
||||||
|
if (src0->type == GGML_TYPE_F32 && dst->type == GGML_TYPE_F32) {
|
||||||
|
op()(src0, src1, dst, (const float *)src0_dd, (const float *)src1_dd, (float *)dst_dd, stream);
|
||||||
|
} else if (src0->type == GGML_TYPE_F16 && dst->type == GGML_TYPE_F16) {
|
||||||
|
op()(src0, src1, dst, (const half *) src0_dd, (const float *)src1_dd, (half *) dst_dd, stream);
|
||||||
|
} else if (src0->type == GGML_TYPE_F16 && dst->type == GGML_TYPE_F32) {
|
||||||
|
op()(src0, src1, dst, (const half *) src0_dd, (const float *)src1_dd, (float *)dst_dd, stream);
|
||||||
|
} else {
|
||||||
|
fprintf(stderr, "%s: unsupported types: dst: %s, src0: %s, src1: %s\n", __func__,
|
||||||
|
ggml_type_name(dst->type), ggml_type_name(src0->type), ggml_type_name(src1->type));
|
||||||
|
GGML_ABORT("fatal error");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
void ggml_cuda_op_repeat(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
ggml_cuda_op_bin_bcast<bin_bcast_cuda<op_repeat>>(dst, dst->src[0], dst, nullptr, dst->src[0]->data, dst->data, ctx.stream());
|
||||||
|
}
|
||||||
|
|
||||||
|
void ggml_cuda_op_add(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
ggml_cuda_op_bin_bcast<bin_bcast_cuda<op_add>>(dst->src[0], dst->src[1], dst, dst->src[0]->data, dst->src[1]->data, dst->data, ctx.stream());
|
||||||
|
}
|
||||||
|
|
||||||
|
void ggml_cuda_op_sub(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
ggml_cuda_op_bin_bcast<bin_bcast_cuda<op_sub>>(dst->src[0], dst->src[1], dst, dst->src[0]->data, dst->src[1]->data, dst->data, ctx.stream());
|
||||||
|
}
|
||||||
|
|
||||||
|
void ggml_cuda_op_mul(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
ggml_cuda_op_bin_bcast<bin_bcast_cuda<op_mul>>(dst->src[0], dst->src[1], dst, dst->src[0]->data, dst->src[1]->data, dst->data, ctx.stream());
|
||||||
|
}
|
||||||
|
|
||||||
|
void ggml_cuda_op_div(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
ggml_cuda_op_bin_bcast<bin_bcast_cuda<op_div>>(dst->src[0], dst->src[1], dst, dst->src[0]->data, dst->src[1]->data, dst->data, ctx.stream());
|
||||||
|
}
|
33
llama/ggml-cuda/binbcast.cuh
Normal file
33
llama/ggml-cuda/binbcast.cuh
Normal file
|
@ -0,0 +1,33 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "common.cuh"
|
||||||
|
|
||||||
|
void ggml_cuda_op_repeat(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
|
||||||
|
void ggml_cuda_op_add(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
|
||||||
|
void ggml_cuda_op_sub(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
|
||||||
|
void ggml_cuda_op_mul(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
|
||||||
|
void ggml_cuda_op_div(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
|
60
llama/ggml-cuda/clamp.cu
Normal file
60
llama/ggml-cuda/clamp.cu
Normal file
|
@ -0,0 +1,60 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "clamp.cuh"
|
||||||
|
|
||||||
|
static __global__ void clamp_f32(const float * x, float * dst, const float min, const float max, const int k) {
|
||||||
|
const int i = blockDim.x*blockIdx.x + threadIdx.x;
|
||||||
|
|
||||||
|
if (i >= k) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
dst[i] = x[i] < min ? min : (x[i] > max ? max : x[i]);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void clamp_f32_cuda(const float * x, float * dst, const float min, const float max, const int k, cudaStream_t stream) {
|
||||||
|
const int num_blocks = (k + CUDA_CLAMP_BLOCK_SIZE - 1) / CUDA_CLAMP_BLOCK_SIZE;
|
||||||
|
clamp_f32<<<num_blocks, CUDA_CLAMP_BLOCK_SIZE, 0, stream>>>(x, dst, min, max, k);
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
void ggml_cuda_op_clamp(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
const ggml_tensor * src0 = dst->src[0];
|
||||||
|
const float * src0_d = (const float *)src0->data;
|
||||||
|
float * dst_d = (float *)dst->data;
|
||||||
|
cudaStream_t stream = ctx.stream();
|
||||||
|
|
||||||
|
GGML_ASSERT(src0->type == GGML_TYPE_F32);
|
||||||
|
GGML_ASSERT( dst->type == GGML_TYPE_F32);
|
||||||
|
|
||||||
|
float min;
|
||||||
|
float max;
|
||||||
|
memcpy(&min, dst->op_params, sizeof(float));
|
||||||
|
memcpy(&max, (float *) dst->op_params + 1, sizeof(float));
|
||||||
|
|
||||||
|
clamp_f32_cuda(src0_d, dst_d, min, max, ggml_nelements(src0), stream);
|
||||||
|
}
|
31
llama/ggml-cuda/clamp.cuh
Normal file
31
llama/ggml-cuda/clamp.cuh
Normal file
|
@ -0,0 +1,31 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "common.cuh"
|
||||||
|
|
||||||
|
#define CUDA_CLAMP_BLOCK_SIZE 256
|
||||||
|
|
||||||
|
void ggml_cuda_op_clamp(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
|
695
llama/ggml-cuda/common.cuh
Normal file
695
llama/ggml-cuda/common.cuh
Normal file
|
@ -0,0 +1,695 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#pragma once
|
||||||
|
|
||||||
|
#include "ggml.h"
|
||||||
|
#include "ggml-cuda.h"
|
||||||
|
|
||||||
|
#include <cstdint>
|
||||||
|
#include <memory>
|
||||||
|
|
||||||
|
#if defined(GGML_USE_HIPBLAS)
|
||||||
|
#define GGML_COMMON_DECL_HIP
|
||||||
|
#define GGML_COMMON_IMPL_HIP
|
||||||
|
#else
|
||||||
|
#define GGML_COMMON_DECL_CUDA
|
||||||
|
#define GGML_COMMON_IMPL_CUDA
|
||||||
|
#if defined(GGML_USE_MUSA)
|
||||||
|
#define GGML_COMMON_DECL_MUSA
|
||||||
|
#define GGML_COMMON_IMPL_MUSA
|
||||||
|
#endif
|
||||||
|
#endif
|
||||||
|
#include "ggml-common.h"
|
||||||
|
|
||||||
|
#include <cstdio>
|
||||||
|
#include <array>
|
||||||
|
#include <cassert>
|
||||||
|
#include <cfloat>
|
||||||
|
#include <string>
|
||||||
|
#include <vector>
|
||||||
|
|
||||||
|
#if defined(GGML_USE_HIPBLAS)
|
||||||
|
#include "vendors/hip.h"
|
||||||
|
#elif defined(GGML_USE_MUSA)
|
||||||
|
#include "vendors/musa.h"
|
||||||
|
#else
|
||||||
|
#include "vendors/cuda.h"
|
||||||
|
#endif // defined(GGML_USE_HIPBLAS)
|
||||||
|
|
||||||
|
#define STRINGIZE_IMPL(...) #__VA_ARGS__
|
||||||
|
#define STRINGIZE(...) STRINGIZE_IMPL(__VA_ARGS__)
|
||||||
|
|
||||||
|
#define WARP_SIZE 32
|
||||||
|
#define CUDART_HMAX 11070 // CUDA 11.7, min. ver. for which __hmax and __hmax2 are known to work (may be higher than needed)
|
||||||
|
#define CUDART_HMASK 12000 // CUDA 12.0, min. ver. for half2 -> uint mask comparisons
|
||||||
|
|
||||||
|
#define CC_PASCAL 600
|
||||||
|
#define MIN_CC_DP4A 610 // minimum compute capability for __dp4a, an intrinsic for byte-wise dot products
|
||||||
|
#define CC_VOLTA 700
|
||||||
|
#define CC_TURING 750
|
||||||
|
#define CC_AMPERE 800
|
||||||
|
#define CC_OFFSET_AMD 1000000
|
||||||
|
#define CC_RDNA1 (CC_OFFSET_AMD + 1010)
|
||||||
|
#define CC_RDNA2 (CC_OFFSET_AMD + 1030)
|
||||||
|
#define CC_RDNA3 (CC_OFFSET_AMD + 1100)
|
||||||
|
|
||||||
|
#define MATRIX_ROW_PADDING 512 // last row of quant. matrices is a multiple of this to avoid out-of-bounds memory accesses
|
||||||
|
|
||||||
|
#if defined(_MSC_VER)
|
||||||
|
#pragma warning(disable: 4244 4267) // possible loss of data
|
||||||
|
#endif
|
||||||
|
|
||||||
|
#define GGML_CUDA_MAX_STREAMS 8
|
||||||
|
|
||||||
|
[[noreturn]]
|
||||||
|
void ggml_cuda_error(const char * stmt, const char * func, const char * file, int line, const char * msg);
|
||||||
|
|
||||||
|
#define CUDA_CHECK_GEN(err, success, error_fn) \
|
||||||
|
do { \
|
||||||
|
auto err_ = (err); \
|
||||||
|
if (err_ != (success)) { \
|
||||||
|
ggml_cuda_error(#err, __func__, __FILE__, __LINE__, error_fn(err_)); \
|
||||||
|
} \
|
||||||
|
} while (0)
|
||||||
|
|
||||||
|
#define CUDA_CHECK(err) CUDA_CHECK_GEN(err, cudaSuccess, cudaGetErrorString)
|
||||||
|
|
||||||
|
#if CUDART_VERSION >= 12000 || defined(GGML_USE_MUSA)
|
||||||
|
static const char * cublas_get_error_str(const cublasStatus_t err) {
|
||||||
|
return cublasGetStatusString(err);
|
||||||
|
}
|
||||||
|
#else
|
||||||
|
static const char * cublas_get_error_str(const cublasStatus_t err) {
|
||||||
|
switch (err) {
|
||||||
|
case CUBLAS_STATUS_SUCCESS: return "CUBLAS_STATUS_SUCCESS";
|
||||||
|
case CUBLAS_STATUS_NOT_INITIALIZED: return "CUBLAS_STATUS_NOT_INITIALIZED";
|
||||||
|
case CUBLAS_STATUS_ALLOC_FAILED: return "CUBLAS_STATUS_ALLOC_FAILED";
|
||||||
|
case CUBLAS_STATUS_INVALID_VALUE: return "CUBLAS_STATUS_INVALID_VALUE";
|
||||||
|
case CUBLAS_STATUS_ARCH_MISMATCH: return "CUBLAS_STATUS_ARCH_MISMATCH";
|
||||||
|
case CUBLAS_STATUS_MAPPING_ERROR: return "CUBLAS_STATUS_MAPPING_ERROR";
|
||||||
|
case CUBLAS_STATUS_EXECUTION_FAILED: return "CUBLAS_STATUS_EXECUTION_FAILED";
|
||||||
|
case CUBLAS_STATUS_INTERNAL_ERROR: return "CUBLAS_STATUS_INTERNAL_ERROR";
|
||||||
|
case CUBLAS_STATUS_NOT_SUPPORTED: return "CUBLAS_STATUS_NOT_SUPPORTED";
|
||||||
|
default: return "unknown error";
|
||||||
|
}
|
||||||
|
}
|
||||||
|
#endif // CUDART_VERSION >= 12000
|
||||||
|
|
||||||
|
#define CUBLAS_CHECK(err) CUDA_CHECK_GEN(err, CUBLAS_STATUS_SUCCESS, cublas_get_error_str)
|
||||||
|
|
||||||
|
#if !defined(GGML_USE_HIPBLAS)
|
||||||
|
static const char * cu_get_error_str(CUresult err) {
|
||||||
|
const char * err_str;
|
||||||
|
cuGetErrorString(err, &err_str);
|
||||||
|
return err_str;
|
||||||
|
}
|
||||||
|
#define CU_CHECK(err) CUDA_CHECK_GEN(err, CUDA_SUCCESS, cu_get_error_str)
|
||||||
|
#endif
|
||||||
|
|
||||||
|
#if CUDART_VERSION >= 11100 || defined(GGML_USE_MUSA)
|
||||||
|
#define GGML_CUDA_ASSUME(x) __builtin_assume(x)
|
||||||
|
#else
|
||||||
|
#define GGML_CUDA_ASSUME(x)
|
||||||
|
#endif // CUDART_VERSION >= 11100
|
||||||
|
|
||||||
|
#ifdef GGML_CUDA_F16
|
||||||
|
typedef half dfloat; // dequantize float
|
||||||
|
typedef half2 dfloat2;
|
||||||
|
#else
|
||||||
|
typedef float dfloat; // dequantize float
|
||||||
|
typedef float2 dfloat2;
|
||||||
|
#endif // GGML_CUDA_F16
|
||||||
|
|
||||||
|
#if (defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)) || __CUDA_ARCH__ >= CC_PASCAL
|
||||||
|
#define FP16_AVAILABLE
|
||||||
|
#endif // (defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)) || __CUDA_ARCH__ >= CC_PASCAL
|
||||||
|
|
||||||
|
#if defined(FP16_AVAILABLE) && __CUDA_ARCH__ != 610
|
||||||
|
#define FAST_FP16_AVAILABLE
|
||||||
|
#endif // defined(FP16_AVAILABLE) && __CUDA_ARCH__ != 610
|
||||||
|
|
||||||
|
#if !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)) && __CUDA_ARCH__ >= CC_VOLTA
|
||||||
|
#define FP16_MMA_AVAILABLE
|
||||||
|
#endif // !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)) && __CUDA_ARCH__ >= CC_VOLTA
|
||||||
|
|
||||||
|
#if !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)) && __CUDA_ARCH__ >= CC_TURING
|
||||||
|
#define INT8_MMA_AVAILABLE
|
||||||
|
#endif // !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)) && __CUDA_ARCH__ >= CC_TURING
|
||||||
|
|
||||||
|
static constexpr bool fast_fp16_available(const int cc) {
|
||||||
|
return cc >= CC_PASCAL && cc != 610;
|
||||||
|
}
|
||||||
|
|
||||||
|
static constexpr bool fp16_mma_available(const int cc) {
|
||||||
|
return cc < CC_OFFSET_AMD && cc >= CC_VOLTA;
|
||||||
|
}
|
||||||
|
|
||||||
|
static constexpr bool int8_mma_available(const int cc) {
|
||||||
|
return cc < CC_OFFSET_AMD && cc >= CC_TURING;
|
||||||
|
}
|
||||||
|
|
||||||
|
[[noreturn]]
|
||||||
|
static __device__ void no_device_code(
|
||||||
|
const char * file_name, const int line, const char * function_name, const int arch, const char * arch_list) {
|
||||||
|
|
||||||
|
#if defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)
|
||||||
|
printf("%s:%d: ERROR: HIP kernel %s has no device code compatible with HIP arch %d.\n",
|
||||||
|
file_name, line, function_name, arch);
|
||||||
|
GGML_UNUSED(arch_list);
|
||||||
|
#else
|
||||||
|
printf("%s:%d: ERROR: CUDA kernel %s has no device code compatible with CUDA arch %d. ggml-cuda.cu was compiled for: %s\n",
|
||||||
|
file_name, line, function_name, arch, arch_list);
|
||||||
|
#endif // defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)
|
||||||
|
__trap();
|
||||||
|
|
||||||
|
GGML_UNUSED(no_device_code); // suppress unused function warning
|
||||||
|
}
|
||||||
|
|
||||||
|
#ifdef __CUDA_ARCH__
|
||||||
|
#define NO_DEVICE_CODE no_device_code(__FILE__, __LINE__, __FUNCTION__, __CUDA_ARCH__, STRINGIZE(__CUDA_ARCH_LIST__))
|
||||||
|
#else
|
||||||
|
#define NO_DEVICE_CODE //GGML_ABORT("NO_DEVICE_CODE not valid in host code.")
|
||||||
|
#endif // __CUDA_ARCH__
|
||||||
|
|
||||||
|
static __device__ __forceinline__ float warp_reduce_sum(float x) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int mask = 16; mask > 0; mask >>= 1) {
|
||||||
|
x += __shfl_xor_sync(0xffffffff, x, mask, 32);
|
||||||
|
}
|
||||||
|
return x;
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ float2 warp_reduce_sum(float2 a) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int mask = 16; mask > 0; mask >>= 1) {
|
||||||
|
a.x += __shfl_xor_sync(0xffffffff, a.x, mask, 32);
|
||||||
|
a.y += __shfl_xor_sync(0xffffffff, a.y, mask, 32);
|
||||||
|
}
|
||||||
|
return a;
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ half2 warp_reduce_sum(half2 a) {
|
||||||
|
#ifdef FP16_AVAILABLE
|
||||||
|
|
||||||
|
#if defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)
|
||||||
|
#pragma unroll
|
||||||
|
for (int mask = 16; mask > 0; mask >>= 1) {
|
||||||
|
const half2 a_other = __shfl_xor_sync(0xffffffff, a, mask, 32);
|
||||||
|
reinterpret_cast<half&>(a.x) += __low2half(a_other);
|
||||||
|
reinterpret_cast<half&>(a.y) += __high2half(a_other);
|
||||||
|
}
|
||||||
|
return a;
|
||||||
|
#else
|
||||||
|
#pragma unroll
|
||||||
|
for (int mask = 16; mask > 0; mask >>= 1) {
|
||||||
|
a = __hadd2(a, __shfl_xor_sync(0xffffffff, a, mask, 32));
|
||||||
|
}
|
||||||
|
return a;
|
||||||
|
#endif // defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)
|
||||||
|
|
||||||
|
#else
|
||||||
|
NO_DEVICE_CODE;
|
||||||
|
return a;
|
||||||
|
#endif // FP16_AVAILABLE
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ float warp_reduce_max(float x) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int mask = 16; mask > 0; mask >>= 1) {
|
||||||
|
x = fmaxf(x, __shfl_xor_sync(0xffffffff, x, mask, 32));
|
||||||
|
}
|
||||||
|
return x;
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ half ggml_cuda_hmax(const half a, const half b) {
|
||||||
|
#ifdef FP16_AVAILABLE
|
||||||
|
|
||||||
|
#if !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)) && CUDART_VERSION < CUDART_HMAX
|
||||||
|
return __float2half(fmaxf(__half2float(a), __half2float(b)));
|
||||||
|
#else
|
||||||
|
return __hmax(a, b);
|
||||||
|
#endif // !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)) && CUDART_VERSION < CUDART_HMAX
|
||||||
|
|
||||||
|
#else
|
||||||
|
NO_DEVICE_CODE;
|
||||||
|
GGML_UNUSED(b);
|
||||||
|
return a;
|
||||||
|
#endif // FP16_AVAILABLE
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ half2 ggml_cuda_hmax2(const half2 a, const half2 b) {
|
||||||
|
#if !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__))
|
||||||
|
|
||||||
|
#if CUDART_VERSION >= CUDART_HMAX
|
||||||
|
return __hmax2(a, b);
|
||||||
|
#else
|
||||||
|
half2 ret;
|
||||||
|
reinterpret_cast<half&>(ret.x) = __float2half(fmaxf( __low2float(a), __low2float(b)));
|
||||||
|
reinterpret_cast<half&>(ret.y) = __float2half(fmaxf(__high2float(a), __high2float(b)));
|
||||||
|
return ret;
|
||||||
|
#endif // CUDART_VERSION >= CUDART_HMAX
|
||||||
|
|
||||||
|
#else
|
||||||
|
GGML_UNUSED(a);
|
||||||
|
GGML_UNUSED(b);
|
||||||
|
NO_DEVICE_CODE;
|
||||||
|
#endif // !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__))
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ half2 warp_reduce_max(half2 x) {
|
||||||
|
#if !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)) && __CUDA_ARCH__ >= CC_PASCAL
|
||||||
|
#pragma unroll
|
||||||
|
for (int mask = 16; mask > 0; mask >>= 1) {
|
||||||
|
x = ggml_cuda_hmax2(x, __shfl_xor_sync(0xffffffff, x, mask, 32));
|
||||||
|
}
|
||||||
|
return x;
|
||||||
|
#else
|
||||||
|
GGML_UNUSED(x);
|
||||||
|
NO_DEVICE_CODE;
|
||||||
|
#endif // !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)) && __CUDA_ARCH__ >= CC_PASCAL
|
||||||
|
}
|
||||||
|
|
||||||
|
#if CUDART_VERSION < CUDART_HMASK
|
||||||
|
static __device__ __forceinline__ uint32_t __hgt2_mask(const half2 a, const half2 b) {
|
||||||
|
const uint32_t mask_low = 0x0000FFFF * (float( __low2half(a)) > float( __low2half(b)));
|
||||||
|
const uint32_t mask_high = 0xFFFF0000 * (float(__high2half(a)) > float(__high2half(b)));
|
||||||
|
return mask_low | mask_high;
|
||||||
|
}
|
||||||
|
#endif // CUDART_VERSION < CUDART_HMASK
|
||||||
|
|
||||||
|
static __device__ __forceinline__ int ggml_cuda_dp4a(const int a, const int b, int c) {
|
||||||
|
#if defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)
|
||||||
|
#if defined(__gfx906__) || defined(__gfx908__) || defined(__gfx90a__) || defined(RDNA2)
|
||||||
|
c = __builtin_amdgcn_sdot4(a, b, c, false);
|
||||||
|
#elif defined(RDNA3)
|
||||||
|
c = __builtin_amdgcn_sudot4( true, a, true, b, c, false);
|
||||||
|
#elif defined(__gfx1010__) || defined(__gfx900__)
|
||||||
|
int tmp1;
|
||||||
|
int tmp2;
|
||||||
|
asm("\n \
|
||||||
|
v_mul_i32_i24 %1, sext(%3), sext(%4) dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:BYTE_0 \n \
|
||||||
|
v_mul_i32_i24 %2, sext(%3), sext(%4) dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1 src1_sel:BYTE_1 \n \
|
||||||
|
v_add3_u32 %0, %1, %2, %0 \n \
|
||||||
|
v_mul_i32_i24 %1, sext(%3), sext(%4) dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_2 src1_sel:BYTE_2 \n \
|
||||||
|
v_mul_i32_i24 %2, sext(%3), sext(%4) dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_3 src1_sel:BYTE_3 \n \
|
||||||
|
v_add3_u32 %0, %1, %2, %0 \n \
|
||||||
|
"
|
||||||
|
: "+v"(c), "=&v"(tmp1), "=&v"(tmp2)
|
||||||
|
: "v"(a), "v"(b)
|
||||||
|
);
|
||||||
|
#else
|
||||||
|
const int8x4_t va = reinterpret_cast<const int8x4_t&>(a);
|
||||||
|
const int8x4_t vb = reinterpret_cast<const int8x4_t&>(b);
|
||||||
|
c += va[0] * vb[0] + va[1] * vb[1] + va[2] * vb[2] + va[3] * vb[3];
|
||||||
|
#endif
|
||||||
|
return c;
|
||||||
|
|
||||||
|
#else // defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)
|
||||||
|
|
||||||
|
#if __CUDA_ARCH__ >= MIN_CC_DP4A
|
||||||
|
return __dp4a(a, b, c);
|
||||||
|
#else // __CUDA_ARCH__ >= MIN_CC_DP4A
|
||||||
|
const int8_t * a8 = (const int8_t *) &a;
|
||||||
|
const int8_t * b8 = (const int8_t *) &b;
|
||||||
|
return c + a8[0]*b8[0] + a8[1]*b8[1] + a8[2]*b8[2] + a8[3]*b8[3];
|
||||||
|
#endif // __CUDA_ARCH__ >= MIN_CC_DP4A
|
||||||
|
|
||||||
|
#endif // defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__)
|
||||||
|
}
|
||||||
|
|
||||||
|
// TODO: move to ggml-common.h
|
||||||
|
static constexpr __device__ int8_t kvalues_iq4nl[16] = {-127, -104, -83, -65, -49, -35, -22, -10, 1, 13, 25, 38, 53, 69, 89, 113};
|
||||||
|
|
||||||
|
typedef void (*dequantize_kernel_t)(const void * vx, const int64_t ib, const int iqs, dfloat2 & v);
|
||||||
|
|
||||||
|
static __device__ __forceinline__ float get_alibi_slope(
|
||||||
|
const float max_bias, const uint32_t h, const uint32_t n_head_log2, const float m0, const float m1
|
||||||
|
) {
|
||||||
|
if (max_bias <= 0.0f) {
|
||||||
|
return 1.0f;
|
||||||
|
}
|
||||||
|
const float base = h < n_head_log2 ? m0 : m1;
|
||||||
|
const int exph = h < n_head_log2 ? h + 1 : 2*(h - n_head_log2) + 1;
|
||||||
|
|
||||||
|
return powf(base, exph);
|
||||||
|
}
|
||||||
|
|
||||||
|
template <ggml_type type>
|
||||||
|
struct ggml_cuda_type_traits;
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_F16> {
|
||||||
|
static constexpr int qk = 1;
|
||||||
|
static constexpr int qr = 1;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_Q4_0> {
|
||||||
|
static constexpr int qk = QK4_0;
|
||||||
|
static constexpr int qr = QR4_0;
|
||||||
|
static constexpr int qi = QI4_0;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_Q4_1> {
|
||||||
|
static constexpr int qk = QK4_1;
|
||||||
|
static constexpr int qr = QR4_1;
|
||||||
|
static constexpr int qi = QI4_1;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_Q5_0> {
|
||||||
|
static constexpr int qk = QK5_0;
|
||||||
|
static constexpr int qr = QR5_0;
|
||||||
|
static constexpr int qi = QI5_0;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_Q5_1> {
|
||||||
|
static constexpr int qk = QK5_1;
|
||||||
|
static constexpr int qr = QR5_1;
|
||||||
|
static constexpr int qi = QI5_1;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_Q8_0> {
|
||||||
|
static constexpr int qk = QK8_0;
|
||||||
|
static constexpr int qr = QR8_0;
|
||||||
|
static constexpr int qi = QI8_0;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_Q2_K> {
|
||||||
|
static constexpr int qk = QK_K;
|
||||||
|
static constexpr int qr = QR2_K;
|
||||||
|
static constexpr int qi = QI2_K;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_Q3_K> {
|
||||||
|
static constexpr int qk = QK_K;
|
||||||
|
static constexpr int qr = QR3_K;
|
||||||
|
static constexpr int qi = QI3_K;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_Q4_K> {
|
||||||
|
static constexpr int qk = QK_K;
|
||||||
|
static constexpr int qr = QR4_K;
|
||||||
|
static constexpr int qi = QI4_K;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_Q5_K> {
|
||||||
|
static constexpr int qk = QK_K;
|
||||||
|
static constexpr int qr = QR5_K;
|
||||||
|
static constexpr int qi = QI5_K;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_Q6_K> {
|
||||||
|
static constexpr int qk = QK_K;
|
||||||
|
static constexpr int qr = QR6_K;
|
||||||
|
static constexpr int qi = QI6_K;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_IQ2_XXS> {
|
||||||
|
static constexpr int qk = QK_K;
|
||||||
|
static constexpr int qr = QR2_XXS;
|
||||||
|
static constexpr int qi = QI2_XXS;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_IQ2_XS> {
|
||||||
|
static constexpr int qk = QK_K;
|
||||||
|
static constexpr int qr = QR2_XS;
|
||||||
|
static constexpr int qi = QI2_XS;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_IQ2_S> {
|
||||||
|
static constexpr int qk = QK_K;
|
||||||
|
static constexpr int qr = QR2_S;
|
||||||
|
static constexpr int qi = QI2_S;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_IQ3_XXS> {
|
||||||
|
static constexpr int qk = QK_K;
|
||||||
|
static constexpr int qr = QR3_XXS;
|
||||||
|
static constexpr int qi = QI3_XXS;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_IQ1_S> {
|
||||||
|
static constexpr int qk = QK_K;
|
||||||
|
static constexpr int qr = QR1_S;
|
||||||
|
static constexpr int qi = QI1_S;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_IQ1_M> {
|
||||||
|
static constexpr int qk = QK_K;
|
||||||
|
static constexpr int qr = QR1_M;
|
||||||
|
static constexpr int qi = QI1_M;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_IQ4_NL> {
|
||||||
|
static constexpr int qk = QK4_NL;
|
||||||
|
static constexpr int qr = QR4_NL;
|
||||||
|
static constexpr int qi = QI4_NL;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_IQ4_XS> {
|
||||||
|
static constexpr int qk = QK_K;
|
||||||
|
static constexpr int qr = QR4_XS;
|
||||||
|
static constexpr int qi = QI4_XS;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<>
|
||||||
|
struct ggml_cuda_type_traits<GGML_TYPE_IQ3_S> {
|
||||||
|
static constexpr int qk = QK_K;
|
||||||
|
static constexpr int qr = QR3_S;
|
||||||
|
static constexpr int qi = QI3_S;
|
||||||
|
};
|
||||||
|
|
||||||
|
//////////////////////
|
||||||
|
|
||||||
|
struct ggml_cuda_device_info {
|
||||||
|
int device_count;
|
||||||
|
|
||||||
|
struct cuda_device_info {
|
||||||
|
int cc; // compute capability
|
||||||
|
int nsm; // number of streaming multiprocessors
|
||||||
|
size_t smpb; // max. shared memory per block
|
||||||
|
size_t smpbo; // max. shared memory per block (with opt-in)
|
||||||
|
bool vmm; // virtual memory support
|
||||||
|
size_t vmm_granularity; // granularity of virtual memory
|
||||||
|
size_t total_vram;
|
||||||
|
};
|
||||||
|
|
||||||
|
cuda_device_info devices[GGML_CUDA_MAX_DEVICES] = {};
|
||||||
|
|
||||||
|
std::array<float, GGML_CUDA_MAX_DEVICES> default_tensor_split = {};
|
||||||
|
};
|
||||||
|
|
||||||
|
const ggml_cuda_device_info & ggml_cuda_info();
|
||||||
|
|
||||||
|
void ggml_cuda_set_device(int device);
|
||||||
|
int ggml_cuda_get_device();
|
||||||
|
|
||||||
|
struct ggml_cuda_pool {
|
||||||
|
virtual ~ggml_cuda_pool() = default;
|
||||||
|
|
||||||
|
virtual void * alloc(size_t size, size_t * actual_size) = 0;
|
||||||
|
virtual void free(void * ptr, size_t size) = 0;
|
||||||
|
};
|
||||||
|
|
||||||
|
template<typename T>
|
||||||
|
struct ggml_cuda_pool_alloc {
|
||||||
|
ggml_cuda_pool * pool = nullptr;
|
||||||
|
T * ptr = nullptr;
|
||||||
|
size_t actual_size = 0;
|
||||||
|
|
||||||
|
ggml_cuda_pool_alloc() = default;
|
||||||
|
|
||||||
|
explicit ggml_cuda_pool_alloc(ggml_cuda_pool & pool) : pool(&pool) {
|
||||||
|
}
|
||||||
|
|
||||||
|
ggml_cuda_pool_alloc(ggml_cuda_pool & pool, size_t size) : pool(&pool) {
|
||||||
|
alloc(size);
|
||||||
|
}
|
||||||
|
|
||||||
|
~ggml_cuda_pool_alloc() {
|
||||||
|
if (ptr != nullptr) {
|
||||||
|
pool->free(ptr, actual_size);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// size is in number of elements
|
||||||
|
T * alloc(size_t size) {
|
||||||
|
GGML_ASSERT(pool != nullptr);
|
||||||
|
GGML_ASSERT(ptr == nullptr);
|
||||||
|
ptr = (T *) pool->alloc(size * sizeof(T), &this->actual_size);
|
||||||
|
return ptr;
|
||||||
|
}
|
||||||
|
|
||||||
|
T * alloc(ggml_cuda_pool & pool, size_t size) {
|
||||||
|
this->pool = &pool;
|
||||||
|
return alloc(size);
|
||||||
|
}
|
||||||
|
|
||||||
|
T * get() {
|
||||||
|
return ptr;
|
||||||
|
}
|
||||||
|
|
||||||
|
ggml_cuda_pool_alloc(const ggml_cuda_pool_alloc &) = delete;
|
||||||
|
ggml_cuda_pool_alloc(ggml_cuda_pool_alloc &&) = delete;
|
||||||
|
ggml_cuda_pool_alloc& operator=(const ggml_cuda_pool_alloc &) = delete;
|
||||||
|
ggml_cuda_pool_alloc& operator=(ggml_cuda_pool_alloc &&) = delete;
|
||||||
|
};
|
||||||
|
|
||||||
|
|
||||||
|
// backend interface
|
||||||
|
|
||||||
|
struct ggml_tensor_extra_gpu {
|
||||||
|
void * data_device[GGML_CUDA_MAX_DEVICES]; // 1 pointer for each device for split tensors
|
||||||
|
cudaEvent_t events[GGML_CUDA_MAX_DEVICES][GGML_CUDA_MAX_STREAMS]; // events for synchronizing multiple GPUs
|
||||||
|
};
|
||||||
|
|
||||||
|
|
||||||
|
#if (CUDART_VERSION >= 12000) && defined(GGML_CUDA_USE_GRAPHS)
|
||||||
|
#define USE_CUDA_GRAPH
|
||||||
|
#endif
|
||||||
|
|
||||||
|
struct ggml_graph_node_properties {
|
||||||
|
void * node_address;
|
||||||
|
ggml_op node_op;
|
||||||
|
int64_t ne[GGML_MAX_DIMS];
|
||||||
|
size_t nb[GGML_MAX_DIMS];
|
||||||
|
void * src_address[GGML_MAX_SRC];
|
||||||
|
};
|
||||||
|
|
||||||
|
struct ggml_cuda_graph {
|
||||||
|
#ifdef USE_CUDA_GRAPH
|
||||||
|
~ggml_cuda_graph() {
|
||||||
|
if (instance != nullptr) {
|
||||||
|
CUDA_CHECK(cudaGraphExecDestroy(instance));
|
||||||
|
}
|
||||||
|
if (graph != nullptr) {
|
||||||
|
CUDA_CHECK(cudaGraphDestroy(graph));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
cudaGraph_t graph = nullptr;
|
||||||
|
cudaGraphExec_t instance = nullptr;
|
||||||
|
size_t num_nodes = 0;
|
||||||
|
std::vector<cudaGraphNode_t> nodes;
|
||||||
|
std::vector<cudaKernelNodeParams> params;
|
||||||
|
bool disable_due_to_gpu_arch = false;
|
||||||
|
bool disable_due_to_too_many_updates = false;
|
||||||
|
bool disable_due_to_failed_graph_capture = false;
|
||||||
|
int number_consecutive_updates = 0;
|
||||||
|
std::vector<ggml_graph_node_properties> ggml_graph_properties;
|
||||||
|
std::vector<char **> updated_kernel_arg;
|
||||||
|
#endif
|
||||||
|
};
|
||||||
|
|
||||||
|
struct ggml_backend_cuda_context {
|
||||||
|
int device;
|
||||||
|
std::string name;
|
||||||
|
cudaEvent_t copy_event = nullptr;
|
||||||
|
|
||||||
|
cudaStream_t streams[GGML_CUDA_MAX_DEVICES][GGML_CUDA_MAX_STREAMS] = { { nullptr } };
|
||||||
|
cublasHandle_t cublas_handles[GGML_CUDA_MAX_DEVICES] = {nullptr};
|
||||||
|
|
||||||
|
std::unique_ptr<ggml_cuda_graph> cuda_graph;
|
||||||
|
|
||||||
|
explicit ggml_backend_cuda_context(int device) :
|
||||||
|
device(device),
|
||||||
|
name(GGML_CUDA_NAME + std::to_string(device)) {
|
||||||
|
}
|
||||||
|
|
||||||
|
~ggml_backend_cuda_context() {
|
||||||
|
if (copy_event != nullptr) {
|
||||||
|
CUDA_CHECK(cudaEventDestroy(copy_event));
|
||||||
|
}
|
||||||
|
for (int i = 0; i < GGML_CUDA_MAX_DEVICES; ++i) {
|
||||||
|
for (int j = 0; j < GGML_CUDA_MAX_STREAMS; ++j) {
|
||||||
|
if (streams[i][j] != nullptr) {
|
||||||
|
CUDA_CHECK(cudaStreamDestroy(streams[i][j]));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if (cublas_handles[i] != nullptr) {
|
||||||
|
CUBLAS_CHECK(cublasDestroy(cublas_handles[i]));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
cudaStream_t stream(int device, int stream) {
|
||||||
|
if (streams[device][stream] == nullptr) {
|
||||||
|
ggml_cuda_set_device(device);
|
||||||
|
CUDA_CHECK(cudaStreamCreateWithFlags(&streams[device][stream], cudaStreamNonBlocking));
|
||||||
|
}
|
||||||
|
return streams[device][stream];
|
||||||
|
}
|
||||||
|
|
||||||
|
cudaStream_t stream() {
|
||||||
|
return stream(device, 0);
|
||||||
|
}
|
||||||
|
|
||||||
|
cublasHandle_t cublas_handle(int device) {
|
||||||
|
if (cublas_handles[device] == nullptr) {
|
||||||
|
ggml_cuda_set_device(device);
|
||||||
|
CUBLAS_CHECK(cublasCreate(&cublas_handles[device]));
|
||||||
|
CUBLAS_CHECK(cublasSetMathMode(cublas_handles[device], CUBLAS_TF32_TENSOR_OP_MATH));
|
||||||
|
}
|
||||||
|
return cublas_handles[device];
|
||||||
|
}
|
||||||
|
|
||||||
|
cublasHandle_t cublas_handle() {
|
||||||
|
return cublas_handle(device);
|
||||||
|
}
|
||||||
|
|
||||||
|
// pool
|
||||||
|
std::unique_ptr<ggml_cuda_pool> pools[GGML_CUDA_MAX_DEVICES];
|
||||||
|
|
||||||
|
static std::unique_ptr<ggml_cuda_pool> new_pool_for_device(int device);
|
||||||
|
|
||||||
|
ggml_cuda_pool & pool(int device) {
|
||||||
|
if (pools[device] == nullptr) {
|
||||||
|
pools[device] = new_pool_for_device(device);
|
||||||
|
}
|
||||||
|
return *pools[device];
|
||||||
|
}
|
||||||
|
|
||||||
|
ggml_cuda_pool & pool() {
|
||||||
|
return pool(device);
|
||||||
|
}
|
||||||
|
};
|
222
llama/ggml-cuda/concat.cu
Normal file
222
llama/ggml-cuda/concat.cu
Normal file
|
@ -0,0 +1,222 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "concat.cuh"
|
||||||
|
|
||||||
|
// contiguous kernels
|
||||||
|
static __global__ void concat_f32_dim0(const float * x, const float * y, float * dst, const int ne0, const int ne00) {
|
||||||
|
int nidx = threadIdx.x + blockIdx.x * blockDim.x;
|
||||||
|
if (nidx >= ne0) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
int offset_dst =
|
||||||
|
nidx +
|
||||||
|
blockIdx.y * ne0 +
|
||||||
|
blockIdx.z * ne0 * gridDim.y;
|
||||||
|
|
||||||
|
if (nidx < ne00) { // src0
|
||||||
|
int offset_src =
|
||||||
|
nidx +
|
||||||
|
blockIdx.y * ne00 +
|
||||||
|
blockIdx.z * ne00 * gridDim.y;
|
||||||
|
dst[offset_dst] = x[offset_src];
|
||||||
|
} else {
|
||||||
|
int offset_src =
|
||||||
|
(nidx - ne00) +
|
||||||
|
blockIdx.y * (ne0 - ne00) +
|
||||||
|
blockIdx.z * (ne0 - ne00) * gridDim.y;
|
||||||
|
dst[offset_dst] = y[offset_src];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
static __global__ void concat_f32_dim1(const float * x, const float * y, float * dst, const int ne0, const int ne01) {
|
||||||
|
int nidx = threadIdx.x + blockIdx.x * blockDim.x;
|
||||||
|
if (nidx >= ne0) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
int offset_dst =
|
||||||
|
nidx +
|
||||||
|
blockIdx.y * ne0 +
|
||||||
|
blockIdx.z * ne0 * gridDim.y;
|
||||||
|
|
||||||
|
if (blockIdx.y < ne01) { // src0
|
||||||
|
int offset_src =
|
||||||
|
nidx +
|
||||||
|
blockIdx.y * ne0 +
|
||||||
|
blockIdx.z * ne0 * ne01;
|
||||||
|
dst[offset_dst] = x[offset_src];
|
||||||
|
} else {
|
||||||
|
int offset_src =
|
||||||
|
nidx +
|
||||||
|
(blockIdx.y - ne01) * ne0 +
|
||||||
|
blockIdx.z * ne0 * (gridDim.y - ne01);
|
||||||
|
dst[offset_dst] = y[offset_src];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
static __global__ void concat_f32_dim2(const float * x, const float * y, float * dst, const int ne0, const int ne02) {
|
||||||
|
int nidx = threadIdx.x + blockIdx.x * blockDim.x;
|
||||||
|
if (nidx >= ne0) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
int offset_dst =
|
||||||
|
nidx +
|
||||||
|
blockIdx.y * ne0 +
|
||||||
|
blockIdx.z * ne0 * gridDim.y;
|
||||||
|
|
||||||
|
if (blockIdx.z < ne02) { // src0
|
||||||
|
int offset_src =
|
||||||
|
nidx +
|
||||||
|
blockIdx.y * ne0 +
|
||||||
|
blockIdx.z * ne0 * gridDim.y;
|
||||||
|
dst[offset_dst] = x[offset_src];
|
||||||
|
} else {
|
||||||
|
int offset_src =
|
||||||
|
nidx +
|
||||||
|
blockIdx.y * ne0 +
|
||||||
|
(blockIdx.z - ne02) * ne0 * gridDim.y;
|
||||||
|
dst[offset_dst] = y[offset_src];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
static void concat_f32_cuda(const float * x, const float * y, float * dst, int ne00, int ne01, int ne02, int ne0, int ne1, int ne2, int dim, cudaStream_t stream) {
|
||||||
|
int num_blocks = (ne0 + CUDA_CONCAT_BLOCK_SIZE - 1) / CUDA_CONCAT_BLOCK_SIZE;
|
||||||
|
dim3 gridDim(num_blocks, ne1, ne2);
|
||||||
|
if (dim == 0) {
|
||||||
|
concat_f32_dim0<<<gridDim, CUDA_CONCAT_BLOCK_SIZE, 0, stream>>>(x, y, dst, ne0, ne00);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
if (dim == 1) {
|
||||||
|
concat_f32_dim1<<<gridDim, CUDA_CONCAT_BLOCK_SIZE, 0, stream>>>(x, y, dst, ne0, ne01);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
concat_f32_dim2<<<gridDim, CUDA_CONCAT_BLOCK_SIZE, 0, stream>>>(x, y, dst, ne0, ne02);
|
||||||
|
}
|
||||||
|
|
||||||
|
// non-contiguous kernel (slow)
|
||||||
|
static __global__ void concat_f32_non_cont(
|
||||||
|
const char * src0,
|
||||||
|
const char * src1,
|
||||||
|
char * dst,
|
||||||
|
int64_t ne00,
|
||||||
|
int64_t ne01,
|
||||||
|
int64_t ne02,
|
||||||
|
int64_t ne03,
|
||||||
|
uint64_t nb00,
|
||||||
|
uint64_t nb01,
|
||||||
|
uint64_t nb02,
|
||||||
|
uint64_t nb03,
|
||||||
|
int64_t /*ne10*/,
|
||||||
|
int64_t /*ne11*/,
|
||||||
|
int64_t /*ne12*/,
|
||||||
|
int64_t /*ne13*/,
|
||||||
|
uint64_t nb10,
|
||||||
|
uint64_t nb11,
|
||||||
|
uint64_t nb12,
|
||||||
|
uint64_t nb13,
|
||||||
|
int64_t ne0,
|
||||||
|
int64_t /*ne1*/,
|
||||||
|
int64_t /*ne2*/,
|
||||||
|
int64_t /*ne3*/,
|
||||||
|
uint64_t nb0,
|
||||||
|
uint64_t nb1,
|
||||||
|
uint64_t nb2,
|
||||||
|
uint64_t nb3,
|
||||||
|
int32_t dim) {
|
||||||
|
const int64_t i3 = blockIdx.z;
|
||||||
|
const int64_t i2 = blockIdx.y;
|
||||||
|
const int64_t i1 = blockIdx.x;
|
||||||
|
|
||||||
|
int64_t o[4] = {0, 0, 0, 0};
|
||||||
|
o[dim] = dim == 0 ? ne00 : (dim == 1 ? ne01 : (dim == 2 ? ne02 : ne03));
|
||||||
|
|
||||||
|
const float * x;
|
||||||
|
|
||||||
|
for (int i0 = threadIdx.x; i0 < ne0; i0 += blockDim.x) {
|
||||||
|
if (i0 < ne00 && i1 < ne01 && i2 < ne02 && i3 < ne03) {
|
||||||
|
x = (const float *)(src0 + (i3 )*nb03 + (i2 )*nb02 + (i1 )*nb01 + (i0 )*nb00);
|
||||||
|
} else {
|
||||||
|
x = (const float *)(src1 + (i3 - o[3])*nb13 + (i2 - o[2])*nb12 + (i1 - o[1])*nb11 + (i0 - o[0])*nb10);
|
||||||
|
}
|
||||||
|
|
||||||
|
float * y = (float *)(dst + i3*nb3 + i2*nb2 + i1*nb1 + i0*nb0);
|
||||||
|
|
||||||
|
*y = *x;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
void ggml_cuda_op_concat(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
const ggml_tensor * src0 = dst->src[0];
|
||||||
|
const ggml_tensor * src1 = dst->src[1];
|
||||||
|
|
||||||
|
cudaStream_t stream = ctx.stream();
|
||||||
|
|
||||||
|
const int32_t dim = ((int32_t *) dst->op_params)[0];
|
||||||
|
|
||||||
|
GGML_ASSERT(src0->type == GGML_TYPE_F32);
|
||||||
|
GGML_ASSERT(src1->type == GGML_TYPE_F32);
|
||||||
|
GGML_ASSERT(dst->type == GGML_TYPE_F32);
|
||||||
|
|
||||||
|
if (ggml_is_contiguous(src0) && ggml_is_contiguous(src1)) {
|
||||||
|
const float * src0_d = (const float *)src0->data;
|
||||||
|
const float * src1_d = (const float *)src1->data;
|
||||||
|
|
||||||
|
float * dst_d = (float *)dst->data;
|
||||||
|
|
||||||
|
if (dim != 3) {
|
||||||
|
for (int i3 = 0; i3 < dst->ne[3]; i3++) {
|
||||||
|
concat_f32_cuda(
|
||||||
|
src0_d + i3 * (src0->nb[3] / 4),
|
||||||
|
src1_d + i3 * (src1->nb[3] / 4),
|
||||||
|
dst_d + i3 * ( dst->nb[3] / 4),
|
||||||
|
src0->ne[0], src0->ne[1], src0->ne[2],
|
||||||
|
dst->ne[0], dst->ne[1], dst->ne[2], dim, stream);
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
const size_t size0 = ggml_nbytes(src0);
|
||||||
|
const size_t size1 = ggml_nbytes(src1);
|
||||||
|
|
||||||
|
CUDA_CHECK(cudaMemcpyAsync(dst_d, src0_d, size0, cudaMemcpyDeviceToDevice, stream));
|
||||||
|
CUDA_CHECK(cudaMemcpyAsync(dst_d + size0/4, src1_d, size1, cudaMemcpyDeviceToDevice, stream));
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
dim3 grid_dim(dst->ne[1], dst->ne[2], dst->ne[3]);
|
||||||
|
concat_f32_non_cont<<<grid_dim, CUDA_CONCAT_BLOCK_SIZE, 0, stream>>>(
|
||||||
|
(const char *)src0->data,
|
||||||
|
(const char *)src1->data,
|
||||||
|
( char *)dst->data,
|
||||||
|
src0->ne[0], src0->ne[1], src0->ne[2], src0->ne[3],
|
||||||
|
src0->nb[0], src0->nb[1], src0->nb[2], src0->nb[3],
|
||||||
|
src1->ne[0], src1->ne[1], src1->ne[2], src1->ne[3],
|
||||||
|
src1->nb[0], src1->nb[1], src1->nb[2], src1->nb[3],
|
||||||
|
dst->ne[0], dst->ne[1], dst->ne[2], dst->ne[3],
|
||||||
|
dst->nb[0], dst->nb[1], dst->nb[2], dst->nb[3], dim);
|
||||||
|
}
|
||||||
|
}
|
31
llama/ggml-cuda/concat.cuh
Normal file
31
llama/ggml-cuda/concat.cuh
Normal file
|
@ -0,0 +1,31 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "common.cuh"
|
||||||
|
|
||||||
|
#define CUDA_CONCAT_BLOCK_SIZE 256
|
||||||
|
|
||||||
|
void ggml_cuda_op_concat(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
|
113
llama/ggml-cuda/conv-transpose-1d.cu
Normal file
113
llama/ggml-cuda/conv-transpose-1d.cu
Normal file
|
@ -0,0 +1,113 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "conv-transpose-1d.cuh"
|
||||||
|
|
||||||
|
static __global__ void conv_transpose_1d_kernel(
|
||||||
|
const int s0, const int p0, const int d0, const int output_size,
|
||||||
|
const int src0_ne0, const int src0_ne1, const int src0_ne2, const int src0_ne3,
|
||||||
|
const int src1_ne0, const int src1_ne1, const int src1_ne2, const int src1_ne3,
|
||||||
|
const int dst_ne0, const int dst_ne1, const int dst_ne2, const int dst_ne3,
|
||||||
|
const float * src0, const float * src1, float * dst) {
|
||||||
|
int global_index = threadIdx.x + blockIdx.x * blockDim.x;
|
||||||
|
if (global_index >= output_size) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
int out_index = global_index / dst_ne0;
|
||||||
|
|
||||||
|
float accumulator = 0;
|
||||||
|
|
||||||
|
for (int c = 0; c < src0_ne2; c++) {
|
||||||
|
int idx = global_index % dst_ne0;
|
||||||
|
|
||||||
|
int kernel_offset = (src0_ne0 * src0_ne1 * c) + (out_index * src0_ne0);
|
||||||
|
int input_offset = src1_ne0 * c;
|
||||||
|
|
||||||
|
for (int i = 0; i < src1_ne0; i++) {
|
||||||
|
if (!(idx >= i*s0 && idx < i*s0 + src0_ne0)) {
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
int weight_idx = idx - i*s0;
|
||||||
|
|
||||||
|
float kernel_weight = src0[kernel_offset + weight_idx];
|
||||||
|
float input_value = src1[input_offset+i];
|
||||||
|
|
||||||
|
accumulator += kernel_weight * input_value;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
dst[global_index] = accumulator;
|
||||||
|
}
|
||||||
|
|
||||||
|
static void conv_transpose_1d_f32_f32_cuda(
|
||||||
|
const int s0, const int p0, const int d0, const int output_size,
|
||||||
|
const int src0_ne0, const int src0_ne1, const int src0_ne2, const int src0_ne3,
|
||||||
|
const int src1_ne0, const int src1_ne1, const int src1_ne2, const int src1_ne3,
|
||||||
|
const int dst_ne0, const int dst_ne1, const int dst_ne2, const int dst_ne3,
|
||||||
|
const float * src0, const float * src1, float * dst,
|
||||||
|
cudaStream_t stream) {
|
||||||
|
|
||||||
|
const int num_blocks = (output_size + CUDA_CONV_TRANPOSE_1D_BLOCK_SIZE - 1) / CUDA_CONV_TRANPOSE_1D_BLOCK_SIZE;
|
||||||
|
conv_transpose_1d_kernel<<<num_blocks,CUDA_CONV_TRANPOSE_1D_BLOCK_SIZE, 0, stream>>>(
|
||||||
|
s0,p0,d0,output_size,
|
||||||
|
src0_ne0, src0_ne1, src0_ne2, src0_ne3,
|
||||||
|
src1_ne0, src1_ne1, src1_ne2, src1_ne3,
|
||||||
|
dst_ne0, dst_ne1, dst_ne2, dst_ne3,
|
||||||
|
src0,src1, dst);
|
||||||
|
}
|
||||||
|
|
||||||
|
void ggml_cuda_op_conv_transpose_1d(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
const ggml_tensor * src0 = dst->src[0];
|
||||||
|
const float * src0_d = (const float *)src0->data;
|
||||||
|
|
||||||
|
const ggml_tensor * src1 = dst->src[1];
|
||||||
|
const float * src1_d = (const float *)src1->data;
|
||||||
|
|
||||||
|
float * dst_d = (float *)dst->data;
|
||||||
|
cudaStream_t stream = ctx.stream();
|
||||||
|
|
||||||
|
GGML_ASSERT(src0->type == GGML_TYPE_F32);
|
||||||
|
GGML_ASSERT( dst->type == GGML_TYPE_F32);
|
||||||
|
|
||||||
|
GGML_ASSERT(ggml_is_contiguous(src0));
|
||||||
|
GGML_ASSERT(ggml_is_contiguous(src1));
|
||||||
|
|
||||||
|
const int32_t * opts = (const int32_t *)dst->op_params;
|
||||||
|
|
||||||
|
const int s0 = opts[0];
|
||||||
|
const int p0 = 0;//opts[3];
|
||||||
|
const int d0 = 1;//opts[4];
|
||||||
|
|
||||||
|
const int64_t kernel_size = ggml_nelements(src0);
|
||||||
|
const int64_t input_size = ggml_nelements(src1);
|
||||||
|
const int64_t output_size = ggml_nelements(dst);
|
||||||
|
|
||||||
|
conv_transpose_1d_f32_f32_cuda(s0, p0, d0, output_size,
|
||||||
|
src0->ne[0], src0->ne[1], src0->ne[2], src0->ne[3],
|
||||||
|
src1->ne[0], src1->ne[1], src1->ne[2], src1->ne[3],
|
||||||
|
dst->ne[0], dst->ne[1], dst->ne[2], dst->ne[3],
|
||||||
|
src0_d, src1_d, dst_d, stream);
|
||||||
|
}
|
31
llama/ggml-cuda/conv-transpose-1d.cuh
Normal file
31
llama/ggml-cuda/conv-transpose-1d.cuh
Normal file
|
@ -0,0 +1,31 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "common.cuh"
|
||||||
|
|
||||||
|
#define CUDA_CONV_TRANPOSE_1D_BLOCK_SIZE 256
|
||||||
|
|
||||||
|
void ggml_cuda_op_conv_transpose_1d(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
|
712
llama/ggml-cuda/convert.cu
Normal file
712
llama/ggml-cuda/convert.cu
Normal file
|
@ -0,0 +1,712 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "convert.cuh"
|
||||||
|
#include "dequantize.cuh"
|
||||||
|
|
||||||
|
#define CUDA_Q8_0_NE_ALIGN 2048
|
||||||
|
|
||||||
|
template <int qk, int qr, dequantize_kernel_t dequantize_kernel, typename dst_t>
|
||||||
|
static __global__ void dequantize_block(const void * __restrict__ vx, dst_t * __restrict__ y, const int64_t k) {
|
||||||
|
const int64_t i = (int64_t)2*(blockDim.x*blockIdx.x + threadIdx.x);
|
||||||
|
|
||||||
|
if (i >= k) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const int64_t ib = i/qk; // block index
|
||||||
|
const int64_t iqs = (i%qk)/qr; // quant index
|
||||||
|
const int64_t iybs = i - i%qk; // y block start index
|
||||||
|
const int64_t y_offset = qr == 1 ? 1 : qk/2;
|
||||||
|
|
||||||
|
// dequantize
|
||||||
|
dfloat2 v;
|
||||||
|
dequantize_kernel(vx, ib, iqs, v);
|
||||||
|
|
||||||
|
y[iybs + iqs + 0] = v.x;
|
||||||
|
y[iybs + iqs + y_offset] = v.y;
|
||||||
|
}
|
||||||
|
|
||||||
|
template <bool need_check>
|
||||||
|
static __global__ void dequantize_block_q8_0_f16(const void * __restrict__ vx, half * __restrict__ y, const int64_t k) {
|
||||||
|
#if __CUDA_ARCH__ >= CC_PASCAL
|
||||||
|
constexpr int nint = CUDA_Q8_0_NE_ALIGN/sizeof(int) + WARP_SIZE;
|
||||||
|
|
||||||
|
const int64_t i0 = CUDA_Q8_0_NE_ALIGN*blockIdx.x;
|
||||||
|
const int * x0 = ((int *) vx) + blockIdx.x * nint;
|
||||||
|
half2 * y2 = (half2 *) (y + i0);
|
||||||
|
|
||||||
|
__shared__ int vals[nint];
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int ix0 = 0; ix0 < nint; ix0 += WARP_SIZE) {
|
||||||
|
if (need_check && i0*sizeof(block_q8_0)/QK8_0 + sizeof(int)*(ix0 + threadIdx.x) >= k*sizeof(block_q8_0)/QK8_0) {
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
|
||||||
|
const int ix = ix0 + threadIdx.x;
|
||||||
|
vals[ix] = x0[ix];
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int iy = 0; iy < CUDA_Q8_0_NE_ALIGN; iy += 2*WARP_SIZE) {
|
||||||
|
if (need_check && i0 + iy + 2*threadIdx.x >= k) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const half * b0 = ((const half *) vals) + (sizeof(block_q8_0)/sizeof(half)) * ((iy + 2*threadIdx.x)/QK8_0);
|
||||||
|
const half d = *b0;
|
||||||
|
const char2 qs = ((const char2 *) (b0 + 1))[threadIdx.x % (QK8_0/2)];
|
||||||
|
|
||||||
|
y2[iy/2 + threadIdx.x] = __hmul2(make_half2(qs.x, qs.y), __half2half2(d));
|
||||||
|
}
|
||||||
|
#else
|
||||||
|
GGML_UNUSED(vx);
|
||||||
|
GGML_UNUSED(y);
|
||||||
|
GGML_UNUSED(k);
|
||||||
|
NO_DEVICE_CODE;
|
||||||
|
#endif // __CUDA_ARCH__ >= CC_PASCAL
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static __global__ void dequantize_block_q4_0(const void * __restrict__ vx, dst_t * __restrict__ yy, int nb32) {
|
||||||
|
|
||||||
|
const int64_t i = blockIdx.x;
|
||||||
|
|
||||||
|
// assume 32 threads
|
||||||
|
const int64_t tid = threadIdx.x;
|
||||||
|
const int64_t il = tid/8;
|
||||||
|
const int64_t ir = tid%8;
|
||||||
|
const int64_t ib = 8*i + ir;
|
||||||
|
if (ib >= nb32) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
dst_t * y = yy + 256*i + 32*ir + 4*il;
|
||||||
|
|
||||||
|
const block_q4_0 * x = (const block_q4_0 *)vx + ib;
|
||||||
|
const float d = __half2float(x->d);
|
||||||
|
const float dm = -8*d;
|
||||||
|
|
||||||
|
const uint8_t * q = x->qs + 4*il;
|
||||||
|
|
||||||
|
for (int l = 0; l < 4; ++l) {
|
||||||
|
y[l+ 0] = d * (q[l] & 0xF) + dm;
|
||||||
|
y[l+16] = d * (q[l] >> 4) + dm;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static __global__ void dequantize_block_q4_1(const void * __restrict__ vx, dst_t * __restrict__ yy, int nb32) {
|
||||||
|
|
||||||
|
const int64_t i = blockIdx.x;
|
||||||
|
|
||||||
|
// assume 32 threads
|
||||||
|
const int64_t tid = threadIdx.x;
|
||||||
|
const int64_t il = tid/8;
|
||||||
|
const int64_t ir = tid%8;
|
||||||
|
const int64_t ib = 8*i + ir;
|
||||||
|
if (ib >= nb32) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
dst_t * y = yy + 256*i + 32*ir + 4*il;
|
||||||
|
|
||||||
|
const block_q4_1 * x = (const block_q4_1 *)vx + ib;
|
||||||
|
const float2 d = __half22float2(x->dm);
|
||||||
|
|
||||||
|
const uint8_t * q = x->qs + 4*il;
|
||||||
|
|
||||||
|
for (int l = 0; l < 4; ++l) {
|
||||||
|
y[l+ 0] = d.x * (q[l] & 0xF) + d.y;
|
||||||
|
y[l+16] = d.x * (q[l] >> 4) + d.y;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
//================================== k-quants
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static __global__ void dequantize_block_q2_K(const void * __restrict__ vx, dst_t * __restrict__ yy) {
|
||||||
|
|
||||||
|
const int64_t i = blockIdx.x;
|
||||||
|
const block_q2_K * x = (const block_q2_K *) vx;
|
||||||
|
|
||||||
|
const int64_t tid = threadIdx.x;
|
||||||
|
const int64_t n = tid/32;
|
||||||
|
const int64_t l = tid - 32*n;
|
||||||
|
const int64_t is = 8*n + l/16;
|
||||||
|
|
||||||
|
const uint8_t q = x[i].qs[32*n + l];
|
||||||
|
dst_t * y = yy + i*QK_K + 128*n;
|
||||||
|
|
||||||
|
float dall = __low2half(x[i].dm);
|
||||||
|
float dmin = __high2half(x[i].dm);
|
||||||
|
y[l+ 0] = dall * (x[i].scales[is+0] & 0xF) * ((q >> 0) & 3) - dmin * (x[i].scales[is+0] >> 4);
|
||||||
|
y[l+32] = dall * (x[i].scales[is+2] & 0xF) * ((q >> 2) & 3) - dmin * (x[i].scales[is+2] >> 4);
|
||||||
|
y[l+64] = dall * (x[i].scales[is+4] & 0xF) * ((q >> 4) & 3) - dmin * (x[i].scales[is+4] >> 4);
|
||||||
|
y[l+96] = dall * (x[i].scales[is+6] & 0xF) * ((q >> 6) & 3) - dmin * (x[i].scales[is+6] >> 4);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static __global__ void dequantize_block_q3_K(const void * __restrict__ vx, dst_t * __restrict__ yy) {
|
||||||
|
|
||||||
|
const int64_t i = blockIdx.x;
|
||||||
|
const block_q3_K * x = (const block_q3_K *) vx;
|
||||||
|
|
||||||
|
const int64_t r = threadIdx.x/4;
|
||||||
|
const int64_t tid = r/2;
|
||||||
|
const int64_t is0 = r%2;
|
||||||
|
const int64_t l0 = 16*is0 + 4*(threadIdx.x%4);
|
||||||
|
const int64_t n = tid / 4;
|
||||||
|
const int64_t j = tid - 4*n;
|
||||||
|
|
||||||
|
uint8_t m = 1 << (4*n + j);
|
||||||
|
int64_t is = 8*n + 2*j + is0;
|
||||||
|
int shift = 2*j;
|
||||||
|
|
||||||
|
int8_t us = is < 4 ? (x[i].scales[is-0] & 0xF) | (((x[i].scales[is+8] >> 0) & 3) << 4) :
|
||||||
|
is < 8 ? (x[i].scales[is-0] & 0xF) | (((x[i].scales[is+4] >> 2) & 3) << 4) :
|
||||||
|
is < 12 ? (x[i].scales[is-8] >> 4) | (((x[i].scales[is+0] >> 4) & 3) << 4) :
|
||||||
|
(x[i].scales[is-8] >> 4) | (((x[i].scales[is-4] >> 6) & 3) << 4);
|
||||||
|
float d_all = x[i].d;
|
||||||
|
float dl = d_all * (us - 32);
|
||||||
|
|
||||||
|
dst_t * y = yy + i*QK_K + 128*n + 32*j;
|
||||||
|
const uint8_t * q = x[i].qs + 32*n;
|
||||||
|
const uint8_t * hm = x[i].hmask;
|
||||||
|
|
||||||
|
for (int l = l0; l < l0+4; ++l) y[l] = dl * ((int8_t)((q[l] >> shift) & 3) - ((hm[l] & m) ? 0 : 4));
|
||||||
|
}
|
||||||
|
|
||||||
|
static inline __device__ void get_scale_min_k4(int j, const uint8_t * q, uint8_t & d, uint8_t & m) {
|
||||||
|
if (j < 4) {
|
||||||
|
d = q[j] & 63; m = q[j + 4] & 63;
|
||||||
|
} else {
|
||||||
|
d = (q[j+4] & 0xF) | ((q[j-4] >> 6) << 4);
|
||||||
|
m = (q[j+4] >> 4) | ((q[j-0] >> 6) << 4);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static __global__ void dequantize_block_q4_K(const void * __restrict__ vx, dst_t * __restrict__ yy) {
|
||||||
|
const block_q4_K * x = (const block_q4_K *) vx;
|
||||||
|
|
||||||
|
const int64_t i = blockIdx.x;
|
||||||
|
|
||||||
|
// assume 32 threads
|
||||||
|
const int64_t tid = threadIdx.x;
|
||||||
|
const int64_t il = tid/8;
|
||||||
|
const int64_t ir = tid%8;
|
||||||
|
const int64_t is = 2*il;
|
||||||
|
const int64_t n = 4;
|
||||||
|
|
||||||
|
dst_t * y = yy + i*QK_K + 64*il + n*ir;
|
||||||
|
|
||||||
|
const float dall = __low2half(x[i].dm);
|
||||||
|
const float dmin = __high2half(x[i].dm);
|
||||||
|
|
||||||
|
const uint8_t * q = x[i].qs + 32*il + n*ir;
|
||||||
|
|
||||||
|
uint8_t sc, m;
|
||||||
|
get_scale_min_k4(is + 0, x[i].scales, sc, m);
|
||||||
|
const float d1 = dall * sc; const float m1 = dmin * m;
|
||||||
|
get_scale_min_k4(is + 1, x[i].scales, sc, m);
|
||||||
|
const float d2 = dall * sc; const float m2 = dmin * m;
|
||||||
|
for (int l = 0; l < n; ++l) {
|
||||||
|
y[l + 0] = d1 * (q[l] & 0xF) - m1;
|
||||||
|
y[l +32] = d2 * (q[l] >> 4) - m2;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static __global__ void dequantize_block_q5_K(const void * __restrict__ vx, dst_t * __restrict__ yy) {
|
||||||
|
const block_q5_K * x = (const block_q5_K *) vx;
|
||||||
|
|
||||||
|
const int64_t i = blockIdx.x;
|
||||||
|
|
||||||
|
// assume 64 threads - this is very slightly better than the one below
|
||||||
|
const int64_t tid = threadIdx.x;
|
||||||
|
const int64_t il = tid/16; // il is in 0...3
|
||||||
|
const int64_t ir = tid%16; // ir is in 0...15
|
||||||
|
const int64_t is = 2*il; // is is in 0...6
|
||||||
|
|
||||||
|
dst_t * y = yy + i*QK_K + 64*il + 2*ir;
|
||||||
|
|
||||||
|
const float dall = __low2half(x[i].dm);
|
||||||
|
const float dmin = __high2half(x[i].dm);
|
||||||
|
|
||||||
|
const uint8_t * ql = x[i].qs + 32*il + 2*ir;
|
||||||
|
const uint8_t * qh = x[i].qh + 2*ir;
|
||||||
|
|
||||||
|
uint8_t sc, m;
|
||||||
|
get_scale_min_k4(is + 0, x[i].scales, sc, m);
|
||||||
|
const float d1 = dall * sc; const float m1 = dmin * m;
|
||||||
|
get_scale_min_k4(is + 1, x[i].scales, sc, m);
|
||||||
|
const float d2 = dall * sc; const float m2 = dmin * m;
|
||||||
|
|
||||||
|
uint8_t hm = 1 << (2*il);
|
||||||
|
y[ 0] = d1 * ((ql[ 0] & 0xF) + (qh[ 0] & hm ? 16 : 0)) - m1;
|
||||||
|
y[ 1] = d1 * ((ql[ 1] & 0xF) + (qh[ 1] & hm ? 16 : 0)) - m1;
|
||||||
|
hm <<= 1;
|
||||||
|
y[32] = d2 * ((ql[ 0] >> 4) + (qh[ 0] & hm ? 16 : 0)) - m2;
|
||||||
|
y[33] = d2 * ((ql[ 1] >> 4) + (qh[ 1] & hm ? 16 : 0)) - m2;
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static __global__ void dequantize_block_q6_K(const void * __restrict__ vx, dst_t * __restrict__ yy) {
|
||||||
|
const block_q6_K * x = (const block_q6_K *) vx;
|
||||||
|
|
||||||
|
const int64_t i = blockIdx.x;
|
||||||
|
|
||||||
|
// assume 64 threads - this is very slightly better than the one below
|
||||||
|
const int64_t tid = threadIdx.x;
|
||||||
|
const int64_t ip = tid/32; // ip is 0 or 1
|
||||||
|
const int64_t il = tid - 32*ip; // 0...32
|
||||||
|
const int64_t is = 8*ip + il/16;
|
||||||
|
|
||||||
|
dst_t * y = yy + i*QK_K + 128*ip + il;
|
||||||
|
|
||||||
|
const float d = x[i].d;
|
||||||
|
|
||||||
|
const uint8_t * ql = x[i].ql + 64*ip + il;
|
||||||
|
const uint8_t qh = x[i].qh[32*ip + il];
|
||||||
|
const int8_t * sc = x[i].scales + is;
|
||||||
|
|
||||||
|
y[ 0] = d * sc[0] * ((int8_t)((ql[ 0] & 0xF) | (((qh >> 0) & 3) << 4)) - 32);
|
||||||
|
y[32] = d * sc[2] * ((int8_t)((ql[32] & 0xF) | (((qh >> 2) & 3) << 4)) - 32);
|
||||||
|
y[64] = d * sc[4] * ((int8_t)((ql[ 0] >> 4) | (((qh >> 4) & 3) << 4)) - 32);
|
||||||
|
y[96] = d * sc[6] * ((int8_t)((ql[32] >> 4) | (((qh >> 6) & 3) << 4)) - 32);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static __global__ void dequantize_block_iq2_xxs(const void * __restrict__ vx, dst_t * __restrict__ yy) {
|
||||||
|
|
||||||
|
const int64_t i = blockIdx.x;
|
||||||
|
const block_iq2_xxs * x = (const block_iq2_xxs *) vx;
|
||||||
|
|
||||||
|
const int64_t tid = threadIdx.x;
|
||||||
|
const int64_t il = tid/8; // 0...3
|
||||||
|
const int64_t ib = tid%8; // 0...7
|
||||||
|
dst_t * y = yy + i*QK_K + 32*ib + 8*il;
|
||||||
|
const uint16_t * q2 = x[i].qs + 4*ib;
|
||||||
|
const uint8_t * aux8 = (const uint8_t *)q2;
|
||||||
|
const uint8_t * grid = (const uint8_t *)(iq2xxs_grid + aux8[il]);
|
||||||
|
const uint32_t aux32 = q2[2] | (q2[3] << 16);
|
||||||
|
const float d = (float)x[i].d * (0.5f + (aux32 >> 28)) * 0.25f;
|
||||||
|
const uint8_t signs = ksigns_iq2xs[(aux32 >> 7*il) & 127];
|
||||||
|
for (int j = 0; j < 8; ++j) y[j] = d * grid[j] * (signs & kmask_iq2xs[j] ? -1.f : 1.f);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static __global__ void dequantize_block_iq2_xs(const void * __restrict__ vx, dst_t * __restrict__ yy) {
|
||||||
|
|
||||||
|
const int64_t i = blockIdx.x;
|
||||||
|
const block_iq2_xs * x = (const block_iq2_xs *) vx;
|
||||||
|
|
||||||
|
const int64_t tid = threadIdx.x;
|
||||||
|
const int64_t il = tid/8; // 0...3
|
||||||
|
const int64_t ib = tid%8; // 0...7
|
||||||
|
dst_t * y = yy + i*QK_K + 32*ib + 8*il;
|
||||||
|
const uint16_t * q2 = x[i].qs + 4*ib;
|
||||||
|
const uint8_t * grid = (const uint8_t *)(iq2xs_grid + (q2[il] & 511));
|
||||||
|
const float d = (float)x[i].d * (0.5f + ((x[i].scales[ib] >> 4*(il/2)) & 0xf)) * 0.25f;
|
||||||
|
const uint8_t signs = ksigns_iq2xs[q2[il] >> 9];
|
||||||
|
for (int j = 0; j < 8; ++j) y[j] = d * grid[j] * (signs & kmask_iq2xs[j] ? -1.f : 1.f);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static __global__ void dequantize_block_iq2_s(const void * __restrict__ vx, dst_t * __restrict__ yy) {
|
||||||
|
|
||||||
|
const int64_t i = blockIdx.x;
|
||||||
|
const block_iq2_s * x = (const block_iq2_s *) vx;
|
||||||
|
|
||||||
|
const int64_t tid = threadIdx.x;
|
||||||
|
const int64_t il = tid/8; // 0...3
|
||||||
|
const int64_t ib = tid%8; // 0...7
|
||||||
|
dst_t * y = yy + i*QK_K + 32*ib + 8*il;
|
||||||
|
const uint8_t * grid = (const uint8_t *)(iq2s_grid + (x[i].qs[4*ib+il] | ((x[i].qh[ib] << (8-2*il)) & 0x300)));
|
||||||
|
const float d = (float)x[i].d * (0.5f + ((x[i].scales[ib] >> 4*(il/2)) & 0xf)) * 0.25f;
|
||||||
|
const uint8_t signs = x[i].qs[QK_K/8+4*ib+il];
|
||||||
|
for (int j = 0; j < 8; ++j) y[j] = d * grid[j] * (signs & kmask_iq2xs[j] ? -1.f : 1.f);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static __global__ void dequantize_block_iq3_xxs(const void * __restrict__ vx, dst_t * __restrict__ yy) {
|
||||||
|
|
||||||
|
const int64_t i = blockIdx.x;
|
||||||
|
const block_iq3_xxs * x = (const block_iq3_xxs *) vx;
|
||||||
|
|
||||||
|
const int64_t tid = threadIdx.x;
|
||||||
|
const int64_t il = tid/8; // 0...3
|
||||||
|
const int64_t ib = tid%8; // 0...7
|
||||||
|
dst_t * y = yy + i*QK_K + 32*ib + 8*il;
|
||||||
|
const uint8_t * q3 = x[i].qs + 8*ib;
|
||||||
|
const uint16_t * gas = (const uint16_t *)(x[i].qs + QK_K/4) + 2*ib;
|
||||||
|
const uint8_t * grid1 = (const uint8_t *)(iq3xxs_grid + q3[2*il+0]);
|
||||||
|
const uint8_t * grid2 = (const uint8_t *)(iq3xxs_grid + q3[2*il+1]);
|
||||||
|
const uint32_t aux32 = gas[0] | (gas[1] << 16);
|
||||||
|
const float d = (float)x[i].d * (0.5f + (aux32 >> 28)) * 0.5f;
|
||||||
|
const uint8_t signs = ksigns_iq2xs[(aux32 >> 7*il) & 127];
|
||||||
|
for (int j = 0; j < 4; ++j) {
|
||||||
|
y[j+0] = d * grid1[j] * (signs & kmask_iq2xs[j+0] ? -1.f : 1.f);
|
||||||
|
y[j+4] = d * grid2[j] * (signs & kmask_iq2xs[j+4] ? -1.f : 1.f);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static __global__ void dequantize_block_iq3_s(const void * __restrict__ vx, dst_t * __restrict__ yy) {
|
||||||
|
|
||||||
|
const int64_t i = blockIdx.x;
|
||||||
|
const block_iq3_s * x = (const block_iq3_s *) vx;
|
||||||
|
|
||||||
|
const int64_t tid = threadIdx.x;
|
||||||
|
const int64_t il = tid/8; // 0...3
|
||||||
|
const int64_t ib = tid%8; // 0...7
|
||||||
|
dst_t * y = yy + i*QK_K + 32*ib + 8*il;
|
||||||
|
const uint8_t * qs = x[i].qs + 8*ib;
|
||||||
|
const uint8_t * grid1 = (const uint8_t *)(iq3s_grid + (qs[2*il+0] | ((x[i].qh[ib] << (8-2*il)) & 256)));
|
||||||
|
const uint8_t * grid2 = (const uint8_t *)(iq3s_grid + (qs[2*il+1] | ((x[i].qh[ib] << (7-2*il)) & 256)));
|
||||||
|
const float d = (float)x[i].d * (1 + 2*((x[i].scales[ib/2] >> 4*(ib%2)) & 0xf));
|
||||||
|
const uint8_t signs = x[i].signs[4*ib + il];
|
||||||
|
for (int j = 0; j < 4; ++j) {
|
||||||
|
y[j+0] = d * grid1[j] * (signs & kmask_iq2xs[j+0] ? -1.f : 1.f);
|
||||||
|
y[j+4] = d * grid2[j] * (signs & kmask_iq2xs[j+4] ? -1.f : 1.f);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static __global__ void dequantize_block_iq1_s(const void * __restrict__ vx, dst_t * __restrict__ yy) {
|
||||||
|
|
||||||
|
const int64_t i = blockIdx.x;
|
||||||
|
const block_iq1_s * x = (const block_iq1_s *) vx;
|
||||||
|
|
||||||
|
const int64_t tid = threadIdx.x;
|
||||||
|
const int64_t il = tid/8; // 0...3
|
||||||
|
const int64_t ib = tid%8; // 0...7
|
||||||
|
dst_t * y = yy + i*QK_K + 32*ib + 8*il;
|
||||||
|
const float delta = x[i].qh[ib] & 0x8000 ? -1 - IQ1S_DELTA : -1 + IQ1S_DELTA;
|
||||||
|
const float d = (float)x[i].d * (2*((x[i].qh[ib] >> 12) & 7) + 1);
|
||||||
|
uint32_t grid32[2]; const int8_t * q = (const int8_t *)grid32;
|
||||||
|
grid32[0] = iq1s_grid_gpu[x[i].qs[4*ib+il] | (((x[i].qh[ib] >> 3*il) & 7) << 8)];
|
||||||
|
grid32[1] = (grid32[0] >> 4) & 0x0f0f0f0f;
|
||||||
|
grid32[0] &= 0x0f0f0f0f;
|
||||||
|
for (int j = 0; j < 8; ++j) {
|
||||||
|
y[j] = d * (q[j] + delta);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static __global__ void dequantize_block_iq1_m(const void * __restrict__ vx, dst_t * __restrict__ yy) {
|
||||||
|
|
||||||
|
const int64_t i = blockIdx.x;
|
||||||
|
const block_iq1_m * x = (const block_iq1_m *) vx;
|
||||||
|
|
||||||
|
const int64_t tid = threadIdx.x;
|
||||||
|
const int64_t il = tid/8; // 0...3
|
||||||
|
const int64_t ib = tid%8; // 0...7
|
||||||
|
dst_t * y = yy + i*QK_K + 32*ib + 8*il;
|
||||||
|
const uint16_t * sc = (const uint16_t *)x[i].scales;
|
||||||
|
iq1m_scale_t scale;
|
||||||
|
scale.u16 = (sc[0] >> 12) | ((sc[1] >> 8) & 0x00f0) | ((sc[2] >> 4) & 0x0f00) | (sc[3] & 0xf000);
|
||||||
|
const int64_t ib16 = 2*ib + il/2; // sc[ib16/4] >> 3*(ib16%4) -> sc[ib/2] >> 3*((2*ib+il/2)%4);
|
||||||
|
const float d = (float)scale.f16 * (2*((sc[ib16/4] >> 3*(ib16%4)) & 0x7) + 1);
|
||||||
|
const float delta = x[i].qh[2*ib+il/2] & (0x08 << 4*(il%2)) ? -1 - IQ1M_DELTA : -1 + IQ1M_DELTA;
|
||||||
|
uint32_t grid32[2]; const int8_t * q = (const int8_t *)grid32;
|
||||||
|
grid32[0] = iq1s_grid_gpu[x[i].qs[4*ib+il] | (((x[i].qh[2*ib+il/2] >> 4*(il%2)) & 7) << 8)];
|
||||||
|
grid32[1] = (grid32[0] >> 4) & 0x0f0f0f0f;
|
||||||
|
grid32[0] &= 0x0f0f0f0f;
|
||||||
|
for (int j = 0; j < 8; ++j) {
|
||||||
|
y[j] = d * (q[j] + delta);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static __global__ void dequantize_block_iq4_nl(const void * __restrict__ vx, dst_t * __restrict__ yy) {
|
||||||
|
|
||||||
|
const int64_t i = blockIdx.x;
|
||||||
|
const block_iq4_nl * x = (const block_iq4_nl *) vx + i*(QK_K/QK4_NL);
|
||||||
|
|
||||||
|
const int64_t tid = threadIdx.x;
|
||||||
|
const int64_t il = tid/8; // 0...3
|
||||||
|
const int64_t ib = tid%8; // 0...7
|
||||||
|
dst_t * y = yy + i*QK_K + 32*ib + 4*il;
|
||||||
|
const uint8_t * q4 = x[ib].qs + 4*il;
|
||||||
|
const float d = (float)x[ib].d;
|
||||||
|
for (int j = 0; j < 4; ++j) {
|
||||||
|
y[j+ 0] = d * kvalues_iq4nl[q4[j] & 0xf];
|
||||||
|
y[j+16] = d * kvalues_iq4nl[q4[j] >> 4];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static __global__ void dequantize_block_iq4_xs(const void * __restrict__ vx, dst_t * __restrict__ yy) {
|
||||||
|
const int64_t i = blockIdx.x;
|
||||||
|
const block_iq4_xs * x = (const block_iq4_xs *)vx;
|
||||||
|
|
||||||
|
const int64_t tid = threadIdx.x;
|
||||||
|
const int64_t il = tid/8; // 0...3
|
||||||
|
const int64_t ib = tid%8; // 0...7
|
||||||
|
dst_t * y = yy + i*QK_K + 32*ib + 4*il;
|
||||||
|
const uint8_t * q4 = x[i].qs + 16*ib + 4*il;
|
||||||
|
const float d = (float)x[i].d * ((((x[i].scales_l[ib/2] >> 4*(ib%2)) & 0xf) | (((x[i].scales_h >> 2*ib) & 3) << 4)) - 32);
|
||||||
|
for (int j = 0; j < 4; ++j) {
|
||||||
|
y[j+ 0] = d * kvalues_iq4nl[q4[j] & 0xf];
|
||||||
|
y[j+16] = d * kvalues_iq4nl[q4[j] >> 4];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
template <int qk, int qr, dequantize_kernel_t dequantize_kernel, typename dst_t>
|
||||||
|
static void dequantize_block_cuda(const void * __restrict__ vx, dst_t * __restrict__ y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int num_blocks = (k + 2*CUDA_DEQUANTIZE_BLOCK_SIZE - 1) / (2*CUDA_DEQUANTIZE_BLOCK_SIZE);
|
||||||
|
dequantize_block<qk, qr, dequantize_kernel><<<num_blocks, CUDA_DEQUANTIZE_BLOCK_SIZE, 0, stream>>>(vx, y, k);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void dequantize_block_q8_0_f16_cuda(const void * __restrict__ vx, half * __restrict__ y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int num_blocks = (k + CUDA_Q8_0_NE_ALIGN - 1) / CUDA_Q8_0_NE_ALIGN;
|
||||||
|
if (k % CUDA_Q8_0_NE_ALIGN == 0) {
|
||||||
|
const bool need_check = false;
|
||||||
|
dequantize_block_q8_0_f16<need_check><<<num_blocks, WARP_SIZE, 0, stream>>>(vx, y, k);
|
||||||
|
} else {
|
||||||
|
const bool need_check = true;
|
||||||
|
dequantize_block_q8_0_f16<need_check><<<num_blocks, WARP_SIZE, 0, stream>>>(vx, y, k);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static void dequantize_row_q2_K_cuda(const void * vx, dst_t * y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int nb = k / QK_K;
|
||||||
|
dequantize_block_q2_K<<<nb, 64, 0, stream>>>(vx, y);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static void dequantize_row_q3_K_cuda(const void * vx, dst_t * y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int nb = k / QK_K;
|
||||||
|
dequantize_block_q3_K<<<nb, 64, 0, stream>>>(vx, y);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static void dequantize_row_q4_0_cuda(const void * vx, dst_t * y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int nb32 = k / 32;
|
||||||
|
const int nb = (k + 255) / 256;
|
||||||
|
dequantize_block_q4_0<<<nb, 32, 0, stream>>>(vx, y, nb32);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static void dequantize_row_q4_1_cuda(const void * vx, dst_t * y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int nb32 = k / 32;
|
||||||
|
const int nb = (k + 255) / 256;
|
||||||
|
dequantize_block_q4_1<<<nb, 32, 0, stream>>>(vx, y, nb32);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static void dequantize_row_q4_K_cuda(const void * vx, dst_t * y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int nb = k / QK_K;
|
||||||
|
dequantize_block_q4_K<<<nb, 32, 0, stream>>>(vx, y);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static void dequantize_row_q5_K_cuda(const void * vx, dst_t * y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int nb = k / QK_K;
|
||||||
|
dequantize_block_q5_K<<<nb, 64, 0, stream>>>(vx, y);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static void dequantize_row_q6_K_cuda(const void * vx, dst_t * y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int nb = k / QK_K;
|
||||||
|
dequantize_block_q6_K<<<nb, 64, 0, stream>>>(vx, y);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static void dequantize_row_iq2_xxs_cuda(const void * vx, dst_t * y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int nb = k / QK_K;
|
||||||
|
dequantize_block_iq2_xxs<<<nb, 32, 0, stream>>>(vx, y);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static void dequantize_row_iq2_xs_cuda(const void * vx, dst_t * y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int nb = k / QK_K;
|
||||||
|
dequantize_block_iq2_xs<<<nb, 32, 0, stream>>>(vx, y);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static void dequantize_row_iq2_s_cuda(const void * vx, dst_t * y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int nb = k / QK_K;
|
||||||
|
dequantize_block_iq2_s<<<nb, 32, 0, stream>>>(vx, y);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static void dequantize_row_iq3_xxs_cuda(const void * vx, dst_t * y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int nb = k / QK_K;
|
||||||
|
dequantize_block_iq3_xxs<<<nb, 32, 0, stream>>>(vx, y);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static void dequantize_row_iq3_s_cuda(const void * vx, dst_t * y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int nb = k / QK_K;
|
||||||
|
dequantize_block_iq3_s<<<nb, 32, 0, stream>>>(vx, y);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static void dequantize_row_iq1_s_cuda(const void * vx, dst_t * y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int nb = k / QK_K;
|
||||||
|
dequantize_block_iq1_s<<<nb, 32, 0, stream>>>(vx, y);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static void dequantize_row_iq4_nl_cuda(const void * vx, dst_t * y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int nb = (k + QK_K - 1) / QK_K;
|
||||||
|
dequantize_block_iq4_nl<<<nb, 32, 0, stream>>>(vx, y);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static void dequantize_row_iq1_m_cuda(const void * vx, dst_t * y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int nb = k / QK_K;
|
||||||
|
dequantize_block_iq1_m<<<nb, 32, 0, stream>>>(vx, y);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename dst_t>
|
||||||
|
static void dequantize_row_iq4_xs_cuda(const void * vx, dst_t * y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int nb = (k + QK_K - 1) / QK_K;
|
||||||
|
dequantize_block_iq4_xs<<<nb, 32, 0, stream>>>(vx, y);
|
||||||
|
}
|
||||||
|
|
||||||
|
template <typename src_t, typename dst_t>
|
||||||
|
static __global__ void convert_unary(const void * __restrict__ vx, dst_t * __restrict__ y, const int64_t k) {
|
||||||
|
const int64_t i = (int64_t)blockDim.x*blockIdx.x + threadIdx.x;
|
||||||
|
|
||||||
|
if (i >= k) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const src_t * x = (src_t *) vx;
|
||||||
|
|
||||||
|
y[i] = x[i];
|
||||||
|
}
|
||||||
|
|
||||||
|
template <typename src_t, typename dst_t>
|
||||||
|
static void convert_unary_cuda(const void * __restrict__ vx, dst_t * __restrict__ y, const int64_t k, cudaStream_t stream) {
|
||||||
|
const int num_blocks = (k + CUDA_DEQUANTIZE_BLOCK_SIZE - 1) / CUDA_DEQUANTIZE_BLOCK_SIZE;
|
||||||
|
convert_unary<src_t><<<num_blocks, CUDA_DEQUANTIZE_BLOCK_SIZE, 0, stream>>>(vx, y, k);
|
||||||
|
}
|
||||||
|
|
||||||
|
to_fp16_cuda_t ggml_get_to_fp16_cuda(ggml_type type) {
|
||||||
|
switch (type) {
|
||||||
|
case GGML_TYPE_Q4_0:
|
||||||
|
return dequantize_row_q4_0_cuda;
|
||||||
|
case GGML_TYPE_Q4_1:
|
||||||
|
return dequantize_row_q4_1_cuda;
|
||||||
|
case GGML_TYPE_Q5_0:
|
||||||
|
return dequantize_block_cuda<QK5_0, QR5_0, dequantize_q5_0>;
|
||||||
|
case GGML_TYPE_Q5_1:
|
||||||
|
return dequantize_block_cuda<QK5_1, QR5_1, dequantize_q5_1>;
|
||||||
|
case GGML_TYPE_Q8_0:
|
||||||
|
if (ggml_cuda_info().devices[ggml_cuda_get_device()].cc >= CC_PASCAL) {
|
||||||
|
return dequantize_block_q8_0_f16_cuda;
|
||||||
|
}
|
||||||
|
return dequantize_block_cuda<QK8_0, QR8_0, dequantize_q8_0>;
|
||||||
|
case GGML_TYPE_Q2_K:
|
||||||
|
return dequantize_row_q2_K_cuda;
|
||||||
|
case GGML_TYPE_Q3_K:
|
||||||
|
return dequantize_row_q3_K_cuda;
|
||||||
|
case GGML_TYPE_Q4_K:
|
||||||
|
return dequantize_row_q4_K_cuda;
|
||||||
|
case GGML_TYPE_Q5_K:
|
||||||
|
return dequantize_row_q5_K_cuda;
|
||||||
|
case GGML_TYPE_Q6_K:
|
||||||
|
return dequantize_row_q6_K_cuda;
|
||||||
|
case GGML_TYPE_IQ2_XXS:
|
||||||
|
return dequantize_row_iq2_xxs_cuda;
|
||||||
|
case GGML_TYPE_IQ2_XS:
|
||||||
|
return dequantize_row_iq2_xs_cuda;
|
||||||
|
case GGML_TYPE_IQ2_S:
|
||||||
|
return dequantize_row_iq2_s_cuda;
|
||||||
|
case GGML_TYPE_IQ3_XXS:
|
||||||
|
return dequantize_row_iq3_xxs_cuda;
|
||||||
|
case GGML_TYPE_IQ1_S:
|
||||||
|
return dequantize_row_iq1_s_cuda;
|
||||||
|
case GGML_TYPE_IQ1_M:
|
||||||
|
return dequantize_row_iq1_m_cuda;
|
||||||
|
case GGML_TYPE_IQ4_NL:
|
||||||
|
return dequantize_row_iq4_nl_cuda;
|
||||||
|
case GGML_TYPE_IQ4_XS:
|
||||||
|
return dequantize_row_iq4_xs_cuda;
|
||||||
|
case GGML_TYPE_IQ3_S:
|
||||||
|
return dequantize_row_iq3_s_cuda;
|
||||||
|
case GGML_TYPE_F32:
|
||||||
|
return convert_unary_cuda<float>;
|
||||||
|
default:
|
||||||
|
return nullptr;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
to_fp32_cuda_t ggml_get_to_fp32_cuda(ggml_type type) {
|
||||||
|
switch (type) {
|
||||||
|
case GGML_TYPE_Q4_0:
|
||||||
|
return dequantize_row_q4_0_cuda;
|
||||||
|
case GGML_TYPE_Q4_1:
|
||||||
|
return dequantize_row_q4_1_cuda;
|
||||||
|
case GGML_TYPE_Q5_0:
|
||||||
|
return dequantize_block_cuda<QK5_0, QR5_0, dequantize_q5_0>;
|
||||||
|
case GGML_TYPE_Q5_1:
|
||||||
|
return dequantize_block_cuda<QK5_1, QR5_1, dequantize_q5_1>;
|
||||||
|
case GGML_TYPE_Q8_0:
|
||||||
|
return dequantize_block_cuda<QK8_0, QR8_0, dequantize_q8_0>;
|
||||||
|
case GGML_TYPE_Q2_K:
|
||||||
|
return dequantize_row_q2_K_cuda;
|
||||||
|
case GGML_TYPE_Q3_K:
|
||||||
|
return dequantize_row_q3_K_cuda;
|
||||||
|
case GGML_TYPE_Q4_K:
|
||||||
|
return dequantize_row_q4_K_cuda;
|
||||||
|
case GGML_TYPE_Q5_K:
|
||||||
|
return dequantize_row_q5_K_cuda;
|
||||||
|
case GGML_TYPE_Q6_K:
|
||||||
|
return dequantize_row_q6_K_cuda;
|
||||||
|
case GGML_TYPE_IQ2_XXS:
|
||||||
|
return dequantize_row_iq2_xxs_cuda;
|
||||||
|
case GGML_TYPE_IQ2_XS:
|
||||||
|
return dequantize_row_iq2_xs_cuda;
|
||||||
|
case GGML_TYPE_IQ2_S:
|
||||||
|
return dequantize_row_iq2_s_cuda;
|
||||||
|
case GGML_TYPE_IQ3_XXS:
|
||||||
|
return dequantize_row_iq3_xxs_cuda;
|
||||||
|
case GGML_TYPE_IQ1_S:
|
||||||
|
return dequantize_row_iq1_s_cuda;
|
||||||
|
case GGML_TYPE_IQ1_M:
|
||||||
|
return dequantize_row_iq1_m_cuda;
|
||||||
|
case GGML_TYPE_IQ4_NL:
|
||||||
|
return dequantize_row_iq4_nl_cuda;
|
||||||
|
case GGML_TYPE_IQ4_XS:
|
||||||
|
return dequantize_row_iq4_xs_cuda;
|
||||||
|
case GGML_TYPE_IQ3_S:
|
||||||
|
return dequantize_row_iq3_s_cuda;
|
||||||
|
case GGML_TYPE_F16:
|
||||||
|
return convert_unary_cuda<half>;
|
||||||
|
default:
|
||||||
|
return nullptr;
|
||||||
|
}
|
||||||
|
}
|
39
llama/ggml-cuda/convert.cuh
Normal file
39
llama/ggml-cuda/convert.cuh
Normal file
|
@ -0,0 +1,39 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "common.cuh"
|
||||||
|
|
||||||
|
#define CUDA_DEQUANTIZE_BLOCK_SIZE 256
|
||||||
|
|
||||||
|
template<typename T>
|
||||||
|
using to_t_cuda_t = void (*)(const void * __restrict__ x, T * __restrict__ y, int64_t k, cudaStream_t stream);
|
||||||
|
|
||||||
|
typedef to_t_cuda_t<float> to_fp32_cuda_t;
|
||||||
|
typedef to_t_cuda_t<half> to_fp16_cuda_t;
|
||||||
|
|
||||||
|
to_fp16_cuda_t ggml_get_to_fp16_cuda(ggml_type type);
|
||||||
|
|
||||||
|
to_fp32_cuda_t ggml_get_to_fp32_cuda(ggml_type type);
|
515
llama/ggml-cuda/cpy.cu
Normal file
515
llama/ggml-cuda/cpy.cu
Normal file
|
@ -0,0 +1,515 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "cpy.cuh"
|
||||||
|
|
||||||
|
typedef void (*cpy_kernel_t)(const char * cx, char * cdst);
|
||||||
|
|
||||||
|
static __device__ void cpy_1_f32_f32(const char * cxi, char * cdsti) {
|
||||||
|
const float * xi = (const float *) cxi;
|
||||||
|
float * dsti = (float *) cdsti;
|
||||||
|
|
||||||
|
*dsti = *xi;
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ void cpy_1_f32_f16(const char * cxi, char * cdsti) {
|
||||||
|
const float * xi = (const float *) cxi;
|
||||||
|
half * dsti = (half *) cdsti;
|
||||||
|
|
||||||
|
*dsti = __float2half(*xi);
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ void cpy_1_f16_f16(const char * cxi, char * cdsti) {
|
||||||
|
const half * xi = (const half *) cxi;
|
||||||
|
half * dsti = (half *) cdsti;
|
||||||
|
|
||||||
|
*dsti = *xi;
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ void cpy_1_f16_f32(const char * cxi, char * cdsti) {
|
||||||
|
const half * xi = (const half *) cxi;
|
||||||
|
float * dsti = (float *) cdsti;
|
||||||
|
|
||||||
|
*dsti = *xi;
|
||||||
|
}
|
||||||
|
|
||||||
|
template <cpy_kernel_t cpy_1>
|
||||||
|
static __global__ void cpy_f32_f16(const char * cx, char * cdst, const int ne,
|
||||||
|
const int ne00, const int ne01, const int ne02, const int nb00, const int nb01, const int nb02,
|
||||||
|
const int nb03, const int ne10, const int ne11, const int ne12, const int nb10, const int nb11,
|
||||||
|
const int nb12, const int nb13) {
|
||||||
|
const int64_t i = blockDim.x*blockIdx.x + threadIdx.x;
|
||||||
|
|
||||||
|
if (i >= ne) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
// determine indices i03/i13, i02/i12, i01/i11, i00/i10 as a function of index i of flattened tensor
|
||||||
|
// then combine those indices with the corresponding byte offsets to get the total offsets
|
||||||
|
const int64_t i03 = i/(ne00 * ne01 * ne02);
|
||||||
|
const int64_t i02 = (i - i03*ne00*ne01*ne02 )/ (ne00*ne01);
|
||||||
|
const int64_t i01 = (i - i03*ne00*ne01*ne02 - i02*ne01*ne00) / ne00;
|
||||||
|
const int64_t i00 = i - i03*ne00*ne01*ne02 - i02*ne01*ne00 - i01*ne00;
|
||||||
|
const int64_t x_offset = i00*nb00 + i01*nb01 + i02*nb02 + i03 * nb03;
|
||||||
|
|
||||||
|
const int64_t i13 = i/(ne10 * ne11 * ne12);
|
||||||
|
const int64_t i12 = (i - i13*ne10*ne11*ne12) / (ne10*ne11);
|
||||||
|
const int64_t i11 = (i - i13*ne10*ne11*ne12 - i12*ne10*ne11) / ne10;
|
||||||
|
const int64_t i10 = i - i13*ne10*ne11*ne12 - i12*ne10*ne11 - i11*ne10;
|
||||||
|
const int64_t dst_offset = i10*nb10 + i11*nb11 + i12*nb12 + i13 * nb13;
|
||||||
|
|
||||||
|
cpy_1(cx + x_offset, cdst + dst_offset);
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ void cpy_blck_f32_q8_0(const char * cxi, char * cdsti) {
|
||||||
|
const float * xi = (const float *) cxi;
|
||||||
|
block_q8_0 * dsti = (block_q8_0 *) cdsti;
|
||||||
|
|
||||||
|
float amax = 0.0f; // absolute max
|
||||||
|
|
||||||
|
for (int j = 0; j < QK8_0; j++) {
|
||||||
|
const float v = xi[j];
|
||||||
|
amax = fmaxf(amax, fabsf(v));
|
||||||
|
}
|
||||||
|
|
||||||
|
const float d = amax / ((1 << 7) - 1);
|
||||||
|
const float id = d ? 1.0f/d : 0.0f;
|
||||||
|
|
||||||
|
dsti->d = d;
|
||||||
|
|
||||||
|
for (int j = 0; j < QK8_0; ++j) {
|
||||||
|
const float x0 = xi[j]*id;
|
||||||
|
|
||||||
|
dsti->qs[j] = roundf(x0);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ void cpy_blck_f32_q4_0(const char * cxi, char * cdsti) {
|
||||||
|
const float * xi = (const float *) cxi;
|
||||||
|
block_q4_0 * dsti = (block_q4_0 *) cdsti;
|
||||||
|
|
||||||
|
float amax = 0.0f;
|
||||||
|
float vmax = 0.0f;
|
||||||
|
|
||||||
|
for (int j = 0; j < QK4_0; ++j) {
|
||||||
|
const float v = xi[j];
|
||||||
|
if (amax < fabsf(v)) {
|
||||||
|
amax = fabsf(v);
|
||||||
|
vmax = v;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
const float d = vmax / -8;
|
||||||
|
const float id = d ? 1.0f/d : 0.0f;
|
||||||
|
|
||||||
|
dsti->d = d;
|
||||||
|
|
||||||
|
for (int j = 0; j < QK4_0/2; ++j) {
|
||||||
|
const float x0 = xi[0 + j]*id;
|
||||||
|
const float x1 = xi[QK4_0/2 + j]*id;
|
||||||
|
|
||||||
|
const uint8_t xi0 = min(15, (int8_t)(x0 + 8.5f));
|
||||||
|
const uint8_t xi1 = min(15, (int8_t)(x1 + 8.5f));
|
||||||
|
|
||||||
|
dsti->qs[j] = xi0;
|
||||||
|
dsti->qs[j] |= xi1 << 4;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ void cpy_blck_f32_q4_1(const char * cxi, char * cdsti) {
|
||||||
|
const float * xi = (const float *) cxi;
|
||||||
|
block_q4_1 * dsti = (block_q4_1 *) cdsti;
|
||||||
|
|
||||||
|
float vmin = FLT_MAX;
|
||||||
|
float vmax = -FLT_MAX;
|
||||||
|
|
||||||
|
for (int j = 0; j < QK4_1; ++j) {
|
||||||
|
const float v = xi[j];
|
||||||
|
|
||||||
|
if (v < vmin) vmin = v;
|
||||||
|
if (v > vmax) vmax = v;
|
||||||
|
}
|
||||||
|
|
||||||
|
const float d = (vmax - vmin) / ((1 << 4) - 1);
|
||||||
|
const float id = d ? 1.0f/d : 0.0f;
|
||||||
|
|
||||||
|
dsti->dm.x = d;
|
||||||
|
dsti->dm.y = vmin;
|
||||||
|
|
||||||
|
for (int j = 0; j < QK4_1/2; ++j) {
|
||||||
|
const float x0 = (xi[0 + j] - vmin)*id;
|
||||||
|
const float x1 = (xi[QK4_1/2 + j] - vmin)*id;
|
||||||
|
|
||||||
|
const uint8_t xi0 = min(15, (int8_t)(x0 + 0.5f));
|
||||||
|
const uint8_t xi1 = min(15, (int8_t)(x1 + 0.5f));
|
||||||
|
|
||||||
|
dsti->qs[j] = xi0;
|
||||||
|
dsti->qs[j] |= xi1 << 4;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ void cpy_blck_f32_q5_0(const char * cxi, char * cdsti) {
|
||||||
|
const float * xi = (const float *) cxi;
|
||||||
|
block_q5_0 * dsti = (block_q5_0 *) cdsti;
|
||||||
|
|
||||||
|
float amax = 0.0f;
|
||||||
|
float vmax = 0.0f;
|
||||||
|
|
||||||
|
for (int j = 0; j < QK5_0; ++j) {
|
||||||
|
const float v = xi[j];
|
||||||
|
if (amax < fabsf(v)) {
|
||||||
|
amax = fabsf(v);
|
||||||
|
vmax = v;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
const float d = vmax / -16;
|
||||||
|
const float id = d ? 1.0f/d : 0.0f;
|
||||||
|
|
||||||
|
dsti->d = d;
|
||||||
|
|
||||||
|
uint32_t qh = 0;
|
||||||
|
for (int j = 0; j < QK5_0/2; ++j) {
|
||||||
|
const float x0 = xi[0 + j]*id;
|
||||||
|
const float x1 = xi[QK5_0/2 + j]*id;
|
||||||
|
|
||||||
|
const uint8_t xi0 = min(31, (int8_t)(x0 + 16.5f));
|
||||||
|
const uint8_t xi1 = min(31, (int8_t)(x1 + 16.5f));
|
||||||
|
|
||||||
|
dsti->qs[j] = (xi0 & 0xf) | ((xi1 & 0xf) << 4);
|
||||||
|
qh |= ((xi0 & 0x10u) >> 4) << (j + 0);
|
||||||
|
qh |= ((xi1 & 0x10u) >> 4) << (j + QK5_0/2);
|
||||||
|
}
|
||||||
|
memcpy(dsti->qh, &qh, sizeof(qh));
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ void cpy_blck_f32_q5_1(const char * cxi, char * cdsti) {
|
||||||
|
const float * xi = (const float *) cxi;
|
||||||
|
block_q5_1 * dsti = (block_q5_1 *) cdsti;
|
||||||
|
|
||||||
|
float min = xi[0];
|
||||||
|
float max = xi[0];
|
||||||
|
|
||||||
|
for (int j = 1; j < QK5_1; ++j) {
|
||||||
|
const float v = xi[j];
|
||||||
|
min = v < min ? v : min;
|
||||||
|
max = v > max ? v : max;
|
||||||
|
}
|
||||||
|
|
||||||
|
const float d = (max - min) / 31;
|
||||||
|
const float id = d ? 1.0f/d : 0.0f;
|
||||||
|
|
||||||
|
dsti->dm.x = d;
|
||||||
|
dsti->dm.y = min;
|
||||||
|
|
||||||
|
uint32_t qh = 0;
|
||||||
|
for (int j = 0; j < QK5_1/2; ++j) {
|
||||||
|
const float x0 = (xi[0 + j] - min)*id;
|
||||||
|
const float x1 = (xi[QK5_1/2 + j] - min)*id;
|
||||||
|
|
||||||
|
const uint8_t xi0 = (uint8_t)(x0 + 0.5f);
|
||||||
|
const uint8_t xi1 = (uint8_t)(x1 + 0.5f);
|
||||||
|
|
||||||
|
dsti->qs[j] = (xi0 & 0xf) | ((xi1 & 0xf) << 4);
|
||||||
|
qh |= ((xi0 & 0x10u) >> 4) << (j + 0);
|
||||||
|
qh |= ((xi1 & 0x10u) >> 4) << (j + QK5_1/2);
|
||||||
|
}
|
||||||
|
memcpy(dsti->qh, &qh, sizeof(qh));
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
static __device__ __forceinline__ int best_index_int8(int n, const int8_t * val, float x) {
|
||||||
|
if (x <= val[0]) return 0;
|
||||||
|
if (x >= val[n-1]) return n-1;
|
||||||
|
int ml = 0, mu = n-1;
|
||||||
|
while (mu-ml > 1) {
|
||||||
|
int mav = (ml+mu)/2;
|
||||||
|
if (x < val[mav]) mu = mav; else ml = mav;
|
||||||
|
}
|
||||||
|
return x - val[mu-1] < val[mu] - x ? mu-1 : mu;
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ void cpy_blck_f32_iq4_nl(const char * cxi, char * cdsti) {
|
||||||
|
const float * xi = (const float *) cxi;
|
||||||
|
block_iq4_nl * dsti = (block_iq4_nl *) cdsti;
|
||||||
|
|
||||||
|
float amax = 0.0f;
|
||||||
|
float vmax = 0.0f;
|
||||||
|
|
||||||
|
for (int j = 0; j < QK4_NL; ++j) {
|
||||||
|
const float v = xi[j];
|
||||||
|
if (amax < fabsf(v)) {
|
||||||
|
amax = fabsf(v);
|
||||||
|
vmax = v;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
float d = vmax / kvalues_iq4nl[0];
|
||||||
|
const float id = d ? 1.0f/d : 0.0f;
|
||||||
|
|
||||||
|
float sumqx = 0, sumq2 = 0;
|
||||||
|
for (int j = 0; j < QK4_NL/2; ++j) {
|
||||||
|
const float x0 = xi[0 + j]*id;
|
||||||
|
const float x1 = xi[QK4_NL/2 + j]*id;
|
||||||
|
const uint8_t xi0 = best_index_int8(16, kvalues_iq4nl, x0);
|
||||||
|
const uint8_t xi1 = best_index_int8(16, kvalues_iq4nl, x1);
|
||||||
|
dsti->qs[j] = xi0 | (xi1 << 4);
|
||||||
|
const float v0 = kvalues_iq4nl[xi0];
|
||||||
|
const float v1 = kvalues_iq4nl[xi1];
|
||||||
|
const float w0 = xi[0 + j]*xi[0 + j];
|
||||||
|
const float w1 = xi[QK4_NL/2 + j]*xi[QK4_NL/2 + j];
|
||||||
|
sumqx += w0*v0*xi[j] + w1*v1*xi[QK4_NL/2 + j];
|
||||||
|
sumq2 += w0*v0*v0 + w1*v1*v1;
|
||||||
|
}
|
||||||
|
|
||||||
|
dsti->d = sumq2 > 0 ? sumqx/sumq2 : d;
|
||||||
|
}
|
||||||
|
|
||||||
|
template <cpy_kernel_t cpy_blck, int qk>
|
||||||
|
static __global__ void cpy_f32_q(const char * cx, char * cdst, const int ne,
|
||||||
|
const int ne00, const int ne01, const int ne02, const int nb00, const int nb01, const int nb02,
|
||||||
|
const int nb03, const int ne10, const int ne11, const int ne12, const int nb10, const int nb11,
|
||||||
|
const int nb12, const int nb13) {
|
||||||
|
const int i = (blockDim.x*blockIdx.x + threadIdx.x)*qk;
|
||||||
|
|
||||||
|
if (i >= ne) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const int i03 = i/(ne00 * ne01 * ne02);
|
||||||
|
const int i02 = (i - i03*ne00*ne01*ne02 )/ (ne00*ne01);
|
||||||
|
const int i01 = (i - i03*ne00*ne01*ne02 - i02*ne01*ne00) / ne00;
|
||||||
|
const int i00 = i - i03*ne00*ne01*ne02 - i02*ne01*ne00 - i01*ne00;
|
||||||
|
const int x_offset = i00*nb00 + i01*nb01 + i02*nb02 + i03 * nb03;
|
||||||
|
|
||||||
|
const int i13 = i/(ne10 * ne11 * ne12);
|
||||||
|
const int i12 = (i - i13*ne10*ne11*ne12) / (ne10*ne11);
|
||||||
|
const int i11 = (i - i13*ne10*ne11*ne12 - i12*ne10*ne11) / ne10;
|
||||||
|
const int i10 = i - i13*ne10*ne11*ne12 - i12*ne10*ne11 - i11*ne10;
|
||||||
|
const int dst_offset = (i10/qk)*nb10 + i11*nb11 + i12*nb12 + i13*nb13;
|
||||||
|
|
||||||
|
cpy_blck(cx + x_offset, cdst + dst_offset);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void ggml_cpy_f16_f32_cuda(
|
||||||
|
const char * cx, char * cdst, const int ne,
|
||||||
|
const int ne00, const int ne01, const int ne02, const int nb00, const int nb01, const int nb02,
|
||||||
|
const int nb03, const int ne10, const int ne11, const int ne12, const int nb10, const int nb11, const int nb12, const int nb13, cudaStream_t stream) {
|
||||||
|
|
||||||
|
const int num_blocks = (ne + CUDA_CPY_BLOCK_SIZE - 1) / CUDA_CPY_BLOCK_SIZE;
|
||||||
|
cpy_f32_f16<cpy_1_f16_f32><<<num_blocks, CUDA_CPY_BLOCK_SIZE, 0, stream>>>
|
||||||
|
(cx, cdst, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void ggml_cpy_f32_f32_cuda(
|
||||||
|
const char * cx, char * cdst, const int ne,
|
||||||
|
const int ne00, const int ne01, const int ne02, const int nb00, const int nb01, const int nb02,
|
||||||
|
const int nb03, const int ne10, const int ne11, const int ne12, const int nb10, const int nb11, const int nb12, const int nb13, cudaStream_t stream) {
|
||||||
|
|
||||||
|
const int num_blocks = (ne + CUDA_CPY_BLOCK_SIZE - 1) / CUDA_CPY_BLOCK_SIZE;
|
||||||
|
cpy_f32_f16<cpy_1_f32_f32><<<num_blocks, CUDA_CPY_BLOCK_SIZE, 0, stream>>>
|
||||||
|
(cx, cdst, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void ggml_cpy_f32_f16_cuda(
|
||||||
|
const char * cx, char * cdst, const int ne,
|
||||||
|
const int ne00, const int ne01, const int ne02, const int nb00, const int nb01, const int nb02,
|
||||||
|
const int nb03, const int ne10, const int ne11, const int ne12, const int nb10, const int nb11, const int nb12, const int nb13, cudaStream_t stream) {
|
||||||
|
|
||||||
|
const int num_blocks = (ne + CUDA_CPY_BLOCK_SIZE - 1) / CUDA_CPY_BLOCK_SIZE;
|
||||||
|
cpy_f32_f16<cpy_1_f32_f16><<<num_blocks, CUDA_CPY_BLOCK_SIZE, 0, stream>>>
|
||||||
|
(cx, cdst, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void ggml_cpy_f32_q8_0_cuda(
|
||||||
|
const char * cx, char * cdst, const int ne,
|
||||||
|
const int ne00, const int ne01, const int ne02, const int nb00, const int nb01, const int nb02,
|
||||||
|
const int nb03, const int ne10, const int ne11, const int ne12, const int nb10, const int nb11, const int nb12, const int nb13, cudaStream_t stream) {
|
||||||
|
|
||||||
|
GGML_ASSERT(ne % QK8_0 == 0);
|
||||||
|
const int num_blocks = ne / QK8_0;
|
||||||
|
cpy_f32_q<cpy_blck_f32_q8_0, QK8_0><<<num_blocks, 1, 0, stream>>>
|
||||||
|
(cx, cdst, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void ggml_cpy_f32_q4_0_cuda(
|
||||||
|
const char * cx, char * cdst, const int ne,
|
||||||
|
const int ne00, const int ne01, const int ne02, const int nb00, const int nb01, const int nb02,
|
||||||
|
const int nb03, const int ne10, const int ne11, const int ne12, const int nb10, const int nb11, const int nb12, const int nb13, cudaStream_t stream) {
|
||||||
|
|
||||||
|
GGML_ASSERT(ne % QK4_0 == 0);
|
||||||
|
const int num_blocks = ne / QK4_0;
|
||||||
|
cpy_f32_q<cpy_blck_f32_q4_0, QK4_0><<<num_blocks, 1, 0, stream>>>
|
||||||
|
(cx, cdst, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void ggml_cpy_f32_q4_1_cuda(
|
||||||
|
const char * cx, char * cdst, const int ne,
|
||||||
|
const int ne00, const int ne01, const int ne02, const int nb00, const int nb01, const int nb02,
|
||||||
|
const int nb03, const int ne10, const int ne11, const int ne12, const int nb10, const int nb11, const int nb12, const int nb13, cudaStream_t stream) {
|
||||||
|
|
||||||
|
GGML_ASSERT(ne % QK4_1 == 0);
|
||||||
|
const int num_blocks = ne / QK4_1;
|
||||||
|
cpy_f32_q<cpy_blck_f32_q4_1, QK4_1><<<num_blocks, 1, 0, stream>>>
|
||||||
|
(cx, cdst, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void ggml_cpy_f32_q5_0_cuda(
|
||||||
|
const char * cx, char * cdst, const int ne,
|
||||||
|
const int ne00, const int ne01, const int ne02, const int nb00, const int nb01, const int nb02,
|
||||||
|
const int nb03, const int ne10, const int ne11, const int ne12, const int nb10, const int nb11, const int nb12, const int nb13, cudaStream_t stream) {
|
||||||
|
|
||||||
|
GGML_ASSERT(ne % QK5_0 == 0);
|
||||||
|
const int num_blocks = ne / QK5_0;
|
||||||
|
cpy_f32_q<cpy_blck_f32_q5_0, QK5_0><<<num_blocks, 1, 0, stream>>>
|
||||||
|
(cx, cdst, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void ggml_cpy_f32_q5_1_cuda(
|
||||||
|
const char * cx, char * cdst, const int ne,
|
||||||
|
const int ne00, const int ne01, const int ne02, const int nb00, const int nb01, const int nb02,
|
||||||
|
const int nb03, const int ne10, const int ne11, const int ne12, const int nb10, const int nb11, const int nb12, const int nb13, cudaStream_t stream) {
|
||||||
|
|
||||||
|
GGML_ASSERT(ne % QK5_1 == 0);
|
||||||
|
const int num_blocks = ne / QK5_1;
|
||||||
|
cpy_f32_q<cpy_blck_f32_q5_1, QK5_1><<<num_blocks, 1, 0, stream>>>
|
||||||
|
(cx, cdst, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void ggml_cpy_f32_iq4_nl_cuda(
|
||||||
|
const char * cx, char * cdst, const int ne,
|
||||||
|
const int ne00, const int ne01, const int ne02, const int nb00, const int nb01, const int nb02,
|
||||||
|
const int nb03, const int ne10, const int ne11, const int ne12, const int nb10, const int nb11, const int nb12, const int nb13, cudaStream_t stream) {
|
||||||
|
|
||||||
|
GGML_ASSERT(ne % QK4_NL == 0);
|
||||||
|
const int num_blocks = ne / QK4_NL;
|
||||||
|
cpy_f32_q<cpy_blck_f32_iq4_nl, QK4_NL><<<num_blocks, 1, 0, stream>>>
|
||||||
|
(cx, cdst, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void ggml_cpy_f16_f16_cuda(
|
||||||
|
const char * cx, char * cdst, const int ne,
|
||||||
|
const int ne00, const int ne01, const int ne02, const int nb00, const int nb01, const int nb02,
|
||||||
|
const int nb03, const int ne10, const int ne11, const int ne12, const int nb10, const int nb11, const int nb12, const int nb13, cudaStream_t stream) {
|
||||||
|
|
||||||
|
const int num_blocks = (ne + CUDA_CPY_BLOCK_SIZE - 1) / CUDA_CPY_BLOCK_SIZE;
|
||||||
|
cpy_f32_f16<cpy_1_f16_f16><<<num_blocks, CUDA_CPY_BLOCK_SIZE, 0, stream>>>
|
||||||
|
(cx, cdst, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13);
|
||||||
|
}
|
||||||
|
|
||||||
|
void ggml_cuda_cpy(ggml_backend_cuda_context & ctx, const ggml_tensor * src0, ggml_tensor * src1) {
|
||||||
|
const int64_t ne = ggml_nelements(src0);
|
||||||
|
GGML_ASSERT(ne == ggml_nelements(src1));
|
||||||
|
|
||||||
|
GGML_ASSERT(ggml_nbytes(src0) <= INT_MAX);
|
||||||
|
GGML_ASSERT(ggml_nbytes(src1) <= INT_MAX);
|
||||||
|
|
||||||
|
const int64_t ne00 = src0->ne[0];
|
||||||
|
const int64_t ne01 = src0->ne[1];
|
||||||
|
const int64_t ne02 = src0->ne[2];
|
||||||
|
|
||||||
|
//GGML_ASSERT(src0->ne[3] == 1);
|
||||||
|
|
||||||
|
const int64_t nb00 = src0->nb[0];
|
||||||
|
const int64_t nb01 = src0->nb[1];
|
||||||
|
const int64_t nb02 = src0->nb[2];
|
||||||
|
const int64_t nb03 = src0->nb[3];
|
||||||
|
|
||||||
|
const int64_t ne10 = src1->ne[0];
|
||||||
|
const int64_t ne11 = src1->ne[1];
|
||||||
|
const int64_t ne12 = src1->ne[2];
|
||||||
|
|
||||||
|
//GGML_ASSERT(src1->ne[3] == 1);
|
||||||
|
|
||||||
|
const int64_t nb10 = src1->nb[0];
|
||||||
|
const int64_t nb11 = src1->nb[1];
|
||||||
|
const int64_t nb12 = src1->nb[2];
|
||||||
|
const int64_t nb13 = src1->nb[3];
|
||||||
|
|
||||||
|
cudaStream_t main_stream = ctx.stream();
|
||||||
|
|
||||||
|
char * src0_ddc = (char *) src0->data;
|
||||||
|
char * src1_ddc = (char *) src1->data;
|
||||||
|
|
||||||
|
if (src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_F32) {
|
||||||
|
ggml_cpy_f32_f32_cuda (src0_ddc, src1_ddc, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13, main_stream);
|
||||||
|
} else if (src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_F16) {
|
||||||
|
ggml_cpy_f32_f16_cuda (src0_ddc, src1_ddc, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13, main_stream);
|
||||||
|
} else if (src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_Q8_0) {
|
||||||
|
ggml_cpy_f32_q8_0_cuda(src0_ddc, src1_ddc, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13, main_stream);
|
||||||
|
} else if (src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_Q4_0) {
|
||||||
|
ggml_cpy_f32_q4_0_cuda(src0_ddc, src1_ddc, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13, main_stream);
|
||||||
|
} else if (src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_Q4_1) {
|
||||||
|
ggml_cpy_f32_q4_1_cuda(src0_ddc, src1_ddc, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13, main_stream);
|
||||||
|
} else if (src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_Q5_0) {
|
||||||
|
ggml_cpy_f32_q5_0_cuda(src0_ddc, src1_ddc, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13, main_stream);
|
||||||
|
} else if (src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_IQ4_NL) {
|
||||||
|
ggml_cpy_f32_iq4_nl_cuda(src0_ddc, src1_ddc, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13, main_stream);
|
||||||
|
} else if (src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_Q5_1) {
|
||||||
|
ggml_cpy_f32_q5_1_cuda(src0_ddc, src1_ddc, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13, main_stream);
|
||||||
|
} else if (src0->type == GGML_TYPE_F16 && src1->type == GGML_TYPE_F16) {
|
||||||
|
ggml_cpy_f16_f16_cuda (src0_ddc, src1_ddc, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13, main_stream);
|
||||||
|
} else if (src0->type == GGML_TYPE_F16 && src1->type == GGML_TYPE_F32) {
|
||||||
|
ggml_cpy_f16_f32_cuda (src0_ddc, src1_ddc, ne, ne00, ne01, ne02, nb00, nb01, nb02, nb03, ne10, ne11, ne12, nb10, nb11, nb12, nb13, main_stream);
|
||||||
|
} else {
|
||||||
|
fprintf(stderr, "%s: unsupported type combination (%s to %s)\n", __func__,
|
||||||
|
ggml_type_name(src0->type), ggml_type_name(src1->type));
|
||||||
|
GGML_ABORT("fatal error");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
void ggml_cuda_dup(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
const ggml_tensor * src0 = dst->src[0];
|
||||||
|
ggml_cuda_cpy(ctx, src0, dst);
|
||||||
|
}
|
||||||
|
|
||||||
|
void* ggml_cuda_cpy_fn(const ggml_tensor * src0, ggml_tensor * src1) {
|
||||||
|
if (src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_F32) {
|
||||||
|
return (void*) cpy_f32_f16<cpy_1_f32_f32>;
|
||||||
|
} else if (src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_F16) {
|
||||||
|
return (void*) cpy_f32_f16<cpy_1_f32_f16>;
|
||||||
|
} else if (src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_Q8_0) {
|
||||||
|
return (void*) cpy_f32_q<cpy_blck_f32_q8_0, QK8_0>;
|
||||||
|
} else if (src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_Q4_0) {
|
||||||
|
return (void*) cpy_f32_q<cpy_blck_f32_q4_0, QK4_0>;
|
||||||
|
} else if (src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_Q4_1) {
|
||||||
|
return (void*) cpy_f32_q<cpy_blck_f32_q4_1, QK4_1>;
|
||||||
|
} else if (src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_Q5_0) {
|
||||||
|
return (void*) cpy_f32_q<cpy_blck_f32_q5_0, QK5_0>;
|
||||||
|
} else if (src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_IQ4_NL) {
|
||||||
|
return (void*) cpy_f32_q<cpy_blck_f32_iq4_nl, QK4_NL>;
|
||||||
|
} else if (src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_Q5_1) {
|
||||||
|
return (void*) cpy_f32_q<cpy_blck_f32_q5_1, QK5_1>;
|
||||||
|
} else if (src0->type == GGML_TYPE_F16 && src1->type == GGML_TYPE_F16) {
|
||||||
|
return (void*) cpy_f32_f16<cpy_1_f32_f16>;
|
||||||
|
} else if (src0->type == GGML_TYPE_F16 && src1->type == GGML_TYPE_F32) {
|
||||||
|
return (void*) cpy_f32_f16<cpy_1_f16_f32>;
|
||||||
|
} else {
|
||||||
|
fprintf(stderr, "%s: unsupported type combination (%s to %s)\n", __func__,
|
||||||
|
ggml_type_name(src0->type), ggml_type_name(src1->type));
|
||||||
|
GGML_ABORT("fatal error");
|
||||||
|
}
|
||||||
|
}
|
35
llama/ggml-cuda/cpy.cuh
Normal file
35
llama/ggml-cuda/cpy.cuh
Normal file
|
@ -0,0 +1,35 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "common.cuh"
|
||||||
|
|
||||||
|
#define CUDA_CPY_BLOCK_SIZE 32
|
||||||
|
|
||||||
|
void ggml_cuda_cpy(ggml_backend_cuda_context & ctx, const ggml_tensor * src0, ggml_tensor * src1);
|
||||||
|
|
||||||
|
void ggml_cuda_dup(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
|
||||||
|
|
||||||
|
void* ggml_cuda_cpy_fn(const ggml_tensor * src0, ggml_tensor * src1);
|
132
llama/ggml-cuda/cross-entropy-loss.cu
Normal file
132
llama/ggml-cuda/cross-entropy-loss.cu
Normal file
|
@ -0,0 +1,132 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "common.cuh"
|
||||||
|
#include "cross-entropy-loss.cuh"
|
||||||
|
#include "sumrows.cuh"
|
||||||
|
|
||||||
|
#include <cmath>
|
||||||
|
#include <cstdint>
|
||||||
|
|
||||||
|
static __global__ void cross_entropy_loss_f32(const float * logits, const float * labels, float * dst, const int nclasses, const int k) {
|
||||||
|
const int warp_id = threadIdx.x / WARP_SIZE;
|
||||||
|
const int lane_id = threadIdx.x % WARP_SIZE;
|
||||||
|
const int i0 = blockDim.x*blockIdx.x + warp_id*WARP_SIZE;
|
||||||
|
|
||||||
|
const int ne_tmp = WARP_SIZE*nclasses;
|
||||||
|
|
||||||
|
extern __shared__ float tmp_all[];
|
||||||
|
float * tmp_logits = tmp_all + (2*warp_id + 0)*ne_tmp;
|
||||||
|
float * tmp_labels = tmp_all + (2*warp_id + 1)*ne_tmp;
|
||||||
|
|
||||||
|
// Each warp first loads ne_tmp logits/labels into shared memory:
|
||||||
|
for (int i = lane_id; i < ne_tmp; i += WARP_SIZE) {
|
||||||
|
const int ig = i0*nclasses + i; // ig == i global
|
||||||
|
|
||||||
|
tmp_logits[i] = ig < k*nclasses ? logits[ig] : 0.0f;
|
||||||
|
tmp_labels[i] = ig < k*nclasses ? labels[ig] : 0.0f;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Each thread in the warp then calculates the cross entropy loss for a single row.
|
||||||
|
// TODO: pad in order to avoid shared memory bank conflicts.
|
||||||
|
|
||||||
|
// Find maximum for softmax:
|
||||||
|
float max = -INFINITY;
|
||||||
|
for (int i = 0; i < nclasses; ++i) {
|
||||||
|
max = fmaxf(max, tmp_logits[lane_id*nclasses + i]);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Calculate log(softmax(logits)) which is just logits - max:
|
||||||
|
float sum = 0.0f;
|
||||||
|
for (int i = 0; i < nclasses; ++i) {
|
||||||
|
float val = tmp_logits[lane_id*nclasses + i] - max;
|
||||||
|
sum += expf(val);
|
||||||
|
tmp_logits[lane_id*nclasses + i] = val;
|
||||||
|
}
|
||||||
|
sum = logf(sum);
|
||||||
|
|
||||||
|
// log(exp(logits - max) / sum) = (logits - max) - log(sum)
|
||||||
|
float loss = 0.0f;
|
||||||
|
for (int i = 0; i < nclasses; ++i) {
|
||||||
|
loss += (tmp_logits[lane_id*nclasses + i] - sum) * tmp_labels[lane_id*nclasses + i];
|
||||||
|
}
|
||||||
|
loss = -warp_reduce_sum(loss) / (float)k;
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
if (lane_id == 0) {
|
||||||
|
tmp_all[warp_id] = loss;
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
if (warp_id != 0) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
loss = lane_id < CUDA_CROSS_ENTROPY_LOSS_BLOCK_SIZE/WARP_SIZE ? tmp_all[lane_id] : 0.0f;
|
||||||
|
loss = warp_reduce_sum(loss);
|
||||||
|
|
||||||
|
if (lane_id != 0) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
dst[blockIdx.x] = loss;
|
||||||
|
}
|
||||||
|
|
||||||
|
void ggml_cuda_cross_entropy_loss(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
const ggml_tensor * src0 = dst->src[0];
|
||||||
|
const ggml_tensor * src1 = dst->src[1];
|
||||||
|
|
||||||
|
GGML_ASSERT(src0->type == GGML_TYPE_F32);
|
||||||
|
GGML_ASSERT(src1->type == GGML_TYPE_F32);
|
||||||
|
GGML_ASSERT( dst->type == GGML_TYPE_F32);
|
||||||
|
|
||||||
|
GGML_ASSERT(ggml_is_contiguous(src0));
|
||||||
|
GGML_ASSERT(ggml_is_contiguous(src1));
|
||||||
|
GGML_ASSERT(ggml_is_contiguous(dst));
|
||||||
|
|
||||||
|
const int64_t ne00 = src0->ne[0];
|
||||||
|
const int64_t nrows = ggml_nrows(src0);
|
||||||
|
|
||||||
|
const float * src0_d = (const float *) src0->data;
|
||||||
|
const float * src1_d = (const float *) src1->data;
|
||||||
|
float * dst_d = (float *) dst->data;
|
||||||
|
|
||||||
|
ggml_cuda_pool & pool = ctx.pool();
|
||||||
|
cudaStream_t stream = ctx.stream();
|
||||||
|
|
||||||
|
const dim3 blocks_dim(CUDA_CROSS_ENTROPY_LOSS_BLOCK_SIZE, 1, 1);
|
||||||
|
const dim3 blocks_num((nrows + CUDA_CROSS_ENTROPY_LOSS_BLOCK_SIZE - 1) / CUDA_CROSS_ENTROPY_LOSS_BLOCK_SIZE, 1, 1);
|
||||||
|
const int shmem = 2*CUDA_CROSS_ENTROPY_LOSS_BLOCK_SIZE*ne00*sizeof(float);
|
||||||
|
|
||||||
|
ggml_cuda_pool_alloc<float> dst_tmp(pool, blocks_num.x);
|
||||||
|
|
||||||
|
cross_entropy_loss_f32<<<blocks_num, blocks_dim, shmem, stream>>>(src0_d, src1_d, dst_tmp.ptr, ne00, nrows);
|
||||||
|
|
||||||
|
// Combine results from individual blocks:
|
||||||
|
sum_rows_f32_cuda(dst_tmp.ptr, dst_d, blocks_num.x, 1, stream);
|
||||||
|
}
|
31
llama/ggml-cuda/cross-entropy-loss.cuh
Normal file
31
llama/ggml-cuda/cross-entropy-loss.cuh
Normal file
|
@ -0,0 +1,31 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "common.cuh"
|
||||||
|
|
||||||
|
#define CUDA_CROSS_ENTROPY_LOSS_BLOCK_SIZE 256
|
||||||
|
|
||||||
|
void ggml_cuda_cross_entropy_loss(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
|
129
llama/ggml-cuda/dequantize.cuh
Normal file
129
llama/ggml-cuda/dequantize.cuh
Normal file
|
@ -0,0 +1,129 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "common.cuh"
|
||||||
|
|
||||||
|
static __device__ __forceinline__ void dequantize_q4_0(const void * vx, const int64_t ib, const int iqs, dfloat2 & v){
|
||||||
|
const block_q4_0 * x = (const block_q4_0 *) vx;
|
||||||
|
|
||||||
|
const dfloat d = x[ib].d;
|
||||||
|
|
||||||
|
const int vui = x[ib].qs[iqs];
|
||||||
|
|
||||||
|
v.x = vui & 0xF;
|
||||||
|
v.y = vui >> 4;
|
||||||
|
|
||||||
|
#ifdef GGML_CUDA_F16
|
||||||
|
v = __hsub2(v, {8.0f, 8.0f});
|
||||||
|
v = __hmul2(v, {d, d});
|
||||||
|
#else
|
||||||
|
v.x = (v.x - 8.0f) * d;
|
||||||
|
v.y = (v.y - 8.0f) * d;
|
||||||
|
#endif // GGML_CUDA_F16
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ void dequantize_q4_1(const void * vx, const int64_t ib, const int iqs, dfloat2 & v){
|
||||||
|
const block_q4_1 * x = (const block_q4_1 *) vx;
|
||||||
|
|
||||||
|
const dfloat d = __low2half(x[ib].dm);
|
||||||
|
const dfloat m = __high2half(x[ib].dm);
|
||||||
|
|
||||||
|
const int vui = x[ib].qs[iqs];
|
||||||
|
|
||||||
|
v.x = vui & 0xF;
|
||||||
|
v.y = vui >> 4;
|
||||||
|
|
||||||
|
#ifdef GGML_CUDA_F16
|
||||||
|
v = __hmul2(v, {d, d});
|
||||||
|
v = __hadd2(v, {m, m});
|
||||||
|
#else
|
||||||
|
v.x = (v.x * d) + m;
|
||||||
|
v.y = (v.y * d) + m;
|
||||||
|
#endif // GGML_CUDA_F16
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ void dequantize_q5_0(const void * vx, const int64_t ib, const int iqs, dfloat2 & v){
|
||||||
|
const block_q5_0 * x = (const block_q5_0 *) vx;
|
||||||
|
|
||||||
|
const dfloat d = x[ib].d;
|
||||||
|
|
||||||
|
uint32_t qh;
|
||||||
|
memcpy(&qh, x[ib].qh, sizeof(qh));
|
||||||
|
|
||||||
|
const int xh_0 = ((qh >> (iqs + 0)) << 4) & 0x10;
|
||||||
|
const int xh_1 = ((qh >> (iqs + 12)) ) & 0x10;
|
||||||
|
|
||||||
|
v.x = ((x[ib].qs[iqs] & 0xf) | xh_0);
|
||||||
|
v.y = ((x[ib].qs[iqs] >> 4) | xh_1);
|
||||||
|
|
||||||
|
#ifdef GGML_CUDA_F16
|
||||||
|
v = __hsub2(v, {16.0f, 16.0f});
|
||||||
|
v = __hmul2(v, {d, d});
|
||||||
|
#else
|
||||||
|
v.x = (v.x - 16.0f) * d;
|
||||||
|
v.y = (v.y - 16.0f) * d;
|
||||||
|
#endif // GGML_CUDA_F16
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ void dequantize_q5_1(const void * vx, const int64_t ib, const int iqs, dfloat2 & v){
|
||||||
|
const block_q5_1 * x = (const block_q5_1 *) vx;
|
||||||
|
|
||||||
|
const dfloat d = __low2half(x[ib].dm);
|
||||||
|
const dfloat m = __high2half(x[ib].dm);
|
||||||
|
|
||||||
|
uint32_t qh;
|
||||||
|
memcpy(&qh, x[ib].qh, sizeof(qh));
|
||||||
|
|
||||||
|
const int xh_0 = ((qh >> (iqs + 0)) << 4) & 0x10;
|
||||||
|
const int xh_1 = ((qh >> (iqs + 12)) ) & 0x10;
|
||||||
|
|
||||||
|
v.x = ((x[ib].qs[iqs] & 0xf) | xh_0);
|
||||||
|
v.y = ((x[ib].qs[iqs] >> 4) | xh_1);
|
||||||
|
|
||||||
|
#ifdef GGML_CUDA_F16
|
||||||
|
v = __hmul2(v, {d, d});
|
||||||
|
v = __hadd2(v, {m, m});
|
||||||
|
#else
|
||||||
|
v.x = (v.x * d) + m;
|
||||||
|
v.y = (v.y * d) + m;
|
||||||
|
#endif // GGML_CUDA_F16
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ void dequantize_q8_0(const void * vx, const int64_t ib, const int iqs, dfloat2 & v){
|
||||||
|
const block_q8_0 * x = (const block_q8_0 *) vx;
|
||||||
|
|
||||||
|
const dfloat d = x[ib].d;
|
||||||
|
|
||||||
|
v.x = x[ib].qs[iqs + 0];
|
||||||
|
v.y = x[ib].qs[iqs + 1];
|
||||||
|
|
||||||
|
#ifdef GGML_CUDA_F16
|
||||||
|
v = __hmul2(v, {d, d});
|
||||||
|
#else
|
||||||
|
v.x *= d;
|
||||||
|
v.y *= d;
|
||||||
|
#endif // GGML_CUDA_F16
|
||||||
|
}
|
66
llama/ggml-cuda/diagmask.cu
Normal file
66
llama/ggml-cuda/diagmask.cu
Normal file
|
@ -0,0 +1,66 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "diagmask.cuh"
|
||||||
|
|
||||||
|
static __global__ void diag_mask_inf_f32(const float * x, float * dst, const int ncols, const int rows_per_channel, const int n_past) {
|
||||||
|
const int col = blockDim.y*blockIdx.y + threadIdx.y;
|
||||||
|
const int row = blockDim.x*blockIdx.x + threadIdx.x;
|
||||||
|
|
||||||
|
if (col >= ncols) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const int i = row*ncols + col;
|
||||||
|
//dst[i] = col > (n_past + row % rows_per_channel) ? -INFINITY : x[i];
|
||||||
|
//dst[i] = x[i] - (col > n_past + row % rows_per_channel) * INT_MAX; // equivalent within rounding error but slightly faster on GPU
|
||||||
|
dst[i] = x[i] - (col > n_past + row % rows_per_channel) * FLT_MAX;
|
||||||
|
}
|
||||||
|
|
||||||
|
static void diag_mask_inf_f32_cuda(const float * x, float * dst, const int ncols_x, const int nrows_x, const int rows_per_channel, const int n_past, cudaStream_t stream) {
|
||||||
|
const dim3 block_dims(1, CUDA_DIAG_MASK_INF_BLOCK_SIZE, 1);
|
||||||
|
const int block_num_x = (ncols_x + CUDA_DIAG_MASK_INF_BLOCK_SIZE - 1) / CUDA_DIAG_MASK_INF_BLOCK_SIZE;
|
||||||
|
const dim3 block_nums(nrows_x, block_num_x, 1);
|
||||||
|
diag_mask_inf_f32<<<block_nums, block_dims, 0, stream>>>(x, dst, ncols_x, rows_per_channel, n_past);
|
||||||
|
}
|
||||||
|
|
||||||
|
void ggml_cuda_op_diag_mask_inf(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
const ggml_tensor * src0 = dst->src[0];
|
||||||
|
const float * src0_d = (const float *)src0->data;
|
||||||
|
float * dst_d = (float *)dst->data;
|
||||||
|
cudaStream_t stream = ctx.stream();
|
||||||
|
|
||||||
|
GGML_ASSERT(src0->type == GGML_TYPE_F32);
|
||||||
|
GGML_ASSERT( dst->type == GGML_TYPE_F32);
|
||||||
|
|
||||||
|
const int64_t ne00 = src0->ne[0];
|
||||||
|
const int64_t ne01 = src0->ne[1];
|
||||||
|
const int nrows0 = ggml_nrows(src0);
|
||||||
|
|
||||||
|
const int n_past = ((int32_t *) dst->op_params)[0];
|
||||||
|
|
||||||
|
diag_mask_inf_f32_cuda(src0_d, dst_d, ne00, nrows0, ne01, n_past, stream);
|
||||||
|
}
|
31
llama/ggml-cuda/diagmask.cuh
Normal file
31
llama/ggml-cuda/diagmask.cuh
Normal file
|
@ -0,0 +1,31 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "common.cuh"
|
||||||
|
|
||||||
|
#define CUDA_DIAG_MASK_INF_BLOCK_SIZE 32
|
||||||
|
|
||||||
|
void ggml_cuda_op_diag_mask_inf(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
|
709
llama/ggml-cuda/dmmv.cu
Normal file
709
llama/ggml-cuda/dmmv.cu
Normal file
|
@ -0,0 +1,709 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "dmmv.cuh"
|
||||||
|
#include "dequantize.cuh"
|
||||||
|
#include "convert.cuh"
|
||||||
|
|
||||||
|
#ifndef K_QUANTS_PER_ITERATION
|
||||||
|
#define K_QUANTS_PER_ITERATION 2
|
||||||
|
#else
|
||||||
|
static_assert(K_QUANTS_PER_ITERATION == 1 || K_QUANTS_PER_ITERATION == 2, "K_QUANTS_PER_ITERATION must be 1 or 2");
|
||||||
|
#endif
|
||||||
|
|
||||||
|
static __global__ void dequantize_mul_mat_vec_q2_k(const void * __restrict__ vx, const float * __restrict__ yy, float * __restrict__ dst, const int ncols, int nrows) {
|
||||||
|
|
||||||
|
static_assert(16%K_QUANTS_PER_ITERATION == 0, "16 must be divisible by K_QUANTS_PER_ITERATION");
|
||||||
|
|
||||||
|
const int row = blockIdx.x*blockDim.y + threadIdx.y;
|
||||||
|
if (row > nrows) return;
|
||||||
|
|
||||||
|
const int num_blocks_per_row = ncols / QK_K;
|
||||||
|
const int ib0 = row*num_blocks_per_row;
|
||||||
|
|
||||||
|
const block_q2_K * x = (const block_q2_K *)vx + ib0;
|
||||||
|
|
||||||
|
float tmp = 0; // partial sum for thread in warp
|
||||||
|
|
||||||
|
const int tid = threadIdx.x/K_QUANTS_PER_ITERATION; // 0...31 or 0...15
|
||||||
|
const int ix = threadIdx.x%K_QUANTS_PER_ITERATION; // 0 or 0,1
|
||||||
|
|
||||||
|
const int step = 16/K_QUANTS_PER_ITERATION;
|
||||||
|
|
||||||
|
const int im = tid/step; // 0 or 1. 0 computes 0..., 1 computes 128...
|
||||||
|
const int in = tid - step*im; // 0...15 or 0...7
|
||||||
|
|
||||||
|
const int l0 = K_QUANTS_PER_ITERATION*in; // 0...15 or 0...14 in steps of 2
|
||||||
|
const int q_offset = 32*im + l0;
|
||||||
|
const int s_offset = 8*im;
|
||||||
|
const int y_offset = 128*im + l0;
|
||||||
|
|
||||||
|
uint32_t aux[4];
|
||||||
|
const uint8_t * d = (const uint8_t *)aux;
|
||||||
|
const uint8_t * m = (const uint8_t *)(aux + 2);
|
||||||
|
|
||||||
|
for (int i = ix; i < num_blocks_per_row; i += K_QUANTS_PER_ITERATION) {
|
||||||
|
|
||||||
|
const float * y = yy + i * QK_K + y_offset;
|
||||||
|
const uint8_t * q = x[i].qs + q_offset;
|
||||||
|
|
||||||
|
const float dall = __low2half(x[i].dm);
|
||||||
|
const float dmin = __high2half(x[i].dm);
|
||||||
|
|
||||||
|
const uint32_t * a = (const uint32_t *)(x[i].scales + s_offset);
|
||||||
|
aux[0] = a[0] & 0x0f0f0f0f;
|
||||||
|
aux[1] = a[1] & 0x0f0f0f0f;
|
||||||
|
aux[2] = (a[0] >> 4) & 0x0f0f0f0f;
|
||||||
|
aux[3] = (a[1] >> 4) & 0x0f0f0f0f;
|
||||||
|
|
||||||
|
float sum1 = 0, sum2 = 0;
|
||||||
|
for (int l = 0; l < K_QUANTS_PER_ITERATION; ++l) {
|
||||||
|
sum1 += y[l+ 0] * d[0] * ((q[l+ 0] >> 0) & 3)
|
||||||
|
+ y[l+32] * d[2] * ((q[l+ 0] >> 2) & 3)
|
||||||
|
+ y[l+64] * d[4] * ((q[l+ 0] >> 4) & 3)
|
||||||
|
+ y[l+96] * d[6] * ((q[l+ 0] >> 6) & 3)
|
||||||
|
+ y[l+16] * d[1] * ((q[l+16] >> 0) & 3)
|
||||||
|
+ y[l+48] * d[3] * ((q[l+16] >> 2) & 3)
|
||||||
|
+ y[l+80] * d[5] * ((q[l+16] >> 4) & 3)
|
||||||
|
+y[l+112] * d[7] * ((q[l+16] >> 6) & 3);
|
||||||
|
sum2 += y[l+ 0] * m[0] + y[l+32] * m[2] + y[l+64] * m[4] + y[ l+96] * m[6]
|
||||||
|
+ y[l+16] * m[1] + y[l+48] * m[3] + y[l+80] * m[5] + y[l+112] * m[7];
|
||||||
|
|
||||||
|
}
|
||||||
|
tmp += dall * sum1 - dmin * sum2;
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
// sum up partial sums and write back result
|
||||||
|
tmp = warp_reduce_sum(tmp);
|
||||||
|
|
||||||
|
if (threadIdx.x == 0) {
|
||||||
|
dst[row] = tmp;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
static __global__ void dequantize_mul_mat_vec_q3_k(const void * __restrict__ vx, const float * __restrict__ yy, float * __restrict__ dst, const int ncols, int nrows) {
|
||||||
|
|
||||||
|
const int row = blockIdx.x*blockDim.y + threadIdx.y;
|
||||||
|
if (row > nrows) return;
|
||||||
|
|
||||||
|
const int num_blocks_per_row = ncols / QK_K;
|
||||||
|
const int ib0 = row*num_blocks_per_row;
|
||||||
|
|
||||||
|
const block_q3_K * x = (const block_q3_K *)vx + ib0;
|
||||||
|
|
||||||
|
float tmp = 0; // partial sum for thread in warp
|
||||||
|
|
||||||
|
const uint16_t kmask1 = 0x0303;
|
||||||
|
const uint16_t kmask2 = 0x0f0f;
|
||||||
|
|
||||||
|
const int tid = threadIdx.x/K_QUANTS_PER_ITERATION; // 0...31 or 0...16
|
||||||
|
const int ix = threadIdx.x%K_QUANTS_PER_ITERATION; // 0 or 0,1
|
||||||
|
|
||||||
|
const int n = K_QUANTS_PER_ITERATION; // iterations in the inner loop
|
||||||
|
const int step = 16/K_QUANTS_PER_ITERATION;
|
||||||
|
const int im = tid/step; // 0 or 1. 0 computes 0..., 1 computes 128...
|
||||||
|
const int in = tid - step*im; // 0....15 or 0...7
|
||||||
|
|
||||||
|
const uint8_t m = 1 << (4*im);
|
||||||
|
|
||||||
|
const int l0 = n*in; // 0...15 or 0...14 in steps of 2
|
||||||
|
const int q_offset = 32*im + l0;
|
||||||
|
const int y_offset = 128*im + l0;
|
||||||
|
|
||||||
|
uint16_t utmp[4];
|
||||||
|
const int8_t * s = (const int8_t *)utmp;
|
||||||
|
|
||||||
|
const uint16_t s_shift = 4*im;
|
||||||
|
|
||||||
|
for (int i = ix; i < num_blocks_per_row; i += K_QUANTS_PER_ITERATION) {
|
||||||
|
|
||||||
|
const float * y = yy + i * QK_K + y_offset;
|
||||||
|
const uint8_t * q = x[i].qs + q_offset;
|
||||||
|
const uint8_t * h = x[i].hmask + l0;
|
||||||
|
|
||||||
|
const uint16_t * a = (const uint16_t *)x[i].scales;
|
||||||
|
utmp[0] = ((a[0] >> s_shift) & kmask2) | (((a[4] >> (s_shift + 0)) & kmask1) << 4);
|
||||||
|
utmp[1] = ((a[1] >> s_shift) & kmask2) | (((a[5] >> (s_shift + 0)) & kmask1) << 4);
|
||||||
|
utmp[2] = ((a[2] >> s_shift) & kmask2) | (((a[4] >> (s_shift + 2)) & kmask1) << 4);
|
||||||
|
utmp[3] = ((a[3] >> s_shift) & kmask2) | (((a[5] >> (s_shift + 2)) & kmask1) << 4);
|
||||||
|
|
||||||
|
const float d = x[i].d;
|
||||||
|
|
||||||
|
float sum = 0;
|
||||||
|
for (int l = 0; l < n; ++l) {
|
||||||
|
sum += y[l+ 0] * (s[0] - 32) * (((q[l] >> 0) & 3) - (h[l] & (m << 0) ? 0 : 4))
|
||||||
|
+ y[l+32] * (s[2] - 32) * (((q[l] >> 2) & 3) - (h[l] & (m << 1) ? 0 : 4))
|
||||||
|
+ y[l+64] * (s[4] - 32) * (((q[l] >> 4) & 3) - (h[l] & (m << 2) ? 0 : 4))
|
||||||
|
+ y[l+96] * (s[6] - 32) * (((q[l] >> 6) & 3) - (h[l] & (m << 3) ? 0 : 4));
|
||||||
|
sum += y[l+16] * (s[1] - 32) * (((q[l+16] >> 0) & 3) - (h[l+16] & (m << 0) ? 0 : 4))
|
||||||
|
+ y[l+48] * (s[3] - 32) * (((q[l+16] >> 2) & 3) - (h[l+16] & (m << 1) ? 0 : 4))
|
||||||
|
+ y[l+80] * (s[5] - 32) * (((q[l+16] >> 4) & 3) - (h[l+16] & (m << 2) ? 0 : 4))
|
||||||
|
+ y[l+112] * (s[7] - 32) * (((q[l+16] >> 6) & 3) - (h[l+16] & (m << 3) ? 0 : 4));
|
||||||
|
}
|
||||||
|
tmp += d * sum;
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
// sum up partial sums and write back result
|
||||||
|
tmp = warp_reduce_sum(tmp);
|
||||||
|
|
||||||
|
if (threadIdx.x == 0) {
|
||||||
|
dst[row] = tmp;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
static __global__ void dequantize_mul_mat_vec_q4_k(const void * __restrict__ vx, const float * __restrict__ yy, float * __restrict__ dst, const int ncols, int nrows) {
|
||||||
|
|
||||||
|
const int row = blockIdx.x*blockDim.y + threadIdx.y;
|
||||||
|
if (row > nrows) return;
|
||||||
|
const int num_blocks_per_row = ncols / QK_K;
|
||||||
|
const int ib0 = row*num_blocks_per_row;
|
||||||
|
|
||||||
|
const block_q4_K * x = (const block_q4_K *)vx + ib0;
|
||||||
|
|
||||||
|
const uint16_t kmask1 = 0x3f3f;
|
||||||
|
const uint16_t kmask2 = 0x0f0f;
|
||||||
|
const uint16_t kmask3 = 0xc0c0;
|
||||||
|
|
||||||
|
const int tid = threadIdx.x/K_QUANTS_PER_ITERATION; // 0...31 or 0...16
|
||||||
|
const int ix = threadIdx.x%K_QUANTS_PER_ITERATION; // 0 or 0,1
|
||||||
|
|
||||||
|
const int step = 8/K_QUANTS_PER_ITERATION; // 8 or 4
|
||||||
|
|
||||||
|
const int il = tid/step; // 0...3
|
||||||
|
const int ir = tid - step*il; // 0...7 or 0...3
|
||||||
|
const int n = 2 * K_QUANTS_PER_ITERATION; // 2 or 4
|
||||||
|
|
||||||
|
const int im = il/2; // 0 or 1. 0 computes 0,32 + 128,160, 1 computes 64,96 + 192,224
|
||||||
|
const int in = il%2;
|
||||||
|
|
||||||
|
const int l0 = n*(2*ir + in);
|
||||||
|
const int q_offset = 32*im + l0;
|
||||||
|
const int y_offset = 64*im + l0;
|
||||||
|
|
||||||
|
uint16_t aux[4];
|
||||||
|
const uint8_t * sc = (const uint8_t *)aux;
|
||||||
|
|
||||||
|
#if K_QUANTS_PER_ITERATION == 2
|
||||||
|
uint32_t q32[4];
|
||||||
|
const uint8_t * q4 = (const uint8_t *)q32;
|
||||||
|
#else
|
||||||
|
uint16_t q16[4];
|
||||||
|
const uint8_t * q4 = (const uint8_t *)q16;
|
||||||
|
#endif
|
||||||
|
|
||||||
|
float tmp = 0; // partial sum for thread in warp
|
||||||
|
|
||||||
|
for (int i = ix; i < num_blocks_per_row; i += K_QUANTS_PER_ITERATION) {
|
||||||
|
|
||||||
|
const float * y1 = yy + i*QK_K + y_offset;
|
||||||
|
const float * y2 = y1 + 128;
|
||||||
|
|
||||||
|
const float dall = __low2half(x[i].dm);
|
||||||
|
const float dmin = __high2half(x[i].dm);
|
||||||
|
|
||||||
|
const uint16_t * a = (const uint16_t *)x[i].scales;
|
||||||
|
aux[0] = a[im+0] & kmask1;
|
||||||
|
aux[1] = a[im+2] & kmask1;
|
||||||
|
aux[2] = ((a[im+4] >> 0) & kmask2) | ((a[im+0] & kmask3) >> 2);
|
||||||
|
aux[3] = ((a[im+4] >> 4) & kmask2) | ((a[im+2] & kmask3) >> 2);
|
||||||
|
|
||||||
|
#if K_QUANTS_PER_ITERATION == 2
|
||||||
|
const uint32_t * q1 = (const uint32_t *)(x[i].qs + q_offset);
|
||||||
|
const uint32_t * q2 = q1 + 16;
|
||||||
|
|
||||||
|
q32[0] = q1[0] & 0x0f0f0f0f;
|
||||||
|
q32[1] = q1[0] & 0xf0f0f0f0;
|
||||||
|
q32[2] = q2[0] & 0x0f0f0f0f;
|
||||||
|
q32[3] = q2[0] & 0xf0f0f0f0;
|
||||||
|
|
||||||
|
float4 s = {0.f, 0.f, 0.f, 0.f};
|
||||||
|
float smin = 0;
|
||||||
|
for (int l = 0; l < 4; ++l) {
|
||||||
|
s.x += y1[l] * q4[l+0]; s.y += y1[l+32] * q4[l+ 4];
|
||||||
|
s.z += y2[l] * q4[l+8]; s.w += y2[l+32] * q4[l+12];
|
||||||
|
smin += y1[l] * sc[2] + y1[l+32] * sc[3] + y2[l] * sc[6] + y2[l+32] * sc[7];
|
||||||
|
}
|
||||||
|
tmp += dall * (s.x * sc[0] + s.y * sc[1] * 1.f/16.f + s.z * sc[4] + s.w * sc[5] * 1.f/16.f) - dmin * smin;
|
||||||
|
#else
|
||||||
|
const uint16_t * q1 = (const uint16_t *)(x[i].qs + q_offset);
|
||||||
|
const uint16_t * q2 = q1 + 32;
|
||||||
|
|
||||||
|
q16[0] = q1[0] & 0x0f0f;
|
||||||
|
q16[1] = q1[0] & 0xf0f0;
|
||||||
|
q16[2] = q2[0] & 0x0f0f;
|
||||||
|
q16[3] = q2[0] & 0xf0f0;
|
||||||
|
|
||||||
|
float4 s = {0.f, 0.f, 0.f, 0.f};
|
||||||
|
float smin = 0;
|
||||||
|
for (int l = 0; l < 2; ++l) {
|
||||||
|
s.x += y1[l] * q4[l+0]; s.y += y1[l+32] * q4[l+2];
|
||||||
|
s.z += y2[l] * q4[l+4]; s.w += y2[l+32] * q4[l+6];
|
||||||
|
smin += y1[l] * sc[2] + y1[l+32] * sc[3] + y2[l] * sc[6] + y2[l+32] * sc[7];
|
||||||
|
}
|
||||||
|
tmp += dall * (s.x * sc[0] + s.y * sc[1] * 1.f/16.f + s.z * sc[4] + s.w * sc[5] * 1.f/16.f) - dmin * smin;
|
||||||
|
#endif
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
// sum up partial sums and write back result
|
||||||
|
tmp = warp_reduce_sum(tmp);
|
||||||
|
|
||||||
|
if (tid == 0) {
|
||||||
|
dst[row] = tmp;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
static __global__ void dequantize_mul_mat_vec_q5_k(const void * __restrict__ vx, const float * __restrict__ yy, float * __restrict__ dst, const int ncols) {
|
||||||
|
|
||||||
|
const int row = blockIdx.x;
|
||||||
|
const int num_blocks_per_row = ncols / QK_K;
|
||||||
|
const int ib0 = row*num_blocks_per_row;
|
||||||
|
|
||||||
|
const block_q5_K * x = (const block_q5_K *)vx + ib0;
|
||||||
|
|
||||||
|
float tmp = 0; // partial sum for thread in warp
|
||||||
|
|
||||||
|
const uint16_t kmask1 = 0x3f3f;
|
||||||
|
const uint16_t kmask2 = 0x0f0f;
|
||||||
|
const uint16_t kmask3 = 0xc0c0;
|
||||||
|
|
||||||
|
const int tid = threadIdx.x/2; // 0...15
|
||||||
|
const int ix = threadIdx.x%2;
|
||||||
|
|
||||||
|
const int il = tid/4; // 0...3
|
||||||
|
const int ir = tid - 4*il;// 0...3
|
||||||
|
const int n = 2;
|
||||||
|
|
||||||
|
const int im = il/2; // 0 or 1. 0 computes 0,32 + 128,160, 1 computes 64,96 + 192,224
|
||||||
|
const int in = il%2;
|
||||||
|
|
||||||
|
const int l0 = n*(2*ir + in);
|
||||||
|
const int q_offset = 32*im + l0;
|
||||||
|
const int y_offset = 64*im + l0;
|
||||||
|
|
||||||
|
const uint8_t hm1 = 1 << (2*im);
|
||||||
|
const uint8_t hm2 = hm1 << 4;
|
||||||
|
|
||||||
|
uint16_t aux[4];
|
||||||
|
const uint8_t * sc = (const uint8_t *)aux;
|
||||||
|
|
||||||
|
uint16_t q16[8];
|
||||||
|
const uint8_t * q4 = (const uint8_t *)q16;
|
||||||
|
|
||||||
|
for (int i = ix; i < num_blocks_per_row; i += 2) {
|
||||||
|
|
||||||
|
const uint8_t * ql1 = x[i].qs + q_offset;
|
||||||
|
const uint8_t * qh = x[i].qh + l0;
|
||||||
|
const float * y1 = yy + i*QK_K + y_offset;
|
||||||
|
const float * y2 = y1 + 128;
|
||||||
|
|
||||||
|
const float dall = __low2half(x[i].dm);
|
||||||
|
const float dmin = __high2half(x[i].dm);
|
||||||
|
|
||||||
|
const uint16_t * a = (const uint16_t *)x[i].scales;
|
||||||
|
aux[0] = a[im+0] & kmask1;
|
||||||
|
aux[1] = a[im+2] & kmask1;
|
||||||
|
aux[2] = ((a[im+4] >> 0) & kmask2) | ((a[im+0] & kmask3) >> 2);
|
||||||
|
aux[3] = ((a[im+4] >> 4) & kmask2) | ((a[im+2] & kmask3) >> 2);
|
||||||
|
|
||||||
|
float4 sum = {0.f, 0.f, 0.f, 0.f};
|
||||||
|
float smin = 0;
|
||||||
|
const uint16_t * q1 = (const uint16_t *)ql1;
|
||||||
|
const uint16_t * q2 = q1 + 32;
|
||||||
|
q16[0] = q1[0] & 0x0f0f;
|
||||||
|
q16[1] = q1[8] & 0x0f0f;
|
||||||
|
q16[2] = (q1[0] >> 4) & 0x0f0f;
|
||||||
|
q16[3] = (q1[8] >> 4) & 0x0f0f;
|
||||||
|
q16[4] = q2[0] & 0x0f0f;
|
||||||
|
q16[5] = q2[8] & 0x0f0f;
|
||||||
|
q16[6] = (q2[0] >> 4) & 0x0f0f;
|
||||||
|
q16[7] = (q2[8] >> 4) & 0x0f0f;
|
||||||
|
for (int l = 0; l < n; ++l) {
|
||||||
|
sum.x += y1[l+ 0] * (q4[l +0] + (qh[l+ 0] & (hm1 << 0) ? 16 : 0))
|
||||||
|
+ y1[l+16] * (q4[l +2] + (qh[l+16] & (hm1 << 0) ? 16 : 0));
|
||||||
|
sum.y += y1[l+32] * (q4[l +4] + (qh[l+ 0] & (hm1 << 1) ? 16 : 0))
|
||||||
|
+ y1[l+48] * (q4[l +6] + (qh[l+16] & (hm1 << 1) ? 16 : 0));
|
||||||
|
sum.z += y2[l+ 0] * (q4[l +8] + (qh[l+ 0] & (hm2 << 0) ? 16 : 0))
|
||||||
|
+ y2[l+16] * (q4[l+10] + (qh[l+16] & (hm2 << 0) ? 16 : 0));
|
||||||
|
sum.w += y2[l+32] * (q4[l+12] + (qh[l+ 0] & (hm2 << 1) ? 16 : 0))
|
||||||
|
+ y2[l+48] * (q4[l+14] + (qh[l+16] & (hm2 << 1) ? 16 : 0));
|
||||||
|
smin += (y1[l] + y1[l+16]) * sc[2] + (y1[l+32] + y1[l+48]) * sc[3]
|
||||||
|
+ (y2[l] + y2[l+16]) * sc[6] + (y2[l+32] + y2[l+48]) * sc[7];
|
||||||
|
}
|
||||||
|
tmp += dall * (sum.x * sc[0] + sum.y * sc[1] + sum.z * sc[4] + sum.w * sc[5]) - dmin * smin;
|
||||||
|
}
|
||||||
|
|
||||||
|
// sum up partial sums and write back result
|
||||||
|
tmp = warp_reduce_sum(tmp);
|
||||||
|
|
||||||
|
if (threadIdx.x == 0) {
|
||||||
|
dst[row] = tmp;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
static __global__ void dequantize_mul_mat_vec_q6_k(const void * __restrict__ vx, const float * __restrict__ yy, float * __restrict__ dst, const int ncols, int nrows) {
|
||||||
|
|
||||||
|
static_assert(16%K_QUANTS_PER_ITERATION == 0, "16 must be divisible by K_QUANTS_PER_ITERATION");
|
||||||
|
|
||||||
|
const int row = blockIdx.x*blockDim.y + threadIdx.y;
|
||||||
|
if (row > nrows) return;
|
||||||
|
|
||||||
|
const int num_blocks_per_row = ncols / QK_K;
|
||||||
|
const int ib0 = row*num_blocks_per_row;
|
||||||
|
|
||||||
|
const block_q6_K * x = (const block_q6_K *)vx + ib0;
|
||||||
|
|
||||||
|
const int tid = threadIdx.x/K_QUANTS_PER_ITERATION; // 0...31 or 0...16
|
||||||
|
const int ix = threadIdx.x%K_QUANTS_PER_ITERATION; // 0 or 0, 1
|
||||||
|
|
||||||
|
const int step = 16/K_QUANTS_PER_ITERATION; // 16 or 8
|
||||||
|
|
||||||
|
const int im = tid/step; // 0 or 1. 0 computes 0..., 1 computes 128...
|
||||||
|
const int in = tid - step*im; // 0...15 or 0...7
|
||||||
|
|
||||||
|
#if K_QUANTS_PER_ITERATION == 1
|
||||||
|
const int l0 = K_QUANTS_PER_ITERATION*in; // 0...15
|
||||||
|
const int is = 0;
|
||||||
|
#else
|
||||||
|
const int l0 = 4 * in; // 0, 4, 8, ..., 28
|
||||||
|
const int is = in / 4;
|
||||||
|
#endif
|
||||||
|
const int ql_offset = 64*im + l0;
|
||||||
|
const int qh_offset = 32*im + l0;
|
||||||
|
const int s_offset = 8*im + is;
|
||||||
|
const int y_offset = 128*im + l0;
|
||||||
|
|
||||||
|
float tmp = 0; // partial sum for thread in warp
|
||||||
|
|
||||||
|
for (int i = ix; i < num_blocks_per_row; i += K_QUANTS_PER_ITERATION) {
|
||||||
|
|
||||||
|
const float * y = yy + i * QK_K + y_offset;
|
||||||
|
const uint8_t * ql = x[i].ql + ql_offset;
|
||||||
|
const uint8_t * qh = x[i].qh + qh_offset;
|
||||||
|
const int8_t * s = x[i].scales + s_offset;
|
||||||
|
|
||||||
|
const float d = x[i].d;
|
||||||
|
|
||||||
|
#if K_QUANTS_PER_ITERATION == 1
|
||||||
|
float sum = y[ 0] * s[0] * d * ((int8_t)((ql[ 0] & 0xF) | ((qh[ 0] & 0x03) << 4)) - 32)
|
||||||
|
+ y[16] * s[1] * d * ((int8_t)((ql[16] & 0xF) | ((qh[16] & 0x03) << 4)) - 32)
|
||||||
|
+ y[32] * s[2] * d * ((int8_t)((ql[32] & 0xF) | ((qh[ 0] & 0x0c) << 2)) - 32)
|
||||||
|
+ y[48] * s[3] * d * ((int8_t)((ql[48] & 0xF) | ((qh[16] & 0x0c) << 2)) - 32)
|
||||||
|
+ y[64] * s[4] * d * ((int8_t)((ql[ 0] >> 4) | ((qh[ 0] & 0x30) >> 0)) - 32)
|
||||||
|
+ y[80] * s[5] * d * ((int8_t)((ql[16] >> 4) | ((qh[16] & 0x30) >> 0)) - 32)
|
||||||
|
+ y[96] * s[6] * d * ((int8_t)((ql[32] >> 4) | ((qh[ 0] & 0xc0) >> 2)) - 32)
|
||||||
|
+y[112] * s[7] * d * ((int8_t)((ql[48] >> 4) | ((qh[16] & 0xc0) >> 2)) - 32);
|
||||||
|
tmp += sum;
|
||||||
|
#else
|
||||||
|
float sum = 0;
|
||||||
|
for (int l = 0; l < 4; ++l) {
|
||||||
|
sum += y[l+ 0] * s[0] * d * ((int8_t)((ql[l+ 0] & 0xF) | (((qh[l] >> 0) & 3) << 4)) - 32)
|
||||||
|
+ y[l+32] * s[2] * d * ((int8_t)((ql[l+32] & 0xF) | (((qh[l] >> 2) & 3) << 4)) - 32)
|
||||||
|
+ y[l+64] * s[4] * d * ((int8_t)((ql[l+ 0] >> 4) | (((qh[l] >> 4) & 3) << 4)) - 32)
|
||||||
|
+ y[l+96] * s[6] * d * ((int8_t)((ql[l+32] >> 4) | (((qh[l] >> 6) & 3) << 4)) - 32);
|
||||||
|
}
|
||||||
|
tmp += sum;
|
||||||
|
#endif
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
// sum up partial sums and write back result
|
||||||
|
tmp = warp_reduce_sum(tmp);
|
||||||
|
|
||||||
|
if (tid == 0) {
|
||||||
|
dst[row] = tmp;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ void convert_f16(const void * vx, const int64_t ib, const int iqs, dfloat2 & v){
|
||||||
|
const half * x = (const half *) vx;
|
||||||
|
|
||||||
|
// automatic half -> float type cast if dfloat == float
|
||||||
|
v.x = x[ib + iqs + 0];
|
||||||
|
v.y = x[ib + iqs + 1];
|
||||||
|
}
|
||||||
|
|
||||||
|
static constexpr __device__ dequantize_kernel_t get_dequantize_kernel(ggml_type type) {
|
||||||
|
return type == GGML_TYPE_Q4_0 ? dequantize_q4_0 :
|
||||||
|
type == GGML_TYPE_Q4_1 ? dequantize_q4_1 :
|
||||||
|
type == GGML_TYPE_Q5_0 ? dequantize_q5_0 :
|
||||||
|
type == GGML_TYPE_Q5_1 ? dequantize_q5_1 :
|
||||||
|
type == GGML_TYPE_Q8_0 ? dequantize_q8_0 :
|
||||||
|
type == GGML_TYPE_F16 ? convert_f16 :
|
||||||
|
nullptr;
|
||||||
|
}
|
||||||
|
|
||||||
|
template <ggml_type type>
|
||||||
|
static __global__ void dequantize_mul_mat_vec(const void * __restrict__ vx, const dfloat * __restrict__ y, float * __restrict__ dst, const int ncols, const int nrows) {
|
||||||
|
constexpr int qk = ggml_cuda_type_traits<type>::qk; // quantized weights per x block
|
||||||
|
constexpr int qr = ggml_cuda_type_traits<type>::qr; // number of quantized weights per data value in x block
|
||||||
|
constexpr dequantize_kernel_t dequantize_kernel = get_dequantize_kernel(type);
|
||||||
|
|
||||||
|
const int64_t row = (int64_t)blockIdx.x*blockDim.y + threadIdx.y;
|
||||||
|
|
||||||
|
if (row >= nrows) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const int tid = threadIdx.x;
|
||||||
|
|
||||||
|
const int iter_stride = 2*GGML_CUDA_DMMV_X;
|
||||||
|
const int vals_per_iter = iter_stride / WARP_SIZE; // num quantized vals per thread and i iter
|
||||||
|
const int y_offset = qr == 1 ? 1 : qk/2;
|
||||||
|
|
||||||
|
// partial sum for each thread
|
||||||
|
#ifdef GGML_CUDA_F16
|
||||||
|
half2 tmp = {0.0f, 0.0f}; // two sums for f16 to take advantage of half2 intrinsics
|
||||||
|
#else
|
||||||
|
float tmp = 0.0f;
|
||||||
|
#endif // GGML_CUDA_F16
|
||||||
|
|
||||||
|
for (int i = 0; i < ncols; i += iter_stride) {
|
||||||
|
const int col = i + vals_per_iter*tid;
|
||||||
|
const int64_t ib = ((int64_t)row*ncols + col)/qk; // x block index
|
||||||
|
const int iqs = (col%qk)/qr; // x quant index
|
||||||
|
const int iybs = col - col%qk; // y block start index
|
||||||
|
|
||||||
|
// processing >2 values per i iter is faster for fast GPUs
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < vals_per_iter; j += 2) {
|
||||||
|
// process 2 vals per j iter
|
||||||
|
|
||||||
|
// dequantize
|
||||||
|
// for qr = 2 the iqs needs to increase by 1 per j iter because 2 weights per data val
|
||||||
|
dfloat2 v;
|
||||||
|
dequantize_kernel(vx, ib, iqs + j/qr, v);
|
||||||
|
|
||||||
|
// matrix multiplication
|
||||||
|
// for qr = 2 the y index needs to increase by 1 per j iter because of y_offset = qk/2
|
||||||
|
#ifdef GGML_CUDA_F16
|
||||||
|
tmp += __hmul2(v, {
|
||||||
|
y[iybs + iqs + j/qr + 0],
|
||||||
|
y[iybs + iqs + j/qr + y_offset]
|
||||||
|
});
|
||||||
|
#else
|
||||||
|
tmp += v.x * y[iybs + iqs + j/qr + 0];
|
||||||
|
tmp += v.y * y[iybs + iqs + j/qr + y_offset];
|
||||||
|
#endif // GGML_CUDA_F16
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// sum up partial sums and write back result
|
||||||
|
tmp = warp_reduce_sum(tmp);
|
||||||
|
|
||||||
|
if (tid == 0) {
|
||||||
|
#ifdef GGML_CUDA_F16
|
||||||
|
dst[row] = tmp.x + tmp.y;
|
||||||
|
#else
|
||||||
|
dst[row] = tmp;
|
||||||
|
#endif // GGML_CUDA_F16
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
static void dequantize_mul_mat_vec_q4_0_cuda(const void * vx, const dfloat * y, float * dst, const int ncols, const int nrows, cudaStream_t stream) {
    GGML_ASSERT(ncols % (GGML_CUDA_DMMV_X*2) == 0);
    const int block_num_y = (nrows + GGML_CUDA_MMV_Y - 1) / GGML_CUDA_MMV_Y;
    // the number of rows may exceed maximum grid size in the y or z dimensions, use the x dimension instead
    const dim3 block_nums(block_num_y, 1, 1);
    const dim3 block_dims(WARP_SIZE, GGML_CUDA_MMV_Y, 1);
    dequantize_mul_mat_vec<GGML_TYPE_Q4_0>
        <<<block_nums, block_dims, 0, stream>>>(vx, y, dst, ncols, nrows);
}

static void dequantize_mul_mat_vec_q4_1_cuda(const void * vx, const dfloat * y, float * dst, const int ncols, const int nrows, cudaStream_t stream) {
    GGML_ASSERT(ncols % (GGML_CUDA_DMMV_X*2) == 0);
    const int block_num_y = (nrows + GGML_CUDA_MMV_Y - 1) / GGML_CUDA_MMV_Y;
    const dim3 block_nums(block_num_y, 1, 1);
    const dim3 block_dims(WARP_SIZE, GGML_CUDA_MMV_Y, 1);
    dequantize_mul_mat_vec<GGML_TYPE_Q4_1>
        <<<block_nums, block_dims, 0, stream>>>(vx, y, dst, ncols, nrows);
}

static void dequantize_mul_mat_vec_q5_0_cuda(const void * vx, const dfloat * y, float * dst, const int ncols, const int nrows, cudaStream_t stream) {
    GGML_ASSERT(ncols % (GGML_CUDA_DMMV_X*2) == 0);
    const int block_num_y = (nrows + GGML_CUDA_MMV_Y - 1) / GGML_CUDA_MMV_Y;
    const dim3 block_nums(block_num_y, 1, 1);
    const dim3 block_dims(WARP_SIZE, GGML_CUDA_MMV_Y, 1);
    dequantize_mul_mat_vec<GGML_TYPE_Q5_0>
        <<<block_nums, block_dims, 0, stream>>>(vx, y, dst, ncols, nrows);
}

static void dequantize_mul_mat_vec_q5_1_cuda(const void * vx, const dfloat * y, float * dst, const int ncols, const int nrows, cudaStream_t stream) {
    GGML_ASSERT(ncols % (GGML_CUDA_DMMV_X*2) == 0);
    const int block_num_y = (nrows + GGML_CUDA_MMV_Y - 1) / GGML_CUDA_MMV_Y;
    const dim3 block_nums(block_num_y, 1, 1);
    const dim3 block_dims(WARP_SIZE, GGML_CUDA_MMV_Y, 1);
    dequantize_mul_mat_vec<GGML_TYPE_Q5_1>
        <<<block_nums, block_dims, 0, stream>>>(vx, y, dst, ncols, nrows);
}

static void dequantize_mul_mat_vec_q8_0_cuda(const void * vx, const dfloat * y, float * dst, const int ncols, const int nrows, cudaStream_t stream) {
    GGML_ASSERT(ncols % (GGML_CUDA_DMMV_X*2) == 0);
    const int block_num_y = (nrows + GGML_CUDA_MMV_Y - 1) / GGML_CUDA_MMV_Y;
    const dim3 block_nums(block_num_y, 1, 1);
    const dim3 block_dims(WARP_SIZE, GGML_CUDA_MMV_Y, 1);
    dequantize_mul_mat_vec<GGML_TYPE_Q8_0>
        <<<block_nums, block_dims, 0, stream>>>(vx, y, dst, ncols, nrows);
}

static void dequantize_mul_mat_vec_q2_K_cuda(const void * vx, const float * y, float * dst, const int ncols, const int nrows, cudaStream_t stream) {
    GGML_ASSERT(ncols % QK_K == 0);
    const int ny = 2; // very slightly faster than 1 even when K_QUANTS_PER_ITERATION = 2
    const int block_num_y = (nrows + ny - 1) / ny;
    const dim3 block_nums(block_num_y, 1, 1);
    const dim3 block_dims(32, ny, 1);
    dequantize_mul_mat_vec_q2_k<<<block_nums, block_dims, 0, stream>>>(vx, y, dst, ncols, nrows);
}

static void dequantize_mul_mat_vec_q3_K_cuda(const void * vx, const float * y, float * dst, const int ncols, const int nrows, cudaStream_t stream) {
    GGML_ASSERT(ncols % QK_K == 0);
    const int ny = 2 / K_QUANTS_PER_ITERATION;
    const int block_num_y = (nrows + ny - 1) / ny;
    const dim3 block_nums(block_num_y, 1, 1);
    const dim3 block_dims(32, ny, 1);
    dequantize_mul_mat_vec_q3_k<<<block_nums, block_dims, 0, stream>>>(vx, y, dst, ncols, nrows);
}

static void dequantize_mul_mat_vec_q4_K_cuda(const void * vx, const float * y, float * dst, const int ncols, const int nrows, cudaStream_t stream) {
    GGML_ASSERT(ncols % QK_K == 0);
    const int ny = 2 / K_QUANTS_PER_ITERATION;
    const int block_num_y = (nrows + ny - 1) / ny;
    const dim3 block_nums(block_num_y, 1, 1);
    const dim3 block_dims(32, ny, 1);
    dequantize_mul_mat_vec_q4_k<<<block_nums, block_dims, 0, stream>>>(vx, y, dst, ncols, nrows);
}

static void dequantize_mul_mat_vec_q5_K_cuda(const void * vx, const float * y, float * dst, const int ncols, const int nrows, cudaStream_t stream) {
    GGML_ASSERT(ncols % QK_K == 0);
    const dim3 block_dims(32, 1, 1);
    dequantize_mul_mat_vec_q5_k<<<nrows, block_dims, 0, stream>>>(vx, y, dst, ncols);
}

static void dequantize_mul_mat_vec_q6_K_cuda(const void * vx, const float * y, float * dst, const int ncols, const int nrows, cudaStream_t stream) {
    GGML_ASSERT(ncols % QK_K == 0);
    const int ny = 2 / K_QUANTS_PER_ITERATION;
    const int block_num_y = (nrows + ny - 1) / ny;
    const dim3 block_nums(block_num_y, 1, 1);
    const dim3 block_dims(32, ny, 1);
    dequantize_mul_mat_vec_q6_k<<<block_nums, block_dims, 0, stream>>>(vx, y, dst, ncols, nrows);
}

static void convert_mul_mat_vec_f16_cuda(const void * vx, const dfloat * y, float * dst, const int ncols, const int nrows, cudaStream_t stream) {
    GGML_ASSERT(ncols % (GGML_CUDA_DMMV_X*2) == 0);
    const int block_num_y = (nrows + GGML_CUDA_MMV_Y - 1) / GGML_CUDA_MMV_Y;
    const dim3 block_nums(block_num_y, 1, 1);
    const dim3 block_dims(WARP_SIZE, GGML_CUDA_MMV_Y, 1);
    dequantize_mul_mat_vec<GGML_TYPE_F16>
        <<<block_nums, block_dims, 0, stream>>>(vx, y, dst, ncols, nrows);
}
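For concreteness, the launch parameters above pair with the kernel's indexing as follows: each block holds GGML_CUDA_MMV_Y warps, one warp per row, and the grid runs along x because the x dimension has a much larger size limit than y or z. With the default GGML_CUDA_DMMV_X of 32, each outer iteration of the kernel covers 64 columns and each lane handles 2 adjacent quantized values. A worked example with illustrative numbers only (Q4_0 traits; not vendored code):

// Per-warp work split, using the default tuning macros from dmmv.cuh.
constexpr int demo_warp_size     = 32;
constexpr int demo_dmmv_x        = 32;                              // GGML_CUDA_DMMV_X default
constexpr int demo_iter_stride   = 2*demo_dmmv_x;                   // 64 columns per outer iteration
constexpr int demo_vals_per_iter = demo_iter_stride/demo_warp_size; // 2 values per lane per iteration
// For ncols == 4096 one warp therefore sweeps a full row in 4096/64 == 64 outer iterations.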
void ggml_cuda_op_dequantize_mul_mat_vec(
    ggml_backend_cuda_context & ctx,
    const ggml_tensor * src0, const ggml_tensor * src1, ggml_tensor * dst, const char * src0_dd_i, const float * src1_ddf_i,
    const char * src1_ddq_i, float * dst_dd_i, const int64_t row_low, const int64_t row_high, const int64_t src1_ncols,
    const int64_t src1_padded_row_size, cudaStream_t stream) {
    GGML_UNUSED(ctx);
    const int64_t ne00 = src0->ne[0];
    const int64_t row_diff = row_high - row_low;

    GGML_ASSERT(src1->type == GGML_TYPE_F32);

    // on some GPUs it is faster to convert src1 to half and to use half precision intrinsics
#ifdef GGML_CUDA_F16
    ggml_cuda_pool_alloc<half> src1_dfloat_a(ctx.pool());
    half * src1_dfloat = nullptr; // dfloat == half

    bool src1_convert_f16 =
        src0->type == GGML_TYPE_Q4_0 || src0->type == GGML_TYPE_Q4_1 ||
        src0->type == GGML_TYPE_Q5_0 || src0->type == GGML_TYPE_Q5_1 ||
        src0->type == GGML_TYPE_Q8_0 || src0->type == GGML_TYPE_F16;

    if (src1_convert_f16) {
        src1_dfloat = src1_dfloat_a.alloc(ne00);
        const to_fp16_cuda_t to_fp16_cuda = ggml_get_to_fp16_cuda(src1->type);
        GGML_ASSERT(to_fp16_cuda != nullptr);
        to_fp16_cuda(src1_ddf_i, src1_dfloat, ne00, stream);
    }
#else
    const dfloat * src1_dfloat = (const dfloat *) src1_ddf_i; // dfloat == float, no conversion
#endif // GGML_CUDA_F16

    switch (src0->type) {
        case GGML_TYPE_Q4_0:
            dequantize_mul_mat_vec_q4_0_cuda(src0_dd_i, src1_dfloat, dst_dd_i, ne00, row_diff, stream);
            break;
        case GGML_TYPE_Q4_1:
            dequantize_mul_mat_vec_q4_1_cuda(src0_dd_i, src1_dfloat, dst_dd_i, ne00, row_diff, stream);
            break;
        case GGML_TYPE_Q5_0:
            dequantize_mul_mat_vec_q5_0_cuda(src0_dd_i, src1_dfloat, dst_dd_i, ne00, row_diff, stream);
            break;
        case GGML_TYPE_Q5_1:
            dequantize_mul_mat_vec_q5_1_cuda(src0_dd_i, src1_dfloat, dst_dd_i, ne00, row_diff, stream);
            break;
        case GGML_TYPE_Q8_0:
            dequantize_mul_mat_vec_q8_0_cuda(src0_dd_i, src1_dfloat, dst_dd_i, ne00, row_diff, stream);
            break;
        case GGML_TYPE_Q2_K:
            dequantize_mul_mat_vec_q2_K_cuda(src0_dd_i, src1_ddf_i, dst_dd_i, ne00, row_diff, stream);
            break;
        case GGML_TYPE_Q3_K:
            dequantize_mul_mat_vec_q3_K_cuda(src0_dd_i, src1_ddf_i, dst_dd_i, ne00, row_diff, stream);
            break;
        case GGML_TYPE_Q4_K:
            dequantize_mul_mat_vec_q4_K_cuda(src0_dd_i, src1_ddf_i, dst_dd_i, ne00, row_diff, stream);
            break;
        case GGML_TYPE_Q5_K:
            dequantize_mul_mat_vec_q5_K_cuda(src0_dd_i, src1_ddf_i, dst_dd_i, ne00, row_diff, stream);
            break;
        case GGML_TYPE_Q6_K:
            dequantize_mul_mat_vec_q6_K_cuda(src0_dd_i, src1_ddf_i, dst_dd_i, ne00, row_diff, stream);
            break;
        case GGML_TYPE_F16:
            convert_mul_mat_vec_f16_cuda(src0_dd_i, src1_dfloat, dst_dd_i, ne00, row_diff, stream);
            break;
        default:
            GGML_ABORT("fatal error");
            break;
    }

    GGML_UNUSED(src1);
    GGML_UNUSED(dst);
    GGML_UNUSED(src1_ddq_i);
    GGML_UNUSED(src1_ncols);
    GGML_UNUSED(src1_padded_row_size);
}

bool ggml_cuda_dmmv_type_supported(ggml_type src0_type) {
    return src0_type == GGML_TYPE_Q4_0 || src0_type == GGML_TYPE_Q4_1 ||
        src0_type == GGML_TYPE_Q5_0 || src0_type == GGML_TYPE_Q5_1 ||
        src0_type == GGML_TYPE_Q8_0 || src0_type == GGML_TYPE_Q2_K ||
        src0_type == GGML_TYPE_Q3_K || src0_type == GGML_TYPE_Q4_K ||
        src0_type == GGML_TYPE_Q5_K || src0_type == GGML_TYPE_Q6_K ||
        src0_type == GGML_TYPE_F16;
}
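When GGML_CUDA_F16 is defined, dfloat is half, src1 is converted to f16 in the function above, and the kernel's inner loop accumulates with half2 intrinsics so two products are issued per instruction. A standalone illustration of that accumulation trick (a hypothetical helper, not the vendored code):

#include <cuda_fp16.h>

// Two neighbouring products fused into one half2 multiply; the halves are only
// widened to float and summed at the very end (illustrative helper).
static __device__ __forceinline__ float demo_dot2_f16(half2 v, half2 y2) {
    const half2 prod = __hmul2(v, y2); // {v.x*y.x, v.y*y.y}
    return __half2float(__low2half(prod)) + __half2float(__high2half(prod));
}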
llama/ggml-cuda/dmmv.cuh (new file, 46 lines)
@@ -0,0 +1,46 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License
 *
 * Copyright (c) 2023-2024 The ggml authors
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include "common.cuh"

// dmmv = dequantize_mul_mat_vec

// TODO: remove this?
#ifndef GGML_CUDA_DMMV_X
#define GGML_CUDA_DMMV_X 32
#endif

#ifndef GGML_CUDA_MMV_Y
#define GGML_CUDA_MMV_Y 1
#endif

void ggml_cuda_op_dequantize_mul_mat_vec(
    ggml_backend_cuda_context & ctx,
    const ggml_tensor * src0, const ggml_tensor * src1, ggml_tensor * dst, const char * src0_dd_i, const float * src1_ddf_i,
    const char * src1_ddq_i, float * dst_dd_i, const int64_t row_low, const int64_t row_high, const int64_t src1_ncols,
    const int64_t src1_padded_row_size, cudaStream_t stream);

bool ggml_cuda_dmmv_type_supported(ggml_type src0_type);
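The two macros above are tuning knobs: GGML_CUDA_DMMV_X is how many columns each lane advances per inner step and GGML_CUDA_MMV_Y is how many rows (warps) share one thread block. Because both are guarded by #ifndef they can be overridden at build time; a hedged example of achieving the same thing in source (the repo's Makefiles may instead pass these as -D compiler flags):

// Equivalent of -DGGML_CUDA_DMMV_X=64 -DGGML_CUDA_MMV_Y=2 on the compiler
// command line: define the macros before the header is included (illustrative only).
#define GGML_CUDA_DMMV_X 64
#define GGML_CUDA_MMV_Y  2
#include "dmmv.cuh"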
llama/ggml-cuda/fattn-common.cuh (new file, 734 lines)
@@ -0,0 +1,734 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License
 *
 * Copyright (c) 2023-2024 The ggml authors
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#pragma once

#include "common.cuh"
#include "convert.cuh"
#include "vecdotq.cuh"

#include <cstdint>

#define FATTN_KQ_STRIDE 256
#define HALF_MAX_HALF __float2half(65504.0f/2) // Use neg. of this instead of -INFINITY to initialize KQ max vals to avoid NaN upon subtraction.
#define SOFTMAX_FTZ_THRESHOLD -20.0f // Softmax exp. of values smaller than this are flushed to zero to avoid NaNs.
typedef void (* fattn_kernel_t)(
|
||||||
|
const char * __restrict__ Q,
|
||||||
|
const char * __restrict__ K,
|
||||||
|
const char * __restrict__ V,
|
||||||
|
const char * __restrict__ mask,
|
||||||
|
float * __restrict__ dst,
|
||||||
|
float2 * __restrict__ dst_meta,
|
||||||
|
const float scale,
|
||||||
|
const float max_bias,
|
||||||
|
const float m0,
|
||||||
|
const float m1,
|
||||||
|
const uint32_t n_head_log2,
|
||||||
|
const float logit_softcap,
|
||||||
|
const int ne00,
|
||||||
|
const int ne01,
|
||||||
|
const int ne02,
|
||||||
|
const int ne03,
|
||||||
|
const int ne10,
|
||||||
|
const int ne11,
|
||||||
|
const int ne12,
|
||||||
|
const int ne13,
|
||||||
|
const int ne31,
|
||||||
|
const int nb31,
|
||||||
|
const int nb01,
|
||||||
|
const int nb02,
|
||||||
|
const int nb03,
|
||||||
|
const int nb11,
|
||||||
|
const int nb12,
|
||||||
|
const int nb13,
|
||||||
|
const int nb21,
|
||||||
|
const int nb22,
|
||||||
|
const int nb23,
|
||||||
|
const int ne0,
|
||||||
|
const int ne1,
|
||||||
|
const int ne2,
|
||||||
|
const int ne3);
|
||||||
|
|
||||||
|
typedef half (*vec_dot_KQ_f16_t)(
|
||||||
|
const char * __restrict__ K_c, const void * __restrict__ Q_v, const int * __restrict__ Q_q8 , const void * __restrict__ Q_ds);
|
||||||
|
typedef float (*vec_dot_KQ_f32_t)(
|
||||||
|
const char * __restrict__ K_c, const void * __restrict__ Q_v, const int * __restrict__ Q_q8 , const void * __restrict__ Q_ds);
|
||||||
|
|
||||||
|
template<typename T, int D>
|
||||||
|
static __device__ __forceinline__ T vec_dot_fattn_vec_KQ_q4_0(
|
||||||
|
const char * __restrict__ K_c, const void * __restrict__ Q_v, const int * __restrict__ Q_q8, const void * __restrict__ Q_ds_v) {
|
||||||
|
|
||||||
|
const block_q4_0 * K_q4_0 = (const block_q4_0 *) K_c;
|
||||||
|
GGML_UNUSED(Q_v);
|
||||||
|
|
||||||
|
T sum = 0.0f;
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k_KQ_0 = 0; k_KQ_0 < D/sizeof(int); k_KQ_0 += WARP_SIZE) {
|
||||||
|
const int k_KQ = k_KQ_0 + threadIdx.x;
|
||||||
|
|
||||||
|
const int ib = k_KQ / QI8_1;
|
||||||
|
const int iqs4 = k_KQ % QI4_0;
|
||||||
|
const int shift = k_KQ & (QI8_1/2);
|
||||||
|
|
||||||
|
const int v = (get_int_b2(K_q4_0[ib].qs, iqs4) >> shift) & 0x0F0F0F0F;
|
||||||
|
const int u = Q_q8[k_KQ_0/WARP_SIZE];
|
||||||
|
|
||||||
|
const int sumi = ggml_cuda_dp4a(v, u, 0);
|
||||||
|
|
||||||
|
#ifdef FP16_AVAILABLE
|
||||||
|
if (std::is_same<T, half>::value) {
|
||||||
|
const half2 * Q_ds = (const half2 *) Q_ds_v;
|
||||||
|
|
||||||
|
const half2 sum2 = __half2half2(K_q4_0[ib].d) * Q_ds[k_KQ_0/WARP_SIZE];
|
||||||
|
sum += (T) (((half) sumi)*__low2half(sum2) - __high2half(sum2) /* *8/QI8_1 == 1 */);
|
||||||
|
} else
|
||||||
|
#endif // FP16_AVAILABLE
|
||||||
|
{
|
||||||
|
const float2 * Q_ds = (const float2 *) Q_ds_v;
|
||||||
|
|
||||||
|
sum += (T) (__half2float(K_q4_0[ib].d) * (sumi*Q_ds[k_KQ_0/WARP_SIZE].x - (8/QI8_1)*Q_ds[k_KQ_0/WARP_SIZE].y));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return sum;
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename T, int D>
|
||||||
|
static __device__ __forceinline__ T vec_dot_fattn_vec_KQ_q4_1(
|
||||||
|
const char * __restrict__ K_c, const void * __restrict__ Q_v, const int * __restrict__ Q_q8, const void * __restrict__ Q_ds_v) {
|
||||||
|
|
||||||
|
const block_q4_1 * K_q4_1 = (const block_q4_1 *) K_c;
|
||||||
|
GGML_UNUSED(Q_v);
|
||||||
|
|
||||||
|
T sum = 0.0f;
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k_KQ_0 = 0; k_KQ_0 < D/sizeof(int); k_KQ_0 += WARP_SIZE) {
|
||||||
|
const int k_KQ = k_KQ_0 + threadIdx.x;
|
||||||
|
|
||||||
|
const int ib = k_KQ / QI8_1;
|
||||||
|
const int iqs4 = k_KQ % QI4_1;
|
||||||
|
const int shift = k_KQ & (QI8_1/2);
|
||||||
|
|
||||||
|
const int v = (get_int_b4(K_q4_1[ib].qs, iqs4) >> shift) & 0x0F0F0F0F;
|
||||||
|
const int u = Q_q8[k_KQ_0/WARP_SIZE];
|
||||||
|
|
||||||
|
const int sumi = ggml_cuda_dp4a(v, u, 0);
|
||||||
|
|
||||||
|
#ifdef FP16_AVAILABLE
|
||||||
|
if (std::is_same<T, half>::value) {
|
||||||
|
const half2 * Q_ds = (const half2 *) Q_ds_v;
|
||||||
|
|
||||||
|
const half2 d4d8_m4s8 = K_q4_1[ib].dm * Q_ds[k_KQ_0/WARP_SIZE];
|
||||||
|
const half2 sumid4d8_m4s8scaled = d4d8_m4s8 * make_half2(sumi, 1.0f/QI8_1);
|
||||||
|
sum += (T) (__low2half(sumid4d8_m4s8scaled) + __high2half(sumid4d8_m4s8scaled));
|
||||||
|
} else
|
||||||
|
#endif // FP16_AVAILABLE
|
||||||
|
{
|
||||||
|
const float2 * Q_ds = (const float2 *) Q_ds_v;
|
||||||
|
|
||||||
|
const float sumid4d8 = __low2float(K_q4_1[ib].dm)*Q_ds[k_KQ_0/WARP_SIZE].x * sumi;
|
||||||
|
const float m4s8scaled = __high2float(K_q4_1[ib].dm)*Q_ds[k_KQ_0/WARP_SIZE].y / QI8_1;
|
||||||
|
|
||||||
|
sum += (T) (sumid4d8 + m4s8scaled);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return sum;
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename T, int D>
|
||||||
|
static __device__ __forceinline__ T vec_dot_fattn_vec_KQ_q5_0(
|
||||||
|
const char * __restrict__ K_c, const void * __restrict__ Q_v, const int * __restrict__ Q_q8, const void * __restrict__ Q_ds_v) {
|
||||||
|
|
||||||
|
const block_q5_0 * K_q5_0 = (const block_q5_0 *) K_c;
|
||||||
|
GGML_UNUSED(Q_v);
|
||||||
|
|
||||||
|
T sum = 0.0f;
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k_KQ_0 = 0; k_KQ_0 < D/sizeof(int); k_KQ_0 += WARP_SIZE) {
|
||||||
|
const int k_KQ = k_KQ_0 + threadIdx.x;
|
||||||
|
|
||||||
|
const int ib = k_KQ / QI8_1;
|
||||||
|
const int iqs4 = k_KQ % QI5_0;
|
||||||
|
const int iqs8 = k_KQ % QI8_1;
|
||||||
|
const int shift = k_KQ & (QI8_1/2);
|
||||||
|
|
||||||
|
int v = (get_int_b2(K_q5_0[ib].qs, iqs4) >> shift) & 0x0F0F0F0F;
|
||||||
|
const int vh = get_int_b2(K_q5_0[ib].qh, 0) >> (iqs8 * QI5_0);
|
||||||
|
v |= (vh << 4) & 0x00000010; // 0 -> 4
|
||||||
|
v |= (vh << 11) & 0x00001000; // 1 -> 12
|
||||||
|
v |= (vh << 18) & 0x00100000; // 2 -> 20
|
||||||
|
v |= (vh << 25) & 0x10000000; // 3 -> 28
|
||||||
|
|
||||||
|
const int u = Q_q8[k_KQ_0/WARP_SIZE];
|
||||||
|
|
||||||
|
const int sumi = ggml_cuda_dp4a(v, u, 0);
|
||||||
|
|
||||||
|
#ifdef FP16_AVAILABLE
|
||||||
|
if (std::is_same<T, half>::value) {
|
||||||
|
const half2 * Q_ds = (const half2 *) Q_ds_v;
|
||||||
|
|
||||||
|
const half2 sum2 = __half2half2(K_q5_0[ib].d) * Q_ds[k_KQ_0/WARP_SIZE];
|
||||||
|
sum += (T) (((half) sumi)*__low2half(sum2) - __high2half(sum2)*__float2half(2.0f)) /* *16/QI8_1 == 2 */;
|
||||||
|
} else
|
||||||
|
#endif // FP16_AVAILABLE
|
||||||
|
{
|
||||||
|
const float2 * Q_ds = (const float2 *) Q_ds_v;
|
||||||
|
|
||||||
|
sum += (T) (__half2float(K_q5_0[ib].d) * (sumi*Q_ds[k_KQ_0/WARP_SIZE].x - (16/QI8_1)*Q_ds[k_KQ_0/WARP_SIZE].y));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return sum;
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename T, int D>
|
||||||
|
static __device__ __forceinline__ T vec_dot_fattn_vec_KQ_q5_1(
|
||||||
|
const char * __restrict__ K_c, const void * __restrict__ Q_v, const int * __restrict__ Q_q8, const void * __restrict__ Q_ds_v) {
|
||||||
|
|
||||||
|
const block_q5_1 * K_q5_1 = (const block_q5_1 *) K_c;
|
||||||
|
GGML_UNUSED(Q_v);
|
||||||
|
|
||||||
|
T sum = 0.0f;
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k_KQ_0 = 0; k_KQ_0 < D/sizeof(int); k_KQ_0 += WARP_SIZE) {
|
||||||
|
const int k_KQ = k_KQ_0 + threadIdx.x;
|
||||||
|
|
||||||
|
const int ib = k_KQ / QI8_1;
|
||||||
|
const int iqs4 = k_KQ % QI5_1;
|
||||||
|
const int iqs8 = k_KQ % QI8_1;
|
||||||
|
const int shift = k_KQ & (QI8_1/2);
|
||||||
|
|
||||||
|
int v = (get_int_b2(K_q5_1[ib].qs, iqs4) >> shift) & 0x0F0F0F0F;
|
||||||
|
const int vh = get_int_b2(K_q5_1[ib].qh, 0) >> (iqs8 * QI5_1);
|
||||||
|
v |= (vh << 4) & 0x00000010; // 0 -> 4
|
||||||
|
v |= (vh << 11) & 0x00001000; // 1 -> 12
|
||||||
|
v |= (vh << 18) & 0x00100000; // 2 -> 20
|
||||||
|
v |= (vh << 25) & 0x10000000; // 3 -> 28
|
||||||
|
|
||||||
|
const int u = Q_q8[k_KQ_0/WARP_SIZE];
|
||||||
|
|
||||||
|
const int sumi = ggml_cuda_dp4a(v, u, 0);
|
||||||
|
|
||||||
|
#ifdef FP16_AVAILABLE
|
||||||
|
if (std::is_same<T, half>::value) {
|
||||||
|
const half2 * Q_ds = (const half2 *) Q_ds_v;
|
||||||
|
|
||||||
|
const half2 d5d8_m5s8 = K_q5_1[ib].dm * Q_ds[k_KQ_0/WARP_SIZE];
|
||||||
|
const half2 sumid5d8_m5s8scaled = d5d8_m5s8 * make_half2(sumi, 1.0f/QI8_1);
|
||||||
|
sum += (T) (__low2half(sumid5d8_m5s8scaled) + __high2half(sumid5d8_m5s8scaled));
|
||||||
|
} else
|
||||||
|
#endif // FP16_AVAILABLE
|
||||||
|
{
|
||||||
|
const float2 * Q_ds = (const float2 *) Q_ds_v;
|
||||||
|
|
||||||
|
const float sumid5d8 = __low2float(K_q5_1[ib].dm)*Q_ds[k_KQ_0/WARP_SIZE].x * sumi;
|
||||||
|
const float m5s8scaled = __high2float(K_q5_1[ib].dm)*Q_ds[k_KQ_0/WARP_SIZE].y / QI8_1;
|
||||||
|
|
||||||
|
sum += (T) (sumid5d8 + m5s8scaled);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return sum;
|
||||||
|
}
|
||||||
|
|
||||||
|
template <typename T, int D>
|
||||||
|
static __device__ __forceinline__ T vec_dot_fattn_vec_KQ_q8_0(
|
||||||
|
const char * __restrict__ K_c, const void * __restrict__ Q_v, const int * __restrict__ Q_q8, const void * __restrict__ Q_ds_v) {
|
||||||
|
|
||||||
|
const block_q8_0 * K_q8_0 = (const block_q8_0 *) K_c;
|
||||||
|
GGML_UNUSED(Q_v);
|
||||||
|
|
||||||
|
T sum = 0.0f;
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k_KQ_0 = 0; k_KQ_0 < D/sizeof(int); k_KQ_0 += WARP_SIZE) {
|
||||||
|
const int k_KQ = k_KQ_0 + threadIdx.x;
|
||||||
|
|
||||||
|
const int ib = k_KQ / QI8_0;
|
||||||
|
const int iqs = k_KQ % QI8_0;
|
||||||
|
|
||||||
|
const int v = get_int_b2(K_q8_0[ib].qs, iqs);
|
||||||
|
|
||||||
|
T Q_d;
|
||||||
|
if (std::is_same<T, half>::value) {
|
||||||
|
const half2 * Q_ds = (const half2 *) Q_ds_v;
|
||||||
|
Q_d = __low2half(Q_ds[k_KQ_0/WARP_SIZE]);
|
||||||
|
} else {
|
||||||
|
const float2 * Q_ds = (const float2 *) Q_ds_v;
|
||||||
|
Q_d = Q_ds[k_KQ_0/WARP_SIZE].x;
|
||||||
|
}
|
||||||
|
|
||||||
|
sum += vec_dot_q8_0_q8_1_impl<T, 1>(&v, &Q_q8[k_KQ_0/WARP_SIZE], K_q8_0[ib].d, Q_d);
|
||||||
|
}
|
||||||
|
|
||||||
|
return sum;
|
||||||
|
}
|
||||||
|
|
||||||
|
template <typename T, int D>
|
||||||
|
static __device__ __forceinline__ T vec_dot_fattn_vec_KQ_f16(
|
||||||
|
const char * __restrict__ K_c, const void * __restrict__ Q_v, const int * __restrict__ Q_q8 , const void * __restrict__ Q_ds_v) {
|
||||||
|
|
||||||
|
const half2 * K_h2 = (const half2 *) K_c;
|
||||||
|
GGML_UNUSED(Q_q8);
|
||||||
|
GGML_UNUSED(Q_ds_v);
|
||||||
|
|
||||||
|
#ifdef FP16_AVAILABLE
|
||||||
|
if (std::is_same<T, half>::value) {
|
||||||
|
const half2 * Q_h2 = (const half2 *) Q_v;
|
||||||
|
|
||||||
|
half2 sum2 = make_half2(0.0f, 0.0f);
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k_KQ_0 = 0; k_KQ_0 < D/2; k_KQ_0 += WARP_SIZE) {
|
||||||
|
const int k_KQ = k_KQ_0 + threadIdx.x;
|
||||||
|
|
||||||
|
const half2 K_ik = K_h2[k_KQ];
|
||||||
|
sum2 += K_ik * Q_h2[k_KQ_0/WARP_SIZE];
|
||||||
|
}
|
||||||
|
|
||||||
|
return __low2half(sum2) + __high2half(sum2);
|
||||||
|
}
|
||||||
|
#endif // FP16_AVAILABLE
|
||||||
|
|
||||||
|
const float2 * Q_f2 = (const float2 *) Q_v;
|
||||||
|
|
||||||
|
float sum = 0.0f;
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k_KQ_0 = 0; k_KQ_0 < D/2; k_KQ_0 += WARP_SIZE) {
|
||||||
|
const int k_KQ = k_KQ_0 + threadIdx.x;
|
||||||
|
|
||||||
|
const half2 K_ik = K_h2[k_KQ];
|
||||||
|
sum += __low2float(K_ik) * Q_f2[k_KQ_0/WARP_SIZE].x;
|
||||||
|
sum += __high2float(K_ik) * Q_f2[k_KQ_0/WARP_SIZE].y;
|
||||||
|
}
|
||||||
|
|
||||||
|
return sum;
|
||||||
|
}
|
||||||
|
|
||||||
|
template <typename Tds>
|
||||||
|
static __device__ __forceinline__ void quantize_q8_1_to_shared(
|
||||||
|
const float * __restrict__ x, const float scale, int * __restrict__ yq32, void * __restrict__ yds) {
|
||||||
|
|
||||||
|
float vals[sizeof(int)] = {0.0f};
|
||||||
|
#pragma unroll
|
||||||
|
for (int l = 0; l < sizeof(int); ++l) {
|
||||||
|
vals[l] = scale * x[4*threadIdx.x + l];
|
||||||
|
}
|
||||||
|
|
||||||
|
float amax = fabsf(vals[0]);
|
||||||
|
float sum = vals[0];
|
||||||
|
#pragma unroll
|
||||||
|
for (int l = 1; l < sizeof(int); ++l) {
|
||||||
|
amax = fmaxf(amax, fabsf(vals[l]));
|
||||||
|
sum += vals[l];
|
||||||
|
}
|
||||||
|
#pragma unroll
|
||||||
|
for (int mask = QI8_1/2; mask > 0; mask >>= 1) {
|
||||||
|
amax = fmaxf(amax, __shfl_xor_sync(0xFFFFFFFF, amax, mask, 32));
|
||||||
|
sum += __shfl_xor_sync(0xFFFFFFFF, sum, mask, 32);
|
||||||
|
}
|
||||||
|
|
||||||
|
const float d = amax / 127;
|
||||||
|
int q32 = 0;
|
||||||
|
int8_t * q8 = (int8_t *) &q32;
|
||||||
|
|
||||||
|
if (d != 0.0f) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int l = 0; l < sizeof(int); ++l) {
|
||||||
|
q8[l] = roundf(vals[l] / d);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
yq32[threadIdx.x] = q32;
|
||||||
|
if (threadIdx.x % QI8_1 == 0) {
|
||||||
|
if (std::is_same<Tds, half2>::value) {
|
||||||
|
((half2 *) yds)[threadIdx.x/QI8_1] = make_half2(d, sum);
|
||||||
|
} else {
|
||||||
|
((float2 *) yds)[threadIdx.x/QI8_1] = make_float2(d, sum);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
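quantize_q8_1_to_shared converts a warp's worth of scaled Q values to 8-bit integers in shared memory, recording for each group the scale d = amax/127 and the pre-quantization sum (the sum feeds the correction term in the q4_0/q4_1/q5_x dot products above). A host-side scalar reference of the same scheme for one group of n values (illustrative only, not the vendored device code):

#include <algorithm>
#include <cmath>
#include <cstdint>

// Quantize n floats to int8 with a shared scale, and record the group sum.
static void demo_quantize_q8_1(const float * v, int n, int8_t * q, float & d, float & sum) {
    float amax = 0.0f;
    sum = 0.0f;
    for (int i = 0; i < n; ++i) {
        amax = std::max(amax, std::fabs(v[i]));
        sum += v[i];
    }
    d = amax / 127.0f;
    for (int i = 0; i < n; ++i) {
        q[i] = d == 0.0f ? 0 : (int8_t) std::lround(v[i] / d);
    }
}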
|
||||||
|
|
||||||
|
typedef half (*dequantize_1_f16_t)(const void *, const int64_t);
|
||||||
|
typedef float (*dequantize_1_f32_t)(const void *, const int64_t);
|
||||||
|
|
||||||
|
template <typename T>
|
||||||
|
static __device__ __forceinline__ T dequantize_1_q4_0(const void * __restrict__ vx, const int64_t i) {
|
||||||
|
const block_q4_0 * x = (const block_q4_0 *) vx;
|
||||||
|
|
||||||
|
const int64_t ib = i / QK4_0;
|
||||||
|
const int iqs = i % (QK4_0/2);
|
||||||
|
const int shift = (i % QK4_0) / (QK4_0/2);
|
||||||
|
|
||||||
|
const T d = x[ib].d;
|
||||||
|
const int q0 = x[ib].qs[iqs];
|
||||||
|
const int q = ((q0 >> (4*shift)) & 0x0F) - 8;
|
||||||
|
|
||||||
|
#ifdef FP16_AVAILABLE
|
||||||
|
if (std::is_same<T, half>::value) {
|
||||||
|
return ((half) d)*((half) q);
|
||||||
|
}
|
||||||
|
#endif // FP16_AVAILABLE
|
||||||
|
|
||||||
|
return ((float) d)*((float) q);
|
||||||
|
}
|
||||||
|
|
||||||
|
template <typename T>
|
||||||
|
static __device__ __forceinline__ T dequantize_1_q4_1(const void * __restrict__ vx, const int64_t i) {
|
||||||
|
const block_q4_1 * x = (const block_q4_1 *) vx;
|
||||||
|
|
||||||
|
const int64_t ib = i / QK4_1;
|
||||||
|
const int iqs = i % (QK4_1/2);
|
||||||
|
const int shift = (i % QK4_1) / (QK4_1/2);
|
||||||
|
|
||||||
|
const half2 dm = x[ib].dm;
|
||||||
|
const int q0 = x[ib].qs[iqs];
|
||||||
|
const int q = ((q0 >> (4*shift)) & 0x0F);
|
||||||
|
|
||||||
|
#ifdef FP16_AVAILABLE
|
||||||
|
if (std::is_same<T, half>::value) {
|
||||||
|
return __low2half(dm)*((half) q) + __high2half(dm);
|
||||||
|
}
|
||||||
|
#endif // FP16_AVAILABLE
|
||||||
|
|
||||||
|
return __low2float(dm)*((float) q) + __high2float(dm);
|
||||||
|
}
|
||||||
|
|
||||||
|
template <typename T>
|
||||||
|
static __device__ __forceinline__ T dequantize_1_q5_0(const void * __restrict__ vx, const int64_t i) {
|
||||||
|
const block_q5_0 * x = (const block_q5_0 *) vx;
|
||||||
|
|
||||||
|
const int64_t ib = i / QK5_0;
|
||||||
|
const int idq = i % QK5_0;
|
||||||
|
const int iqs = i % (QK5_0/2);
|
||||||
|
const int shift = (i % QK5_0) / (QK5_0/2);
|
||||||
|
|
||||||
|
const T d = x[ib].d;
|
||||||
|
const int ql0 = x[ib].qs[iqs];
|
||||||
|
const int qh0 = get_int_b2(x[ib].qh, 0);
|
||||||
|
const int ql = ((ql0 >> (4*shift)) & 0x0F);
|
||||||
|
const int qh = ((qh0 >> idq) << 4) & 0x10;
|
||||||
|
const int q = (ql | qh) - 16;
|
||||||
|
|
||||||
|
#ifdef FP16_AVAILABLE
|
||||||
|
if (std::is_same<T, half>::value) {
|
||||||
|
return ((half) d)*((half) q);
|
||||||
|
}
|
||||||
|
#endif // FP16_AVAILABLE
|
||||||
|
|
||||||
|
return ((float) d)*((float) q);
|
||||||
|
}
|
||||||
|
|
||||||
|
template <typename T>
|
||||||
|
static __device__ __forceinline__ T dequantize_1_q5_1(const void * __restrict__ vx, const int64_t i) {
|
||||||
|
const block_q5_1 * x = (const block_q5_1 *) vx;
|
||||||
|
|
||||||
|
const int64_t ib = i / QK5_1;
|
||||||
|
const int idq = i % QK5_1;
|
||||||
|
const int iqs = i % (QK5_1/2);
|
||||||
|
const int shift = (i % QK5_1) / (QK5_1/2);
|
||||||
|
|
||||||
|
const half2 dm = x[ib].dm;
|
||||||
|
const int ql0 = x[ib].qs[iqs];
|
||||||
|
const int qh0 = get_int_b4(x[ib].qh, 0);
|
||||||
|
const int ql = ((ql0 >> (4*shift)) & 0x0F);
|
||||||
|
const int qh = ((qh0 >> idq) << 4) & 0x10;
|
||||||
|
const int q = (ql | qh);
|
||||||
|
|
||||||
|
#ifdef FP16_AVAILABLE
|
||||||
|
if (std::is_same<T, half>::value) {
|
||||||
|
return __low2half(dm)*((half) q) + __high2half(dm);
|
||||||
|
}
|
||||||
|
#endif // FP16_AVAILABLE
|
||||||
|
|
||||||
|
return __low2float(dm)*((float) q) + __high2float(dm);
|
||||||
|
}
|
||||||
|
|
||||||
|
template <typename T>
|
||||||
|
static __device__ __forceinline__ T dequantize_1_q8_0(const void * __restrict__ vx, const int64_t i) {
|
||||||
|
const block_q8_0 * x = (const block_q8_0 *) vx;
|
||||||
|
|
||||||
|
const int64_t ib = i / QK8_0;
|
||||||
|
const int iqs = i % QK8_0;
|
||||||
|
|
||||||
|
const T d = x[ib].d;
|
||||||
|
const int q = x[ib].qs[iqs];
|
||||||
|
|
||||||
|
#ifdef FP16_AVAILABLE
|
||||||
|
if (std::is_same<T, half>::value) {
|
||||||
|
return ((half) d)*((half) q);
|
||||||
|
}
|
||||||
|
#endif // FP16_AVAILABLE
|
||||||
|
|
||||||
|
return ((float) d)*((float) q);
|
||||||
|
}
|
||||||
|
|
||||||
|
template <typename T>
|
||||||
|
static __device__ __forceinline__ T dequantize_1_f16(const void * __restrict__ vx, const int64_t i) {
|
||||||
|
const half * x = (const half *) vx;
|
||||||
|
|
||||||
|
return x[i];
|
||||||
|
}
|
||||||
|
|
||||||
|
template <int D>
|
||||||
|
constexpr __device__ vec_dot_KQ_f16_t get_vec_dot_KQ_f16(ggml_type type_K) {
|
||||||
|
return type_K == GGML_TYPE_Q4_0 ? vec_dot_fattn_vec_KQ_q4_0<half, D> :
|
||||||
|
type_K == GGML_TYPE_Q4_1 ? vec_dot_fattn_vec_KQ_q4_1<half, D> :
|
||||||
|
type_K == GGML_TYPE_Q5_0 ? vec_dot_fattn_vec_KQ_q5_0<half, D> :
|
||||||
|
type_K == GGML_TYPE_Q5_1 ? vec_dot_fattn_vec_KQ_q5_1<half, D> :
|
||||||
|
type_K == GGML_TYPE_Q8_0 ? vec_dot_fattn_vec_KQ_q8_0<half, D> :
|
||||||
|
type_K == GGML_TYPE_F16 ? vec_dot_fattn_vec_KQ_f16<half, D> :
|
||||||
|
nullptr;
|
||||||
|
}
|
||||||
|
|
||||||
|
template <int D>
|
||||||
|
constexpr __device__ vec_dot_KQ_f32_t get_vec_dot_KQ_f32(ggml_type type_K) {
|
||||||
|
return type_K == GGML_TYPE_Q4_0 ? vec_dot_fattn_vec_KQ_q4_0<float, D> :
|
||||||
|
type_K == GGML_TYPE_Q4_1 ? vec_dot_fattn_vec_KQ_q4_1<float, D> :
|
||||||
|
type_K == GGML_TYPE_Q5_0 ? vec_dot_fattn_vec_KQ_q5_0<float, D> :
|
||||||
|
type_K == GGML_TYPE_Q5_1 ? vec_dot_fattn_vec_KQ_q5_1<float, D> :
|
||||||
|
type_K == GGML_TYPE_Q8_0 ? vec_dot_fattn_vec_KQ_q8_0<float, D> :
|
||||||
|
type_K == GGML_TYPE_F16 ? vec_dot_fattn_vec_KQ_f16<float, D> :
|
||||||
|
nullptr;
|
||||||
|
}
|
||||||
|
|
||||||
|
constexpr __device__ dequantize_1_f16_t get_dequantize_1_f16(ggml_type type_V) {
|
||||||
|
return type_V == GGML_TYPE_Q4_0 ? dequantize_1_q4_0<half> :
|
||||||
|
type_V == GGML_TYPE_Q4_1 ? dequantize_1_q4_1<half> :
|
||||||
|
type_V == GGML_TYPE_Q5_0 ? dequantize_1_q5_0<half> :
|
||||||
|
type_V == GGML_TYPE_Q5_1 ? dequantize_1_q5_1<half> :
|
||||||
|
type_V == GGML_TYPE_Q8_0 ? dequantize_1_q8_0<half> :
|
||||||
|
type_V == GGML_TYPE_F16 ? dequantize_1_f16<half> :
|
||||||
|
nullptr;
|
||||||
|
}
|
||||||
|
|
||||||
|
constexpr __device__ dequantize_1_f32_t get_dequantize_1_f32(ggml_type type_V) {
|
||||||
|
return type_V == GGML_TYPE_Q4_0 ? dequantize_1_q4_0<float> :
|
||||||
|
type_V == GGML_TYPE_Q4_1 ? dequantize_1_q4_1<float> :
|
||||||
|
type_V == GGML_TYPE_Q5_0 ? dequantize_1_q5_0<float> :
|
||||||
|
type_V == GGML_TYPE_Q5_1 ? dequantize_1_q5_1<float> :
|
||||||
|
type_V == GGML_TYPE_Q8_0 ? dequantize_1_q8_0<float> :
|
||||||
|
type_V == GGML_TYPE_F16 ? dequantize_1_f16<float> :
|
||||||
|
nullptr;
|
||||||
|
}
|
||||||
|
|
||||||
|
template<int D, int parallel_blocks> // D == head size
|
||||||
|
#if !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__))
|
||||||
|
__launch_bounds__(D, 1)
|
||||||
|
#endif // !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__))
|
||||||
|
static __global__ void flash_attn_combine_results(
|
||||||
|
const float * __restrict__ VKQ_parts,
|
||||||
|
const float2 * __restrict__ VKQ_meta,
|
||||||
|
float * __restrict__ dst) {
|
||||||
|
VKQ_parts += parallel_blocks*D * gridDim.y*blockIdx.x;
|
||||||
|
VKQ_meta += parallel_blocks * gridDim.y*blockIdx.x;
|
||||||
|
dst += D * gridDim.y*blockIdx.x;
|
||||||
|
|
||||||
|
const int tid = threadIdx.x;
|
||||||
|
__builtin_assume(tid < D);
|
||||||
|
|
||||||
|
__shared__ float2 meta[parallel_blocks];
|
||||||
|
if (tid < 2*parallel_blocks) {
|
||||||
|
((float *) meta)[threadIdx.x] = ((const float *)VKQ_meta) [blockIdx.y*(2*parallel_blocks) + tid];
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
float kqmax = meta[0].x;
|
||||||
|
#pragma unroll
|
||||||
|
for (int l = 1; l < parallel_blocks; ++l) {
|
||||||
|
kqmax = max(kqmax, meta[l].x);
|
||||||
|
}
|
||||||
|
|
||||||
|
float VKQ_numerator = 0.0f;
|
||||||
|
float VKQ_denominator = 0.0f;
|
||||||
|
#pragma unroll
|
||||||
|
for (int l = 0; l < parallel_blocks; ++l) {
|
||||||
|
const float diff = meta[l].x - kqmax;
|
||||||
|
const float KQ_max_scale = expf(diff);
|
||||||
|
const uint32_t ftz_mask = 0xFFFFFFFF * (diff > SOFTMAX_FTZ_THRESHOLD);
|
||||||
|
*((uint32_t *) &KQ_max_scale) &= ftz_mask;
|
||||||
|
|
||||||
|
VKQ_numerator += KQ_max_scale * VKQ_parts[l*gridDim.y*D + blockIdx.y*D + tid];
|
||||||
|
VKQ_denominator += KQ_max_scale * meta[l].y;
|
||||||
|
}
|
||||||
|
|
||||||
|
dst[blockIdx.y*D + tid] = VKQ_numerator / VKQ_denominator;
|
||||||
|
}
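flash_attn_combine_results merges the outputs of parallel_blocks partial attention passes over disjoint KV chunks: each chunk reports its running logit maximum and the matching exponential sum, and the partials are rescaled to a common maximum before the final normalization (the ftz mask only flushes vanishing scales to zero). A sketch of the same combination rule for two partial results, written as a host-side illustration:

#include <cmath>

// Each partial reports (m_i = max logit seen, s_i = sum of exp(logit - m_i),
// o_i = unnormalized output). Rescaling by exp(m_i - m) makes them compatible.
static float demo_combine(float m1, float s1, float o1,
                          float m2, float s2, float o2) {
    const float m  = std::fmax(m1, m2);
    const float c1 = std::exp(m1 - m);
    const float c2 = std::exp(m2 - m);
    return (c1*o1 + c2*o2) / (c1*s1 + c2*s2); // combined, normalized output
}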
|
||||||
|
|
||||||
|
static void on_no_fattn_vec_case(const int D) {
|
||||||
|
if (D == 64) {
|
||||||
|
fprintf(stderr, "Unsupported KV type combination for head_size 64.\n");
|
||||||
|
fprintf(stderr, "By default only f16 KV cache is supported.\n");
|
||||||
|
fprintf(stderr, "Compile with GGML_CUDA_FA_ALL_QUANTS for V cache quantization support.\n");
|
||||||
|
GGML_ABORT("fatal error");
|
||||||
|
} else if (D == 128) {
|
||||||
|
fprintf(stderr, "Unsupported KV type combination for head_size 128.\n");
|
||||||
|
fprintf(stderr, "Supported combinations:\n");
|
||||||
|
fprintf(stderr, " - K == q4_0, V == q4_0, 4.50 BPV\n");
|
||||||
|
fprintf(stderr, " - K == q8_0, V == q8_0, 8.50 BPV\n");
|
||||||
|
fprintf(stderr, " - K == f16, V == f16, 16.00 BPV\n");
|
||||||
|
fprintf(stderr, "Compile with GGML_CUDA_FA_ALL_QUANTS for all combinations of q4_0, q4_1, q5_0, q5_1, q8_0, and f16.\n");
|
||||||
|
GGML_ABORT("fatal error");
|
||||||
|
} else {
|
||||||
|
fprintf(stderr, "Unsupported KV type combination for head_size 256.\n");
|
||||||
|
fprintf(stderr, "Only f16 is supported.\n");
|
||||||
|
GGML_ABORT("fatal error");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
template <int D, int parallel_blocks>
|
||||||
|
void launch_fattn(
|
||||||
|
ggml_backend_cuda_context & ctx, ggml_tensor * dst, fattn_kernel_t fattn_kernel,
|
||||||
|
const int nwarps, const int cols_per_block, const bool need_f16_K, const bool need_f16_V
|
||||||
|
) {
|
||||||
|
const ggml_tensor * Q = dst->src[0];
|
||||||
|
const ggml_tensor * K = dst->src[1];
|
||||||
|
const ggml_tensor * V = dst->src[2];
|
||||||
|
|
||||||
|
const ggml_tensor * mask = dst->src[3];
|
||||||
|
|
||||||
|
ggml_tensor * KQV = dst;
|
||||||
|
|
||||||
|
GGML_ASSERT(Q->type == GGML_TYPE_F32);
|
||||||
|
GGML_ASSERT(KQV->type == GGML_TYPE_F32);
|
||||||
|
|
||||||
|
GGML_ASSERT(!mask || mask->type == GGML_TYPE_F16);
|
||||||
|
GGML_ASSERT(!mask || mask->ne[1] >= GGML_PAD(Q->ne[1], 16) &&
|
||||||
|
"the Flash-Attention CUDA kernel requires the mask to be padded to 16 and at least n_queries big");
|
||||||
|
|
||||||
|
GGML_ASSERT(K->ne[1] % FATTN_KQ_STRIDE == 0 && "Incorrect KV cache padding.");
|
||||||
|
|
||||||
|
ggml_cuda_pool & pool = ctx.pool();
|
||||||
|
cudaStream_t main_stream = ctx.stream();
|
||||||
|
|
||||||
|
ggml_cuda_pool_alloc<half> K_f16(pool);
|
||||||
|
ggml_cuda_pool_alloc<half> V_f16(pool);
|
||||||
|
ggml_cuda_pool_alloc<float> dst_tmp(pool);
|
||||||
|
ggml_cuda_pool_alloc<float2> dst_tmp_meta(pool);
|
||||||
|
|
||||||
|
char * K_data = (char *) K->data;
|
||||||
|
size_t nb11 = K->nb[1];
|
||||||
|
size_t nb12 = K->nb[2];
|
||||||
|
size_t nb13 = K->nb[3];
|
||||||
|
|
||||||
|
char * V_data = (char *) V->data;
|
||||||
|
size_t nb21 = V->nb[1];
|
||||||
|
size_t nb22 = V->nb[2];
|
||||||
|
size_t nb23 = V->nb[3];
|
||||||
|
|
||||||
|
if (need_f16_K && K->type != GGML_TYPE_F16) {
|
||||||
|
K_f16.alloc(ggml_nelements(K));
|
||||||
|
to_fp16_cuda_t to_fp16 = ggml_get_to_fp16_cuda(K->type);
|
||||||
|
to_fp16(K_data, K_f16.ptr, ggml_nelements(K), main_stream);
|
||||||
|
K_data = (char *) K_f16.ptr;
|
||||||
|
|
||||||
|
const size_t bs = ggml_blck_size(K->type);
|
||||||
|
const size_t ts = ggml_type_size(K->type);
|
||||||
|
|
||||||
|
nb11 = nb11*bs*sizeof(half)/ts;
|
||||||
|
nb12 = nb12*bs*sizeof(half)/ts;
|
||||||
|
nb13 = nb13*bs*sizeof(half)/ts;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (need_f16_V && V->type != GGML_TYPE_F16) {
|
||||||
|
V_f16.alloc(ggml_nelements(V));
|
||||||
|
to_fp16_cuda_t to_fp16 = ggml_get_to_fp16_cuda(V->type);
|
||||||
|
to_fp16(V_data, V_f16.ptr, ggml_nelements(V), main_stream);
|
||||||
|
V_data = (char *) V_f16.ptr;
|
||||||
|
|
||||||
|
const size_t bs = ggml_blck_size(V->type);
|
||||||
|
const size_t ts = ggml_type_size(V->type);
|
||||||
|
|
||||||
|
nb21 = nb21*bs*sizeof(half)/ts;
|
||||||
|
nb22 = nb22*bs*sizeof(half)/ts;
|
||||||
|
nb23 = nb23*bs*sizeof(half)/ts;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (parallel_blocks > 1) {
|
||||||
|
dst_tmp.alloc(parallel_blocks*ggml_nelements(KQV));
|
||||||
|
dst_tmp_meta.alloc(parallel_blocks*ggml_nrows(KQV));
|
||||||
|
}
|
||||||
|
|
||||||
|
const dim3 block_dim(WARP_SIZE, nwarps, 1);
|
||||||
|
const dim3 blocks_num(parallel_blocks*((Q->ne[1] + cols_per_block - 1) / cols_per_block), Q->ne[2], Q->ne[3]);
|
||||||
|
const int shmem = 0;
|
||||||
|
|
||||||
|
float scale = 1.0f;
|
||||||
|
float max_bias = 0.0f;
|
||||||
|
float logit_softcap = 0.0f;
|
||||||
|
|
||||||
|
memcpy(&scale, (float *) KQV->op_params + 0, sizeof(float));
|
||||||
|
memcpy(&max_bias, (float *) KQV->op_params + 1, sizeof(float));
|
||||||
|
memcpy(&logit_softcap, (float *) KQV->op_params + 2, sizeof(float));
|
||||||
|
|
||||||
|
if (logit_softcap != 0.0f) {
|
||||||
|
scale /= logit_softcap;
|
||||||
|
}
|
||||||
|
|
||||||
|
const uint32_t n_head = Q->ne[2];
|
||||||
|
const uint32_t n_head_log2 = 1u << (uint32_t) floorf(log2f((float) n_head));
|
||||||
|
|
||||||
|
const float m0 = powf(2.0f, -(max_bias ) / n_head_log2);
|
||||||
|
const float m1 = powf(2.0f, -(max_bias / 2.0f) / n_head_log2);
|
||||||
|
|
||||||
|
fattn_kernel<<<blocks_num, block_dim, shmem, main_stream>>>(
|
||||||
|
(const char *) Q->data,
|
||||||
|
K_data,
|
||||||
|
V_data,
|
||||||
|
mask ? ((const char *) mask->data) : nullptr,
|
||||||
|
(parallel_blocks) == 1 ? (float *) KQV->data : dst_tmp.ptr, dst_tmp_meta.ptr,
|
||||||
|
scale, max_bias, m0, m1, n_head_log2, logit_softcap,
|
||||||
|
Q->ne[0], Q->ne[1], Q->ne[2], Q->ne[3],
|
||||||
|
K->ne[0], K->ne[1], K->ne[2], K->ne[3],
|
||||||
|
mask ? mask->ne[1] : 0, mask ? mask->nb[1] : 0,
|
||||||
|
Q->nb[1], Q->nb[2], Q->nb[3],
|
||||||
|
nb11, nb12, nb13,
|
||||||
|
nb21, nb22, nb23,
|
||||||
|
KQV->ne[0], KQV->ne[1], KQV->ne[2], KQV->ne[3]
|
||||||
|
);
|
||||||
|
CUDA_CHECK(cudaGetLastError());
|
||||||
|
|
||||||
|
if ((parallel_blocks) == 1) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const dim3 block_dim_combine(D, 1, 1);
|
||||||
|
const dim3 blocks_num_combine(Q->ne[1], blocks_num.y, blocks_num.z);
|
||||||
|
const int shmem_combine = 0;
|
||||||
|
|
||||||
|
flash_attn_combine_results<D, parallel_blocks>
|
||||||
|
<<<blocks_num_combine, block_dim_combine, shmem_combine, main_stream>>>
|
||||||
|
(dst_tmp.ptr, dst_tmp_meta.ptr, (float *) KQV->data);
|
||||||
|
CUDA_CHECK(cudaGetLastError());
|
||||||
|
}
|
llama/ggml-cuda/fattn-tile-f16.cu (new file, 379 lines)
@@ -0,0 +1,379 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License
 *
 * Copyright (c) 2023-2024 The ggml authors
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include "common.cuh"
#include "fattn-common.cuh"
#include "fattn-tile-f16.cuh"

#define FATTN_KQ_STRIDE_TILE_F16 64
template<int D, int ncols, int nwarps, int parallel_blocks, bool use_logit_softcap> // D == head size
|
||||||
|
#if !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__))
|
||||||
|
__launch_bounds__(nwarps*WARP_SIZE, 1)
|
||||||
|
#endif // !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__))
|
||||||
|
static __global__ void flash_attn_tile_ext_f16(
|
||||||
|
const char * __restrict__ Q,
|
||||||
|
const char * __restrict__ K,
|
||||||
|
const char * __restrict__ V,
|
||||||
|
const char * __restrict__ mask,
|
||||||
|
float * __restrict__ dst,
|
||||||
|
float2 * __restrict__ dst_meta,
|
||||||
|
const float scale,
|
||||||
|
const float max_bias,
|
||||||
|
const float m0,
|
||||||
|
const float m1,
|
||||||
|
const uint32_t n_head_log2,
|
||||||
|
const float logit_softcap,
|
||||||
|
const int ne00,
|
||||||
|
const int ne01,
|
||||||
|
const int ne02,
|
||||||
|
const int ne03,
|
||||||
|
const int ne10,
|
||||||
|
const int ne11,
|
||||||
|
const int ne12,
|
||||||
|
const int ne13,
|
||||||
|
const int ne31,
|
||||||
|
const int nb31,
|
||||||
|
const int nb01,
|
||||||
|
const int nb02,
|
||||||
|
const int nb03,
|
||||||
|
const int nb11,
|
||||||
|
const int nb12,
|
||||||
|
const int nb13,
|
||||||
|
const int nb21,
|
||||||
|
const int nb22,
|
||||||
|
const int nb23,
|
||||||
|
const int ne0,
|
||||||
|
const int ne1,
|
||||||
|
const int ne2,
|
||||||
|
const int ne3) {
|
||||||
|
#ifdef FP16_AVAILABLE
|
||||||
|
// Skip unused kernel variants for faster compilation:
|
||||||
|
if (use_logit_softcap && !(D == 128 || D == 256)) {
|
||||||
|
NO_DEVICE_CODE;
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
//In this kernel Q, K, V are matrices while i, j, k are matrix indices.
|
||||||
|
|
||||||
|
const int ic0 = (blockIdx.x / parallel_blocks) * ncols; // Index of the Q/QKV column to work on.
|
||||||
|
const int ip = blockIdx.x % parallel_blocks; // Index in group of blocks running for the same column in parallel.
|
||||||
|
|
||||||
|
const int gqa_ratio = ne02 / ne12; // With grouped query attention there are > 1 Q matrices per K, V matrix.
|
||||||
|
const float2 * Q_f2 = (const float2 *) (Q + nb02* blockIdx.y + nb01*ic0);
|
||||||
|
const half2 * K_h2 = (const half2 *) (K + nb12*(blockIdx.y / gqa_ratio));
|
||||||
|
const half2 * V_h2 = (const half2 *) (V + nb12*(blockIdx.y / gqa_ratio)); // K and V have same shape
|
||||||
|
const half * maskh = (const half *) mask + ne11*ic0;
|
||||||
|
|
||||||
|
const int stride_KV2 = nb11 / sizeof(half2);
|
||||||
|
|
||||||
|
const float slopef = get_alibi_slope(max_bias, blockIdx.y, n_head_log2, m0, m1);
|
||||||
|
const half slopeh = __float2half(slopef);
|
||||||
|
|
||||||
|
static_assert(D % (2*WARP_SIZE) == 0, "D not divisible by 2*WARP_SIZE == 64.");
|
||||||
|
|
||||||
|
__shared__ half KQ[ncols*FATTN_KQ_STRIDE_TILE_F16];
|
||||||
|
half2 * KQ2 = (half2 *) KQ;
|
||||||
|
|
||||||
|
__shared__ half2 KV_tmp[FATTN_KQ_STRIDE_TILE_F16][D/2 + 1]; // Pad D to avoid memory bank conflicts.
|
||||||
|
|
||||||
|
half kqmax[ncols/nwarps];
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
kqmax[j0/nwarps] = -HALF_MAX_HALF;
|
||||||
|
}
|
||||||
|
half2 kqsum[ncols/nwarps] = {{0.0f, 0.0f}};
|
||||||
|
|
||||||
|
half2 VKQ[ncols/nwarps][(D/2)/WARP_SIZE] = {{{0.0f, 0.0f}}};
|
||||||
|
|
||||||
|
// Convert Q to half2 and store in registers:
|
||||||
|
__shared__ half2 Q_h2[ncols][D/2];
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
const int j = j0 + threadIdx.y;
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/2; i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
|
||||||
|
const float2 tmp = ic0 + j < ne01 ? Q_f2[j*(nb01/sizeof(float2)) + i] : make_float2(0.0f, 0.0f);
|
||||||
|
Q_h2[j][i] = make_half2(scale, scale) * make_half2(tmp.x, tmp.y);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
const int k_start = parallel_blocks == 1 ? 0 : ip*FATTN_KQ_STRIDE_TILE_F16;
|
||||||
|
for (int k_VKQ_0 = k_start; k_VKQ_0 < ne11; k_VKQ_0 += parallel_blocks*FATTN_KQ_STRIDE_TILE_F16) {
|
||||||
|
// Calculate KQ tile and keep track of new maximum KQ values:
|
||||||
|
|
||||||
|
half kqmax_new[ncols/nwarps];
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols/nwarps; ++j) {
|
||||||
|
kqmax_new[j] = kqmax[j];
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i_KQ_0 = 0; i_KQ_0 < FATTN_KQ_STRIDE_TILE_F16; i_KQ_0 += nwarps) {
|
||||||
|
const int i_KQ = i_KQ_0 + threadIdx.y;
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k_KQ_0 = 0; k_KQ_0 < D/2; k_KQ_0 += WARP_SIZE) {
|
||||||
|
const int k_KQ = k_KQ_0 + threadIdx.x;
|
||||||
|
|
||||||
|
KV_tmp[i_KQ][k_KQ] = K_h2[(k_VKQ_0 + i_KQ)*stride_KV2 + k_KQ];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
half2 sum2[FATTN_KQ_STRIDE_TILE_F16/WARP_SIZE][ncols/nwarps] = {{{0.0f, 0.0f}}};
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k_KQ = 0; k_KQ < D/2; ++k_KQ) {
|
||||||
|
half2 K_k[FATTN_KQ_STRIDE_TILE_F16/WARP_SIZE];
|
||||||
|
half2 Q_k[ncols/nwarps];
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i_KQ_0 = 0; i_KQ_0 < FATTN_KQ_STRIDE_TILE_F16; i_KQ_0 += WARP_SIZE) {
|
||||||
|
const int i_KQ = i_KQ_0 + threadIdx.x;
|
||||||
|
|
||||||
|
K_k[i_KQ_0/WARP_SIZE] = KV_tmp[i_KQ][k_KQ];
|
||||||
|
}
|
||||||
|
#pragma unroll
|
||||||
|
for (int j_KQ_0 = 0; j_KQ_0 < ncols; j_KQ_0 += nwarps) {
|
||||||
|
const int j_KQ = j_KQ_0 + threadIdx.y;
|
||||||
|
|
||||||
|
Q_k[j_KQ_0/nwarps] = Q_h2[j_KQ][k_KQ];
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i_KQ_0 = 0; i_KQ_0 < FATTN_KQ_STRIDE_TILE_F16; i_KQ_0 += WARP_SIZE) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int j_KQ_0 = 0; j_KQ_0 < ncols; j_KQ_0 += nwarps) {
|
||||||
|
sum2[i_KQ_0/WARP_SIZE][j_KQ_0/nwarps] += K_k[i_KQ_0/WARP_SIZE]*Q_k[j_KQ_0/nwarps];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i_KQ_0 = 0; i_KQ_0 < FATTN_KQ_STRIDE_TILE_F16; i_KQ_0 += WARP_SIZE) {
|
||||||
|
const int i_KQ = i_KQ_0 + threadIdx.x;
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j_KQ_0 = 0; j_KQ_0 < ncols; j_KQ_0 += nwarps) {
|
||||||
|
const int j_KQ = j_KQ_0 + threadIdx.y;
|
||||||
|
|
||||||
|
half sum;
|
||||||
|
if (use_logit_softcap) {
|
||||||
|
const float2 tmp = __half22float2(sum2[i_KQ_0/WARP_SIZE][j_KQ_0/nwarps]);
|
||||||
|
sum = logit_softcap * tanhf(tmp.x + tmp.y);
|
||||||
|
} else {
|
||||||
|
sum = __low2half(sum2[i_KQ_0/WARP_SIZE][j_KQ_0/nwarps]) + __high2half(sum2[i_KQ_0/WARP_SIZE][j_KQ_0/nwarps]);
|
||||||
|
}
|
||||||
|
sum += mask ? slopeh*maskh[j_KQ*ne11 + k_VKQ_0 + i_KQ] : __float2half(0.0f);
|
||||||
|
|
||||||
|
kqmax_new[j_KQ_0/nwarps] = ggml_cuda_hmax(kqmax_new[j_KQ_0/nwarps], sum);
|
||||||
|
|
||||||
|
KQ[j_KQ*FATTN_KQ_STRIDE_TILE_F16 + i_KQ] = sum;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
const int j = j0 + threadIdx.y;
|
||||||
|
|
||||||
|
kqmax_new[j0/nwarps] = warp_reduce_max(kqmax_new[j0/nwarps]);
|
||||||
|
const half2 KQ_max_scale = __half2half2(hexp(kqmax[j0/nwarps] - kqmax_new[j0/nwarps]));
|
||||||
|
kqmax[j0/nwarps] = kqmax_new[j0/nwarps];
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < FATTN_KQ_STRIDE_TILE_F16/2; i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
|
||||||
|
const half2 diff = KQ2[j*(FATTN_KQ_STRIDE_TILE_F16/2) + i] - __half2half2(kqmax[j0/nwarps]);
|
||||||
|
const half2 val = h2exp(diff);
|
||||||
|
kqsum[j0/nwarps] = kqsum[j0/nwarps]*KQ_max_scale + val;
|
||||||
|
KQ2[j*(FATTN_KQ_STRIDE_TILE_F16/2) + i] = val;
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/2; i0 += WARP_SIZE) {
|
||||||
|
VKQ[j0/nwarps][i0/WARP_SIZE] *= KQ_max_scale;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k0 = 0; k0 < FATTN_KQ_STRIDE_TILE_F16; k0 += nwarps) {
|
||||||
|
const int k = k0 + threadIdx.y;
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/2; i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
|
||||||
|
KV_tmp[k][i] = V_h2[(k_VKQ_0 + k)*stride_KV2 + i];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k0 = 0; k0 < FATTN_KQ_STRIDE_TILE_F16; k0 += 2) {
|
||||||
|
half2 V_k[(D/2)/WARP_SIZE][2];
|
||||||
|
half2 KQ_k[ncols/nwarps];
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/2; i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
|
||||||
|
V_k[i0/WARP_SIZE][0] = KV_tmp[k0 + 0][i];
|
||||||
|
V_k[i0/WARP_SIZE][1] = KV_tmp[k0 + 1][i];
|
||||||
|
}
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
const int j = j0 + threadIdx.y;
|
||||||
|
|
||||||
|
KQ_k[j0/nwarps] = KQ2[j*(FATTN_KQ_STRIDE_TILE_F16/2) + k0/2];
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/2; i0 += WARP_SIZE) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
VKQ[j0/nwarps][i0/WARP_SIZE] += V_k[i0/WARP_SIZE][0]* __low2half2(KQ_k[j0/nwarps]);
|
||||||
|
VKQ[j0/nwarps][i0/WARP_SIZE] += V_k[i0/WARP_SIZE][1]*__high2half2(KQ_k[j0/nwarps]);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j_VKQ_0 = 0; j_VKQ_0 < ncols; j_VKQ_0 += nwarps) {
|
||||||
|
const int j_VKQ = j_VKQ_0 + threadIdx.y;
|
||||||
|
|
||||||
|
if (ic0 + j_VKQ >= ne01) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
half kqsum_j = __low2half(kqsum[j_VKQ_0/nwarps]) + __high2half(kqsum[j_VKQ_0/nwarps]);
|
||||||
|
kqsum_j = warp_reduce_sum(kqsum_j);
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i00 = 0; i00 < D; i00 += 2*WARP_SIZE) {
|
||||||
|
const int i0 = i00 + 2*threadIdx.x;
|
||||||
|
|
||||||
|
half2 dst_val = VKQ[j_VKQ_0/nwarps][i0/(2*WARP_SIZE)];
|
||||||
|
if (parallel_blocks == 1) {
|
||||||
|
dst_val /= __half2half2(kqsum_j);
|
||||||
|
}
|
||||||
|
const int j_dst = (ic0 + j_VKQ)*parallel_blocks + ip;
|
||||||
|
dst[j_dst*D*gridDim.y + D*blockIdx.y + i0 + 0] = __low2float(dst_val);
|
||||||
|
dst[j_dst*D*gridDim.y + D*blockIdx.y + i0 + 1] = __high2float(dst_val);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (parallel_blocks != 1 && threadIdx.x == 0) {
|
||||||
|
dst_meta[(ic0 + j_VKQ)*gridDim.y*parallel_blocks + blockIdx.y*parallel_blocks + ip] = make_float2(kqmax[j_VKQ_0/nwarps], kqsum_j);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
#else
|
||||||
|
NO_DEVICE_CODE;
|
||||||
|
#endif // FP16_AVAILABLE
|
||||||
|
}

template <int cols_per_block, int parallel_blocks, bool use_logit_softcap>
void launch_fattn_tile_f16_64_128(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * Q = dst->src[0];
    switch (Q->ne[0]) {
        case 64: {
            constexpr int D      = 64;
            constexpr int nwarps = 8;
            fattn_kernel_t fattn_kernel = flash_attn_tile_ext_f16<D, cols_per_block, nwarps, parallel_blocks, use_logit_softcap>;
            launch_fattn<D, parallel_blocks>(ctx, dst, fattn_kernel, nwarps, cols_per_block, true, true);
        } break;
        case 128: {
            constexpr int D      = 128;
            constexpr int nwarps = 8;
            fattn_kernel_t fattn_kernel = flash_attn_tile_ext_f16<D, cols_per_block, nwarps, parallel_blocks, use_logit_softcap>;
            launch_fattn<D, parallel_blocks>(ctx, dst, fattn_kernel, nwarps, cols_per_block, true, true);
        } break;
        default: {
            GGML_ABORT("FlashAttention without tensor cores only supports head sizes 64 and 128.");
        } break;
    }
}

void ggml_cuda_flash_attn_ext_tile_f16(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * KQV = dst;
    const ggml_tensor * Q   = dst->src[0];

    const int32_t precision = KQV->op_params[3];
    GGML_ASSERT(precision == GGML_PREC_DEFAULT);

    float logit_softcap;
    memcpy(&logit_softcap, (const float *) KQV->op_params + 2, sizeof(float));

    if (Q->ne[1] <= 16) {
        constexpr int cols_per_block = 16;
        constexpr int parallel_blocks = 4;
        if (logit_softcap == 0.0f) {
            constexpr bool use_logit_softcap = false;
            launch_fattn_tile_f16_64_128<cols_per_block, parallel_blocks, use_logit_softcap>(ctx, dst);
        } else {
            constexpr bool use_logit_softcap = true;
            launch_fattn_tile_f16_64_128<cols_per_block, parallel_blocks, use_logit_softcap>(ctx, dst);
        }
        return;
    }

    if (Q->ne[1] <= 32) {
        constexpr int cols_per_block = 32;
        constexpr int parallel_blocks = 4;
        if (logit_softcap == 0.0f) {
            constexpr bool use_logit_softcap = false;
            launch_fattn_tile_f16_64_128<cols_per_block, parallel_blocks, use_logit_softcap>(ctx, dst);
        } else {
            constexpr bool use_logit_softcap = true;
            launch_fattn_tile_f16_64_128<cols_per_block, parallel_blocks, use_logit_softcap>(ctx, dst);
        }
        return;
    }

    constexpr int cols_per_block = 32;
    constexpr int parallel_blocks = 1;
    if (logit_softcap == 0.0f) {
        constexpr bool use_logit_softcap = false;
        launch_fattn_tile_f16_64_128<cols_per_block, parallel_blocks, use_logit_softcap>(ctx, dst);
    } else {
        constexpr bool use_logit_softcap = true;
        launch_fattn_tile_f16_64_128<cols_per_block, parallel_blocks, use_logit_softcap>(ctx, dst);
    }
}
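
For readers new to these kernels: the kqmax/kqsum/VKQ bookkeeping in flash_attn_tile_ext_f16 above is a streaming ("online") softmax. Whenever a K/V tile raises the running maximum, the accumulators are rescaled by exp(old_max - new_max), and the division by the softmax denominator happens only once at the end (or is deferred via dst_meta when parallel_blocks > 1). The standalone C++ sketch below is illustrative only and not part of the vendored llama.cpp sources; it shows the same update rule on scalars.

// Illustrative only: scalar model of the running-max softmax update used by the
// tile kernel above (kqmax -> running max, kqsum -> denominator, VKQ -> output).
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Toy attention row: scores s_k and values v_k arrive tile by tile.
    const std::vector<float> scores = {0.5f, 2.0f, -1.0f, 3.0f};
    const std::vector<float> values = {1.0f, 2.0f,  3.0f, 4.0f};

    float running_max = -1e30f; // kqmax
    float running_sum = 0.0f;   // kqsum
    float running_vkq = 0.0f;   // VKQ (unnormalized output)

    for (size_t k = 0; k < scores.size(); ++k) {
        const float new_max = fmaxf(running_max, scores[k]);
        const float scale   = expf(running_max - new_max); // KQ_max_scale in the kernel
        const float p       = expf(scores[k] - new_max);

        running_sum = running_sum*scale + p;
        running_vkq = running_vkq*scale + p*values[k];
        running_max = new_max;
    }

    // Normalization happens once at the end (dst_val /= kqsum_j in the kernel).
    printf("attention output = %f\n", running_vkq / running_sum);
    return 0;
}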
llama/ggml-cuda/fattn-tile-f16.cuh (new file)
@@ -0,0 +1,29 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License
 *
 * Copyright (c) 2023-2024 The ggml authors
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include "common.cuh"

void ggml_cuda_flash_attn_ext_tile_f16(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
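
When parallel_blocks > 1, each block of these tile kernels handles a strided slice of the K/V sequence and writes an unnormalized partial output to dst together with its (kqmax, kqsum) pair in dst_meta; a separate pass (in fattn-common.cuh, not shown in this section of the diff) merges those partials. The scalar sketch below is only a schematic of how two such partials can be combined, not the actual combine kernel.

// Illustrative only: merging two unnormalized attention partials of the form
// (vkq, max, sum), as produced per block when parallel_blocks > 1.
#include <cmath>
#include <cstdio>

struct Partial {
    float vkq; // unnormalized sum of exp(score - max) * v over this block's K/V slice
    float max; // maximum score seen by this block
    float sum; // sum of exp(score - max) over this block's K/V slice
};

int main() {
    const Partial a = {4.0f, 3.0f, 1.5f};
    const Partial b = {1.0f, 1.0f, 2.0f};

    // Rescale both partials to a common maximum before adding them.
    const float m  = fmaxf(a.max, b.max);
    const float wa = expf(a.max - m);
    const float wb = expf(b.max - m);

    const float vkq = a.vkq*wa + b.vkq*wb;
    const float sum = a.sum*wa + b.sum*wb;

    printf("combined attention output = %f\n", vkq / sum);
    return 0;
}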
llama/ggml-cuda/fattn-tile-f32.cu (new file)
@@ -0,0 +1,371 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License
 *
 * Copyright (c) 2023-2024 The ggml authors
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include "common.cuh"
#include "fattn-common.cuh"
#include "fattn-tile-f32.cuh"

#define FATTN_KQ_STRIDE_TILE_F32 32
|
||||||
|
|
||||||
|
template<int D, int ncols, int nwarps, int parallel_blocks, bool use_logit_softcap> // D == head size
|
||||||
|
#if !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__))
|
||||||
|
__launch_bounds__(nwarps*WARP_SIZE, 1)
|
||||||
|
#endif // !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__))
|
||||||
|
static __global__ void flash_attn_tile_ext_f32(
|
||||||
|
const char * __restrict__ Q,
|
||||||
|
const char * __restrict__ K,
|
||||||
|
const char * __restrict__ V,
|
||||||
|
const char * __restrict__ mask,
|
||||||
|
float * __restrict__ dst,
|
||||||
|
float2 * __restrict__ dst_meta,
|
||||||
|
const float scale,
|
||||||
|
const float max_bias,
|
||||||
|
const float m0,
|
||||||
|
const float m1,
|
||||||
|
const uint32_t n_head_log2,
|
||||||
|
const float logit_softcap,
|
||||||
|
const int ne00,
|
||||||
|
const int ne01,
|
||||||
|
const int ne02,
|
||||||
|
const int ne03,
|
||||||
|
const int ne10,
|
||||||
|
const int ne11,
|
||||||
|
const int ne12,
|
||||||
|
const int ne13,
|
||||||
|
const int ne31,
|
||||||
|
const int nb31,
|
||||||
|
const int nb01,
|
||||||
|
const int nb02,
|
||||||
|
const int nb03,
|
||||||
|
const int nb11,
|
||||||
|
const int nb12,
|
||||||
|
const int nb13,
|
||||||
|
const int nb21,
|
||||||
|
const int nb22,
|
||||||
|
const int nb23,
|
||||||
|
const int ne0,
|
||||||
|
const int ne1,
|
||||||
|
const int ne2,
|
||||||
|
const int ne3) {
|
||||||
|
// Skip unused kernel variants for faster compilation:
|
||||||
|
if (use_logit_softcap && !(D == 128 || D == 256)) {
|
||||||
|
NO_DEVICE_CODE;
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
//In this kernel Q, K, V are matrices while i, j, k are matrix indices.
|
||||||
|
|
||||||
|
const int ic0 = (blockIdx.x / parallel_blocks) * ncols; // Index of the Q/QKV column to work on.
|
||||||
|
const int ip = blockIdx.x % parallel_blocks; // Index in group of blocks running for the same column in parallel.
|
||||||
|
|
||||||
|
const int gqa_ratio = ne02 / ne12; // With grouped query attention there are > 1 Q matrices per K, V matrix.
|
||||||
|
const float2 * Q_f2 = (const float2 *) (Q + nb02* blockIdx.y + nb01*ic0);
|
||||||
|
const half2 * K_h2 = (const half2 *) (K + nb12*(blockIdx.y / gqa_ratio));
|
||||||
|
const half2 * V_h2 = (const half2 *) (V + nb12*(blockIdx.y / gqa_ratio)); // K and V have same shape
|
||||||
|
const half * maskh = (const half *) mask + ne11*ic0;
|
||||||
|
|
||||||
|
const int stride_KV2 = nb11 / sizeof(half2);
|
||||||
|
|
||||||
|
const float slope = get_alibi_slope(max_bias, blockIdx.y, n_head_log2, m0, m1);
|
||||||
|
|
||||||
|
static_assert(D % (2*WARP_SIZE) == 0, "D not divisible by 2*WARP_SIZE == 64.");
|
||||||
|
|
||||||
|
__shared__ float KQ[ncols*FATTN_KQ_STRIDE_TILE_F32];
|
||||||
|
|
||||||
|
__shared__ float KV_tmp[FATTN_KQ_STRIDE_TILE_F32][D + 1]; // Pad D to avoid memory bank conflicts.
|
||||||
|
float2 * KV_tmp2 = (float2 *) KV_tmp;
|
||||||
|
|
||||||
|
float kqmax[ncols/nwarps];
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
kqmax[j0/nwarps] = -FLT_MAX/2.0f;
|
||||||
|
}
|
||||||
|
float kqsum[ncols/nwarps] = {0.0f};
|
||||||
|
|
||||||
|
float2 VKQ[ncols/nwarps][(D/2)/WARP_SIZE] = {{{0.0f, 0.0f}}};
|
||||||
|
|
||||||
|
// Convert Q to half2 and store in registers:
|
||||||
|
__shared__ float Q_f[ncols][D];
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
const int j = j0 + threadIdx.y;
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D; i0 += 2*WARP_SIZE) {
|
||||||
|
float2 tmp = ic0 + j < ne01 ? Q_f2[j*(nb01/sizeof(float2)) + i0/2 + threadIdx.x] : make_float2(0.0f, 0.0f);
|
||||||
|
Q_f[j][i0 + 0*WARP_SIZE + threadIdx.x] = tmp.x * scale;
|
||||||
|
Q_f[j][i0 + 1*WARP_SIZE + threadIdx.x] = tmp.y * scale;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
const int k_start = parallel_blocks == 1 ? 0 : ip*FATTN_KQ_STRIDE_TILE_F32;
|
||||||
|
for (int k_VKQ_0 = k_start; k_VKQ_0 < ne11; k_VKQ_0 += parallel_blocks*FATTN_KQ_STRIDE_TILE_F32) {
|
||||||
|
// Calculate KQ tile and keep track of new maximum KQ values:
|
||||||
|
|
||||||
|
float kqmax_new[ncols/nwarps];
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols/nwarps; ++j) {
|
||||||
|
kqmax_new[j] = kqmax[j];
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i_KQ_0 = 0; i_KQ_0 < FATTN_KQ_STRIDE_TILE_F32; i_KQ_0 += nwarps) {
|
||||||
|
const int i_KQ = i_KQ_0 + threadIdx.y;
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k_KQ_0 = 0; k_KQ_0 < D; k_KQ_0 += 2*WARP_SIZE) {
|
||||||
|
const half2 tmp = K_h2[(k_VKQ_0 + i_KQ)*stride_KV2 + k_KQ_0/2 + threadIdx.x];
|
||||||
|
KV_tmp[i_KQ][k_KQ_0 + 0*WARP_SIZE + threadIdx.x] = __low2float(tmp);
|
||||||
|
KV_tmp[i_KQ][k_KQ_0 + 1*WARP_SIZE + threadIdx.x] = __high2float(tmp);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
float sum[FATTN_KQ_STRIDE_TILE_F32/WARP_SIZE][ncols/nwarps] = {{0.0f}};
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k_KQ = 0; k_KQ < D; ++k_KQ) {
|
||||||
|
float K_k[FATTN_KQ_STRIDE_TILE_F32/WARP_SIZE];
|
||||||
|
float Q_k[ncols/nwarps];
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i_KQ_0 = 0; i_KQ_0 < FATTN_KQ_STRIDE_TILE_F32; i_KQ_0 += WARP_SIZE) {
|
||||||
|
const int i_KQ = i_KQ_0 + threadIdx.x;
|
||||||
|
|
||||||
|
K_k[i_KQ_0/WARP_SIZE] = KV_tmp[i_KQ][k_KQ];
|
||||||
|
}
|
||||||
|
#pragma unroll
|
||||||
|
for (int j_KQ_0 = 0; j_KQ_0 < ncols; j_KQ_0 += nwarps) {
|
||||||
|
const int j_KQ = j_KQ_0 + threadIdx.y;
|
||||||
|
|
||||||
|
Q_k[j_KQ_0/nwarps] = Q_f[j_KQ][k_KQ];
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i_KQ_0 = 0; i_KQ_0 < FATTN_KQ_STRIDE_TILE_F32; i_KQ_0 += WARP_SIZE) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int j_KQ_0 = 0; j_KQ_0 < ncols; j_KQ_0 += nwarps) {
|
||||||
|
sum[i_KQ_0/WARP_SIZE][j_KQ_0/nwarps] += K_k[i_KQ_0/WARP_SIZE] * Q_k[j_KQ_0/nwarps];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i_KQ_0 = 0; i_KQ_0 < FATTN_KQ_STRIDE_TILE_F32; i_KQ_0 += WARP_SIZE) {
|
||||||
|
const int i_KQ = i_KQ_0 + threadIdx.x;
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j_KQ_0 = 0; j_KQ_0 < ncols; j_KQ_0 += nwarps) {
|
||||||
|
const int j_KQ = j_KQ_0 + threadIdx.y;
|
||||||
|
|
||||||
|
if (use_logit_softcap) {
|
||||||
|
sum[i_KQ_0/WARP_SIZE][j_KQ_0/nwarps] = logit_softcap * tanhf(sum[i_KQ_0/WARP_SIZE][j_KQ_0/nwarps]);
|
||||||
|
}
|
||||||
|
|
||||||
|
sum[i_KQ_0/WARP_SIZE][j_KQ_0/nwarps] += mask ? slope*__half2float(maskh[j_KQ*ne11 + k_VKQ_0 + i_KQ]) : 0.0f;
|
||||||
|
|
||||||
|
kqmax_new[j_KQ_0/nwarps] = fmaxf(kqmax_new[j_KQ_0/nwarps], sum[i_KQ_0/WARP_SIZE][j_KQ_0/nwarps]);
|
||||||
|
|
||||||
|
KQ[j_KQ*FATTN_KQ_STRIDE_TILE_F32 + i_KQ] = sum[i_KQ_0/WARP_SIZE][j_KQ_0/nwarps];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
const int j = j0 + threadIdx.y;
|
||||||
|
|
||||||
|
kqmax_new[j0/nwarps] = warp_reduce_max(kqmax_new[j0/nwarps]);
|
||||||
|
const float KQ_max_scale = expf(kqmax[j0/nwarps] - kqmax_new[j0/nwarps]);
|
||||||
|
kqmax[j0/nwarps] = kqmax_new[j0/nwarps];
|
||||||
|
|
||||||
|
float kqsum_add = 0.0f;
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < FATTN_KQ_STRIDE_TILE_F32; i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
|
||||||
|
const float diff = KQ[j*FATTN_KQ_STRIDE_TILE_F32 + i] - kqmax[j0/nwarps];
|
||||||
|
const float val = expf(diff);
|
||||||
|
kqsum_add += val;
|
||||||
|
KQ[j*FATTN_KQ_STRIDE_TILE_F32 + i] = val;
|
||||||
|
}
|
||||||
|
kqsum[j0/nwarps] = kqsum[j0/nwarps]*KQ_max_scale + kqsum_add;
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/2; i0 += WARP_SIZE) {
|
||||||
|
VKQ[j0/nwarps][i0/WARP_SIZE].x *= KQ_max_scale;
|
||||||
|
VKQ[j0/nwarps][i0/WARP_SIZE].y *= KQ_max_scale;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k0 = 0; k0 < FATTN_KQ_STRIDE_TILE_F32; k0 += nwarps) {
|
||||||
|
const int k = k0 + threadIdx.y;
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/2; i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
|
||||||
|
KV_tmp2[k*(D/2) + i].x = __low2float(V_h2[(k_VKQ_0 + k)*stride_KV2 + i]);
|
||||||
|
KV_tmp2[k*(D/2) + i].y = __high2float(V_h2[(k_VKQ_0 + k)*stride_KV2 + i]);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k = 0; k < FATTN_KQ_STRIDE_TILE_F32; ++k) {
|
||||||
|
float2 V_k[(D/2)/WARP_SIZE];
|
||||||
|
float KQ_k[ncols/nwarps];
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/2; i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
|
||||||
|
V_k[i0/WARP_SIZE] = KV_tmp2[k*(D/2) + i];
|
||||||
|
}
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
const int j = j0 + threadIdx.y;
|
||||||
|
|
||||||
|
KQ_k[j0/nwarps] = KQ[j*FATTN_KQ_STRIDE_TILE_F32 + k];
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/2; i0 += WARP_SIZE) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
VKQ[j0/nwarps][i0/WARP_SIZE].x += V_k[i0/WARP_SIZE].x*KQ_k[j0/nwarps];
|
||||||
|
VKQ[j0/nwarps][i0/WARP_SIZE].y += V_k[i0/WARP_SIZE].y*KQ_k[j0/nwarps];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j_VKQ_0 = 0; j_VKQ_0 < ncols; j_VKQ_0 += nwarps) {
|
||||||
|
const int j_VKQ = j_VKQ_0 + threadIdx.y;
|
||||||
|
|
||||||
|
if (ic0 + j_VKQ >= ne01) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
float kqsum_j = kqsum[j_VKQ_0/nwarps];
|
||||||
|
kqsum_j = warp_reduce_sum(kqsum_j);
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i00 = 0; i00 < D; i00 += 2*WARP_SIZE) {
|
||||||
|
const int i0 = i00 + 2*threadIdx.x;
|
||||||
|
|
||||||
|
float2 dst_val = VKQ[j_VKQ_0/nwarps][i0/(2*WARP_SIZE)];
|
||||||
|
if (parallel_blocks == 1) {
|
||||||
|
dst_val.x /= kqsum_j;
|
||||||
|
dst_val.y /= kqsum_j;
|
||||||
|
}
|
||||||
|
const int j_dst = (ic0 + j_VKQ)*parallel_blocks + ip;
|
||||||
|
dst[j_dst*D*gridDim.y + D*blockIdx.y + i0 + 0] = dst_val.x;
|
||||||
|
dst[j_dst*D*gridDim.y + D*blockIdx.y + i0 + 1] = dst_val.y;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (parallel_blocks != 1 && threadIdx.x == 0) {
|
||||||
|
dst_meta[(ic0 + j_VKQ)*gridDim.y*parallel_blocks + blockIdx.y*parallel_blocks + ip] = make_float2(kqmax[j_VKQ_0/nwarps], kqsum_j);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
template <int cols_per_block, int parallel_blocks, bool use_logit_softcap>
|
||||||
|
void launch_fattn_tile_f32_64_128(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
const ggml_tensor * Q = dst->src[0];
|
||||||
|
switch (Q->ne[0]) {
|
||||||
|
case 64: {
|
||||||
|
constexpr int D = 64;
|
||||||
|
constexpr int nwarps = 8;
|
||||||
|
fattn_kernel_t fattn_kernel = flash_attn_tile_ext_f32<D, cols_per_block, nwarps, parallel_blocks, use_logit_softcap>;
|
||||||
|
launch_fattn<D, parallel_blocks>(ctx, dst, fattn_kernel, nwarps, cols_per_block, true, true);
|
||||||
|
} break;
|
||||||
|
case 128: {
|
||||||
|
constexpr int D = 128;
|
||||||
|
constexpr int nwarps = 8;
|
||||||
|
fattn_kernel_t fattn_kernel = flash_attn_tile_ext_f32<D, cols_per_block, nwarps, parallel_blocks, use_logit_softcap>;
|
||||||
|
launch_fattn<D, parallel_blocks>(ctx, dst, fattn_kernel, nwarps, cols_per_block, true, true);
|
||||||
|
} break;
|
||||||
|
default: {
|
||||||
|
GGML_ABORT("FlashAttention without tensor cores only supports head sizes 64 and 128.");
|
||||||
|
} break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
void ggml_cuda_flash_attn_ext_tile_f32(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
const ggml_tensor * KQV = dst;
|
||||||
|
const ggml_tensor * Q = dst->src[0];
|
||||||
|
|
||||||
|
float logit_softcap;
|
||||||
|
memcpy(&logit_softcap, (const float *) KQV->op_params + 2, sizeof(float));
|
||||||
|
|
||||||
|
if (Q->ne[1] <= 16) {
|
||||||
|
constexpr int cols_per_block = 16;
|
||||||
|
constexpr int parallel_blocks = 4;
|
||||||
|
if (logit_softcap == 0.0f) {
|
||||||
|
constexpr bool use_logit_softcap = false;
|
||||||
|
launch_fattn_tile_f32_64_128<cols_per_block, parallel_blocks, use_logit_softcap>(ctx, dst);
|
||||||
|
} else {
|
||||||
|
constexpr bool use_logit_softcap = true;
|
||||||
|
launch_fattn_tile_f32_64_128<cols_per_block, parallel_blocks, use_logit_softcap>(ctx, dst);
|
||||||
|
}
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (Q->ne[1] <= 32) {
|
||||||
|
constexpr int cols_per_block = 32;
|
||||||
|
constexpr int parallel_blocks = 4;
|
||||||
|
if (logit_softcap == 0.0f) {
|
||||||
|
constexpr bool use_logit_softcap = false;
|
||||||
|
launch_fattn_tile_f32_64_128<cols_per_block, parallel_blocks, use_logit_softcap>(ctx, dst);
|
||||||
|
} else {
|
||||||
|
constexpr bool use_logit_softcap = true;
|
||||||
|
launch_fattn_tile_f32_64_128<cols_per_block, parallel_blocks, use_logit_softcap>(ctx, dst);
|
||||||
|
}
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
constexpr int cols_per_block = 32;
|
||||||
|
constexpr int parallel_blocks = 1;
|
||||||
|
if (logit_softcap == 0.0f) {
|
||||||
|
constexpr bool use_logit_softcap = false;
|
||||||
|
launch_fattn_tile_f32_64_128<cols_per_block, parallel_blocks, use_logit_softcap>(ctx, dst);
|
||||||
|
} else {
|
||||||
|
constexpr bool use_logit_softcap = true;
|
||||||
|
launch_fattn_tile_f32_64_128<cols_per_block, parallel_blocks, use_logit_softcap>(ctx, dst);
|
||||||
|
}
|
||||||
|
}
llama/ggml-cuda/fattn-tile-f32.cuh (new file)
@@ -0,0 +1,29 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License
 *
 * Copyright (c) 2023-2024 The ggml authors
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include "common.cuh"

void ggml_cuda_flash_attn_ext_tile_f32(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
llama/ggml-cuda/fattn-vec-f16.cuh (new file)
@@ -0,0 +1,468 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License
 *
 * Copyright (c) 2023-2024 The ggml authors
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include "common.cuh"
#include "fattn-common.cuh"
|
||||||
|
|
||||||
|
template<int D, int ncols, int parallel_blocks, ggml_type type_K, ggml_type type_V, bool use_logit_softcap> // D == head size
|
||||||
|
#if !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__))
|
||||||
|
__launch_bounds__(D, 1)
|
||||||
|
#endif // !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__))
|
||||||
|
static __global__ void flash_attn_vec_ext_f16(
|
||||||
|
const char * __restrict__ Q,
|
||||||
|
const char * __restrict__ K,
|
||||||
|
const char * __restrict__ V,
|
||||||
|
const char * __restrict__ mask,
|
||||||
|
float * __restrict__ dst,
|
||||||
|
float2 * __restrict__ dst_meta,
|
||||||
|
const float scale,
|
||||||
|
const float max_bias,
|
||||||
|
const float m0,
|
||||||
|
const float m1,
|
||||||
|
const uint32_t n_head_log2,
|
||||||
|
const float logit_softcap,
|
||||||
|
const int ne00,
|
||||||
|
const int ne01,
|
||||||
|
const int ne02,
|
||||||
|
const int ne03,
|
||||||
|
const int ne10,
|
||||||
|
const int ne11,
|
||||||
|
const int ne12,
|
||||||
|
const int ne13,
|
||||||
|
const int ne31,
|
||||||
|
const int nb31,
|
||||||
|
const int nb01,
|
||||||
|
const int nb02,
|
||||||
|
const int nb03,
|
||||||
|
const int nb11,
|
||||||
|
const int nb12,
|
||||||
|
const int nb13,
|
||||||
|
const int nb21,
|
||||||
|
const int nb22,
|
||||||
|
const int nb23,
|
||||||
|
const int ne0,
|
||||||
|
const int ne1,
|
||||||
|
const int ne2,
|
||||||
|
const int ne3) {
|
||||||
|
#ifdef FP16_AVAILABLE
|
||||||
|
// Skip unused kernel variants for faster compilation:
|
||||||
|
if (use_logit_softcap && !(D == 128 || D == 256)) {
|
||||||
|
NO_DEVICE_CODE;
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
//In this kernel Q, K, V are matrices while i, j, k are matrix indices.
|
||||||
|
|
||||||
|
constexpr vec_dot_KQ_f16_t vec_dot_KQ = get_vec_dot_KQ_f16<D>(type_K);
|
||||||
|
constexpr bool Q_q8_1 = type_K != GGML_TYPE_F16;
|
||||||
|
constexpr dequantize_1_f16_t dequantize_1_v = get_dequantize_1_f16(type_V);
|
||||||
|
|
||||||
|
const int ic0 = (blockIdx.x / parallel_blocks) * ncols; // Index of the Q/QKV column to work on.
|
||||||
|
const int ip = blockIdx.x % parallel_blocks; // Index in group of blocks running for the same column in parallel.
|
||||||
|
|
||||||
|
const int gqa_ratio = ne02 / ne12; // With grouped query attention there are > 1 Q matrices per K, V matrix.
|
||||||
|
Q += nb02* blockIdx.y + nb01*ic0;
|
||||||
|
K += nb12*(blockIdx.y / gqa_ratio);
|
||||||
|
V += nb22*(blockIdx.y / gqa_ratio);
|
||||||
|
|
||||||
|
const half * maskh = (const half *) mask + ne11*ic0;
|
||||||
|
|
||||||
|
const float slopef = get_alibi_slope(max_bias, blockIdx.y, n_head_log2, m0, m1);
|
||||||
|
const half slopeh = __float2half(slopef);
|
||||||
|
|
||||||
|
static_assert(D % (2*WARP_SIZE) == 0, "D not divisible by 2*WARP_SIZE == 64.");
|
||||||
|
constexpr int nwarps = D / WARP_SIZE;
|
||||||
|
const int tid = WARP_SIZE*threadIdx.y + threadIdx.x;
|
||||||
|
__builtin_assume(tid < D);
|
||||||
|
|
||||||
|
__shared__ half KQ[ncols*D];
|
||||||
|
half2 * KQ2 = (half2 *) KQ;
|
||||||
|
|
||||||
|
half kqmax[ncols];
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
kqmax[j] = -HALF_MAX_HALF;
|
||||||
|
}
|
||||||
|
half kqsum[ncols] = {0.0f};
|
||||||
|
|
||||||
|
__shared__ half kqmax_shared[ncols][WARP_SIZE];
|
||||||
|
__shared__ half kqsum_shared[ncols][WARP_SIZE];
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
if (threadIdx.y == 0) {
|
||||||
|
kqmax_shared[j][threadIdx.x] = -HALF_MAX_HALF;
|
||||||
|
kqsum_shared[j][threadIdx.x] = 0.0f;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
// Convert Q to half2 (f16 K) or q8_1 (quantized K) and store in registers:
|
||||||
|
half2 Q_h2[ncols][D/(2*WARP_SIZE)];
|
||||||
|
int Q_i32[ncols][D/(sizeof(int)*QK8_1) == 0 ? 1 : D/(sizeof(int)*QK8_1)];
|
||||||
|
half2 Q_ds[ncols][D/QK8_1 == 0 ? 1 : D/QK8_1];
|
||||||
|
if (Q_q8_1) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
const int j = j0 + threadIdx.y;
|
||||||
|
|
||||||
|
if (j0 + nwarps > ncols && j >= ncols) {
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Reuse KQ as temporary storage for converting Q to q8_1:
|
||||||
|
int * tmp_q_i32 = (int *) &KQ[j*D];
|
||||||
|
half2 * tmp_q_ds = (half2 *) (tmp_q_i32 + D/sizeof(int));
|
||||||
|
|
||||||
|
// Set memory to zero if out of bounds:
|
||||||
|
if (ncols > 2 && ic0 + j >= ne01) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/sizeof(int); i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
|
||||||
|
tmp_q_i32[i] = 0;
|
||||||
|
}
|
||||||
|
if (threadIdx.x < D/QK8_1) {
|
||||||
|
tmp_q_ds[threadIdx.x] = make_half2(0.0f, 0.0f);
|
||||||
|
}
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
|
||||||
|
const float * Q_f = (const float *) (Q + j*nb01);
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/sizeof(int); i0 += WARP_SIZE) {
|
||||||
|
quantize_q8_1_to_shared<half2>(Q_f + 4*i0, scale, tmp_q_i32, tmp_q_ds);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
int * tmp_q_i32 = (int *) &KQ[j*D];
|
||||||
|
half2 * tmp_q_ds = (half2 *) (tmp_q_i32 + D/sizeof(int));
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/sizeof(int); i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
|
||||||
|
Q_i32[j][i0/WARP_SIZE] = tmp_q_i32[i];
|
||||||
|
Q_ds[j][i0/WARP_SIZE] = tmp_q_ds[i/QI8_1];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
} else {
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
const float2 * Q_f2_j = (const float2 *) (Q + j*nb01);
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/2; i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
|
||||||
|
const float2 tmp = ncols <= 2 || ic0 + j < ne01 ? Q_f2_j[i] : make_float2(0.0f, 0.0f);
|
||||||
|
Q_h2[j][i0/WARP_SIZE] = make_half2(scale, scale) * make_half2(tmp.x, tmp.y);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
KQ[j*D + tid] = -HALF_MAX_HALF;
|
||||||
|
}
|
||||||
|
|
||||||
|
half2 VKQ[ncols] = {{0.0f, 0.0f}};
|
||||||
|
|
||||||
|
const int k_start = parallel_blocks == 1 ? 0 : ip*D;
|
||||||
|
for (int k_VKQ_0 = k_start; k_VKQ_0 < ne11; k_VKQ_0 += parallel_blocks*D) {
|
||||||
|
// Calculate KQ tile and keep track of new maximum KQ values:
|
||||||
|
|
||||||
|
// For unknown reasons using a half array of size 1 for kqmax_new causes a performance regression,
|
||||||
|
// see https://github.com/ggerganov/llama.cpp/pull/7061 .
|
||||||
|
// Therefore this variable is defined twice but only used once (so that the compiler can optimize out the unused variable).
|
||||||
|
half kqmax_new = kqmax[0];
|
||||||
|
half kqmax_new_arr[ncols];
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
kqmax_new_arr[j] = kqmax[j];
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i_KQ_0 = 0; i_KQ_0 < D; i_KQ_0 += nwarps) {
|
||||||
|
const int i_KQ = i_KQ_0 + threadIdx.y;
|
||||||
|
|
||||||
|
if ((i_KQ_0 + nwarps > D && i_KQ >= D) || (FATTN_KQ_STRIDE % D != 0 && k_VKQ_0 + i_KQ >= ne11)) {
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
half sum = vec_dot_KQ(K + (k_VKQ_0 + i_KQ)*nb11, Q_h2[j], Q_i32[j], Q_ds[j]);
|
||||||
|
sum = warp_reduce_sum(sum);
|
||||||
|
|
||||||
|
if (use_logit_softcap) {
|
||||||
|
sum = logit_softcap*tanhf(sum);
|
||||||
|
}
|
||||||
|
|
||||||
|
sum += mask ? slopeh*maskh[j*ne11 + k_VKQ_0 + i_KQ] : __float2half(0.0f);
|
||||||
|
|
||||||
|
if (ncols == 1) {
|
||||||
|
kqmax_new = ggml_cuda_hmax(kqmax_new, sum);
|
||||||
|
} else {
|
||||||
|
kqmax_new_arr[j] = ggml_cuda_hmax(kqmax_new_arr[j], sum);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (threadIdx.x == 0) {
|
||||||
|
KQ[j*D + i_KQ] = sum;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
half kqmax_new_j = ncols == 1 ? kqmax_new : kqmax_new_arr[j];
|
||||||
|
|
||||||
|
kqmax_new_j = warp_reduce_max(kqmax_new_j);
|
||||||
|
if (threadIdx.x == 0) {
|
||||||
|
kqmax_shared[j][threadIdx.y] = kqmax_new_j;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
half kqmax_new_j = kqmax_shared[j][threadIdx.x];
|
||||||
|
kqmax_new_j = warp_reduce_max(kqmax_new_j);
|
||||||
|
|
||||||
|
const half KQ_max_scale = hexp(kqmax[j] - kqmax_new_j);
|
||||||
|
kqmax[j] = kqmax_new_j;
|
||||||
|
|
||||||
|
const half val = hexp(KQ[j*D + tid] - kqmax[j]);
|
||||||
|
kqsum[j] = kqsum[j]*KQ_max_scale + val;
|
||||||
|
KQ[j*D + tid] = val;
|
||||||
|
|
||||||
|
VKQ[j] *= __half2half2(KQ_max_scale);
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k0 = 0; k0 < D; k0 += 2) {
|
||||||
|
if (FATTN_KQ_STRIDE % D != 0 && k_VKQ_0 + k0 >= ne11) {
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
|
||||||
|
half2 V_k;
|
||||||
|
reinterpret_cast<half&>(V_k.x) = dequantize_1_v(V + (k_VKQ_0 + k0 + 0)*nb21, tid);
|
||||||
|
reinterpret_cast<half&>(V_k.y) = dequantize_1_v(V + (k_VKQ_0 + k0 + 1)*nb21, tid);
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
VKQ[j] += V_k*KQ2[j*(D/2) + k0/2];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
kqsum[j] = warp_reduce_sum(kqsum[j]);
|
||||||
|
if (threadIdx.x == 0) {
|
||||||
|
kqsum_shared[j][threadIdx.y] = kqsum[j];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j_VKQ = 0; j_VKQ < ncols; ++j_VKQ) {
|
||||||
|
if (ncols > 2 && ic0 + j_VKQ >= ne01) {
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
|
||||||
|
kqsum[j_VKQ] = kqsum_shared[j_VKQ][threadIdx.x];
|
||||||
|
kqsum[j_VKQ] = warp_reduce_sum(kqsum[j_VKQ]);
|
||||||
|
|
||||||
|
half dst_val = (__low2half(VKQ[j_VKQ]) + __high2half(VKQ[j_VKQ]));
|
||||||
|
if (parallel_blocks == 1) {
|
||||||
|
dst_val /= kqsum[j_VKQ];
|
||||||
|
}
|
||||||
|
const int j_dst = (ic0 + j_VKQ)*parallel_blocks + ip;
|
||||||
|
dst[j_dst*D*gridDim.y + D*blockIdx.y + tid] = dst_val;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (parallel_blocks != 1 && tid < ncols && (ncols <= 2 || ic0 + tid < ne01)) {
|
||||||
|
dst_meta[(ic0 + tid)*gridDim.y*parallel_blocks + blockIdx.y*parallel_blocks + ip] = make_float2(kqmax[tid], kqsum[tid]);
|
||||||
|
}
|
||||||
|
#else
|
||||||
|
NO_DEVICE_CODE;
|
||||||
|
#endif // FP16_AVAILABLE
|
||||||
|
}
|
||||||
|
|
||||||
|
template <int D, int cols_per_block, int parallel_blocks, ggml_type type_K, ggml_type type_V, bool use_logit_softcap>
|
||||||
|
void ggml_cuda_flash_attn_ext_vec_f16_case_impl(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
constexpr int nwarps = D/WARP_SIZE;
|
||||||
|
fattn_kernel_t fattn_kernel = flash_attn_vec_ext_f16<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>;
|
||||||
|
constexpr bool need_f16_K = D != 128;
|
||||||
|
constexpr bool need_f16_V = D != 128 && D != 64;
|
||||||
|
launch_fattn<D, parallel_blocks>(ctx, dst, fattn_kernel, nwarps, cols_per_block, need_f16_K, need_f16_V);
|
||||||
|
}
|
||||||
|
|
||||||
|
template <int D, ggml_type type_K, ggml_type type_V>
|
||||||
|
void ggml_cuda_flash_attn_ext_vec_f16_case(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
const ggml_tensor * KQV = dst;
|
||||||
|
const ggml_tensor * Q = dst->src[0];
|
||||||
|
const ggml_tensor * K = dst->src[1];
|
||||||
|
const ggml_tensor * V = dst->src[2];
|
||||||
|
|
||||||
|
const int32_t precision = KQV->op_params[3];
|
||||||
|
GGML_ASSERT(precision == GGML_PREC_DEFAULT);
|
||||||
|
|
||||||
|
GGML_ASSERT(K->type == type_K);
|
||||||
|
GGML_ASSERT(V->type == type_V);
|
||||||
|
|
||||||
|
float logit_softcap;
|
||||||
|
memcpy(&logit_softcap, (const float *) KQV->op_params + 2, sizeof(float));
|
||||||
|
|
||||||
|
if (Q->ne[1] == 1) {
|
||||||
|
constexpr int cols_per_block = 1;
|
||||||
|
constexpr int parallel_blocks = 4;
|
||||||
|
if (logit_softcap == 0.0f) {
|
||||||
|
constexpr bool use_logit_softcap = false;
|
||||||
|
ggml_cuda_flash_attn_ext_vec_f16_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
|
||||||
|
} else {
|
||||||
|
constexpr bool use_logit_softcap = true;
|
||||||
|
ggml_cuda_flash_attn_ext_vec_f16_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
|
||||||
|
}
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (Q->ne[1] == 2) {
|
||||||
|
constexpr int cols_per_block = 2;
|
||||||
|
constexpr int parallel_blocks = 4;
|
||||||
|
if (logit_softcap == 0.0f) {
|
||||||
|
constexpr bool use_logit_softcap = false;
|
||||||
|
ggml_cuda_flash_attn_ext_vec_f16_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
|
||||||
|
} else {
|
||||||
|
constexpr bool use_logit_softcap = true;
|
||||||
|
ggml_cuda_flash_attn_ext_vec_f16_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
|
||||||
|
}
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (Q->ne[1] <= 4) {
|
||||||
|
constexpr int cols_per_block = 4;
|
||||||
|
constexpr int parallel_blocks = 4;
|
||||||
|
if (logit_softcap == 0.0f) {
|
||||||
|
constexpr bool use_logit_softcap = false;
|
||||||
|
ggml_cuda_flash_attn_ext_vec_f16_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
|
||||||
|
} else {
|
||||||
|
constexpr bool use_logit_softcap = true;
|
||||||
|
ggml_cuda_flash_attn_ext_vec_f16_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
|
||||||
|
}
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (Q->ne[1] <= 8) {
|
||||||
|
constexpr int cols_per_block = 8;
|
||||||
|
constexpr int parallel_blocks = 4;
|
||||||
|
if (logit_softcap == 0.0f) {
|
||||||
|
constexpr bool use_logit_softcap = false;
|
||||||
|
ggml_cuda_flash_attn_ext_vec_f16_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
|
||||||
|
} else {
|
||||||
|
constexpr bool use_logit_softcap = true;
|
||||||
|
ggml_cuda_flash_attn_ext_vec_f16_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
|
||||||
|
}
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
constexpr int cols_per_block = 8;
|
||||||
|
constexpr int parallel_blocks = 1;
|
||||||
|
if (logit_softcap == 0.0f) {
|
||||||
|
constexpr bool use_logit_softcap = false;
|
||||||
|
ggml_cuda_flash_attn_ext_vec_f16_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
|
||||||
|
} else {
|
||||||
|
constexpr bool use_logit_softcap = true;
|
||||||
|
ggml_cuda_flash_attn_ext_vec_f16_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#define DECL_FATTN_VEC_F16_CASE(D, type_K, type_V) \
|
||||||
|
template void ggml_cuda_flash_attn_ext_vec_f16_case \
|
||||||
|
<D, type_K, type_V>(ggml_backend_cuda_context & ctx, ggml_tensor * dst) \
|
||||||
|
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q4_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q4_1);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q5_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q5_1);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q8_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE( 64, GGML_TYPE_F16, GGML_TYPE_F16);
|
||||||
|
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q4_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q4_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q4_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q4_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q4_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q4_0);
|
||||||
|
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q4_1);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q4_1);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q4_1);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q4_1);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q4_1);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q4_1);
|
||||||
|
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q5_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q5_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q5_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q5_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q5_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q5_0);
|
||||||
|
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q5_1);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q5_1);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q5_1);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q5_1);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q5_1);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q5_1);
|
||||||
|
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q8_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q8_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q8_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q8_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q8_0);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q8_0);
|
||||||
|
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_F16);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_F16);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_F16);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_F16);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_F16);
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_F16);
|
||||||
|
|
||||||
|
extern DECL_FATTN_VEC_F16_CASE(256, GGML_TYPE_F16, GGML_TYPE_F16);
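
As a reading aid, the DECL_FATTN_VEC_F16_CASE lines above are extern explicit-instantiation declarations: after macro expansion the first one reads essentially as shown below, which keeps this header from instantiating the kernel launcher and leaves the definition to whichever translation unit provides the matching `template void ...` instantiation (presumably one per K/V type combination, so they can be compiled separately).

// Illustrative only: expansion of
//   extern DECL_FATTN_VEC_F16_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q4_0);
extern template void ggml_cuda_flash_attn_ext_vec_f16_case
    <64, GGML_TYPE_F16, GGML_TYPE_Q4_0>(ggml_backend_cuda_context & ctx, ggml_tensor * dst);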
llama/ggml-cuda/fattn-vec-f32.cuh (new file)
@@ -0,0 +1,446 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License
 *
 * Copyright (c) 2023-2024 The ggml authors
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include "common.cuh"
#include "fattn-common.cuh"
|
||||||
|
|
||||||
|
template<int D, int ncols, int parallel_blocks, ggml_type type_K, ggml_type type_V, bool use_logit_softcap> // D == head size
|
||||||
|
#if !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__))
|
||||||
|
__launch_bounds__(D, 1)
|
||||||
|
#endif // !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__))
|
||||||
|
static __global__ void flash_attn_vec_ext_f32(
|
||||||
|
const char * __restrict__ Q,
|
||||||
|
const char * __restrict__ K,
|
||||||
|
const char * __restrict__ V,
|
||||||
|
const char * __restrict__ mask,
|
||||||
|
float * __restrict__ dst,
|
||||||
|
float2 * __restrict__ dst_meta,
|
||||||
|
const float scale,
|
||||||
|
const float max_bias,
|
||||||
|
const float m0,
|
||||||
|
const float m1,
|
||||||
|
const uint32_t n_head_log2,
|
||||||
|
const float logit_softcap,
|
||||||
|
const int ne00,
|
||||||
|
const int ne01,
|
||||||
|
const int ne02,
|
||||||
|
const int ne03,
|
||||||
|
const int ne10,
|
||||||
|
const int ne11,
|
||||||
|
const int ne12,
|
||||||
|
const int ne13,
|
||||||
|
const int ne31,
|
||||||
|
const int nb31,
|
||||||
|
const int nb01,
|
||||||
|
const int nb02,
|
||||||
|
const int nb03,
|
||||||
|
const int nb11,
|
||||||
|
const int nb12,
|
||||||
|
const int nb13,
|
||||||
|
const int nb21,
|
||||||
|
const int nb22,
|
||||||
|
const int nb23,
|
||||||
|
const int ne0,
|
||||||
|
const int ne1,
|
||||||
|
const int ne2,
|
||||||
|
const int ne3) {
|
||||||
|
// Skip unused kernel variants for faster compilation:
|
||||||
|
if (use_logit_softcap && !(D == 128 || D == 256)) {
|
||||||
|
NO_DEVICE_CODE;
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
//In this kernel Q, K, V are matrices while i, j, k are matrix indices.
|
||||||
|
|
||||||
|
constexpr vec_dot_KQ_f32_t vec_dot_KQ = get_vec_dot_KQ_f32<D>(type_K);
|
||||||
|
constexpr bool Q_q8_1 = type_K != GGML_TYPE_F16;
|
||||||
|
constexpr dequantize_1_f32_t dequantize_1_v = get_dequantize_1_f32(type_V);
|
||||||
|
|
||||||
|
const int ic0 = (blockIdx.x / parallel_blocks) * ncols; // Index of the Q/QKV column to work on.
|
||||||
|
const int ip = blockIdx.x % parallel_blocks; // Index in group of blocks running for the same column in parallel.
|
||||||
|
|
||||||
|
const int gqa_ratio = ne02 / ne12; // With grouped query attention there are > 1 Q matrices per K, V matrix.
|
||||||
|
Q += nb02* blockIdx.y + nb01*ic0;
|
||||||
|
K += nb12*(blockIdx.y / gqa_ratio);
|
||||||
|
V += nb22*(blockIdx.y / gqa_ratio); // K and V have same shape
|
||||||
|
const half * maskh = (const half *) mask + ne11*ic0;
|
||||||
|
|
||||||
|
const float slope = get_alibi_slope(max_bias, blockIdx.y, n_head_log2, m0, m1);
|
||||||
|
|
||||||
|
static_assert(D % (2*WARP_SIZE) == 0, "D not divisible by 2*WARP_SIZE == 64.");
|
||||||
|
constexpr int nwarps = D / WARP_SIZE;
|
||||||
|
const int tid = WARP_SIZE*threadIdx.y + threadIdx.x;
|
||||||
|
__builtin_assume(tid < D);
|
||||||
|
|
||||||
|
__shared__ float KQ[ncols*D];
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
KQ[j*D + tid] = -FLT_MAX/2.0f;
|
||||||
|
}
|
||||||
|
|
||||||
|
float kqmax[ncols];
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
kqmax[j] = -FLT_MAX/2.0f;
|
||||||
|
}
|
||||||
|
float kqsum[ncols] = {0.0f};
|
||||||
|
|
||||||
|
__shared__ float kqmax_shared[ncols][WARP_SIZE];
|
||||||
|
__shared__ float kqsum_shared[ncols][WARP_SIZE];
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
if (threadIdx.y == 0) {
|
||||||
|
kqmax_shared[j][threadIdx.x] = -FLT_MAX/2.0f;
|
||||||
|
kqsum_shared[j][threadIdx.x] = 0.0f;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
// Convert Q to float2 (f16 K) or q8_1 (quantized K) and store in registers:
|
||||||
|
float2 Q_f2[ncols][D/(2*WARP_SIZE)];
|
||||||
|
int Q_i32[ncols][D/(sizeof(int)*QK8_1) == 0 ? 1 : D/(sizeof(int)*QK8_1)];
|
||||||
|
float2 Q_ds[ncols][D/QK8_1 == 0 ? 1 : D/QK8_1];
|
||||||
|
if (Q_q8_1) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
const int j = j0 + threadIdx.y;
|
||||||
|
|
||||||
|
if (j0 + nwarps > ncols && j >= ncols) {
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Reuse KQ as temporary storage for converting Q to q8_1:
|
||||||
|
int * tmp_q_i32 = (int *) &KQ[j*D];
|
||||||
|
float2 * tmp_q_ds = (float2 *) (tmp_q_i32 + D/sizeof(int));
|
||||||
|
|
||||||
|
// Set memory to zero if out of bounds:
|
||||||
|
if (ncols > 2 && ic0 + j >= ne01) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/sizeof(int); i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
|
||||||
|
tmp_q_i32[i] = 0;
|
||||||
|
}
|
||||||
|
if (threadIdx.x < D/QK8_1) {
|
||||||
|
tmp_q_ds[threadIdx.x] = make_float2(0.0f, 0.0f);
|
||||||
|
}
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
|
||||||
|
const float * Q_f = (const float *) (Q + j*nb01);
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/sizeof(int); i0 += WARP_SIZE) {
|
||||||
|
quantize_q8_1_to_shared<float2>(Q_f + 4*i0, scale, tmp_q_i32, tmp_q_ds);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
int * tmp_q_i32 = (int *) &KQ[j*D];
|
||||||
|
float2 * tmp_q_ds = (float2 *) (tmp_q_i32 + D/sizeof(int));
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/sizeof(int); i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
|
||||||
|
Q_i32[j][i0/WARP_SIZE] = tmp_q_i32[i];
|
||||||
|
Q_ds[j][i0/WARP_SIZE] = tmp_q_ds[i/QI8_1];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
} else {
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
const float2 * Q_f2_j = (const float2 *) (Q + j*nb01);
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/2; i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
|
||||||
|
Q_f2[j][i0/WARP_SIZE] = ncols <= 2 || ic0 + j < ne01 ? Q_f2_j[i] : make_float2(0.0f, 0.0f);
|
||||||
|
Q_f2[j][i0/WARP_SIZE].x *= scale;
|
||||||
|
Q_f2[j][i0/WARP_SIZE].y *= scale;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
float VKQ[ncols] = {0.0f};
|
||||||
|
|
||||||
|
const int k_start = parallel_blocks == 1 ? 0 : ip*D;
|
||||||
|
for (int k_VKQ_0 = k_start; k_VKQ_0 < ne11; k_VKQ_0 += parallel_blocks*D) {
|
||||||
|
// Calculate KQ tile and keep track of new maximum KQ values:
|
||||||
|
|
||||||
|
float kqmax_new_arr[ncols];
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
kqmax_new_arr[j] = kqmax[j];
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i_KQ_0 = 0; i_KQ_0 < D; i_KQ_0 += nwarps) {
|
||||||
|
const int i_KQ = i_KQ_0 + threadIdx.y;
|
||||||
|
|
||||||
|
if ((i_KQ_0 + nwarps > D && i_KQ >= D) || (FATTN_KQ_STRIDE % D != 0 && k_VKQ_0 + i_KQ >= ne11)) {
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
float sum = vec_dot_KQ(K + (k_VKQ_0 + i_KQ)*nb11, Q_f2[j], Q_i32[j], Q_ds[j]);
|
||||||
|
sum = warp_reduce_sum(sum);
|
||||||
|
|
||||||
|
if (use_logit_softcap) {
|
||||||
|
sum = logit_softcap*tanhf(sum);
|
||||||
|
}
|
||||||
|
|
||||||
|
sum += mask ? slope*__half2float(maskh[j*ne11 + k_VKQ_0 + i_KQ]) : 0.0f;
|
||||||
|
|
||||||
|
kqmax_new_arr[j] = fmaxf(kqmax_new_arr[j], sum);
|
||||||
|
|
||||||
|
if (threadIdx.x == 0) {
|
||||||
|
KQ[j*D + i_KQ] = sum;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
float kqmax_new_j = kqmax_new_arr[j];
|
||||||
|
|
||||||
|
kqmax_new_j = warp_reduce_max(kqmax_new_j);
|
||||||
|
if (threadIdx.x == 0) {
|
||||||
|
kqmax_shared[j][threadIdx.y] = kqmax_new_j;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
float kqmax_new_j = kqmax_shared[j][threadIdx.x];
|
||||||
|
kqmax_new_j = warp_reduce_max(kqmax_new_j);
|
||||||
|
|
||||||
|
const float KQ_max_scale = expf(kqmax[j] - kqmax_new_j);
|
||||||
|
kqmax[j] = kqmax_new_j;
|
||||||
|
|
||||||
|
const float val = expf(KQ[j*D + tid] - kqmax[j]);
|
||||||
|
kqsum[j] = kqsum[j]*KQ_max_scale + val;
|
||||||
|
KQ[j*D + tid] = val;
|
||||||
|
|
||||||
|
VKQ[j] *= KQ_max_scale;
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k = 0; k < D; ++k) {
|
||||||
|
if (FATTN_KQ_STRIDE % D != 0 && k_VKQ_0 + k >= ne11) {
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
|
||||||
|
const float V_ki = dequantize_1_v(V + (k_VKQ_0 + k)*nb21, tid);
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
VKQ[j] += V_ki*KQ[j*D + k];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols; ++j) {
|
||||||
|
kqsum[j] = warp_reduce_sum(kqsum[j]);
|
||||||
|
if (threadIdx.x == 0) {
|
||||||
|
kqsum_shared[j][threadIdx.y] = kqsum[j];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j_VKQ = 0; j_VKQ < ncols; ++j_VKQ) {
|
||||||
|
if (ncols > 2 && ic0 + j_VKQ >= ne01) {
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
|
||||||
|
kqsum[j_VKQ] = kqsum_shared[j_VKQ][threadIdx.x];
|
||||||
|
kqsum[j_VKQ] = warp_reduce_sum(kqsum[j_VKQ]);
|
||||||
|
|
||||||
|
float dst_val = VKQ[j_VKQ];
|
||||||
|
if (parallel_blocks == 1) {
|
||||||
|
dst_val /= kqsum[j_VKQ];
|
||||||
|
}
|
||||||
|
const int j_dst = (ic0 + j_VKQ)*parallel_blocks + ip;
|
||||||
|
dst[j_dst*D*gridDim.y + D*blockIdx.y + tid] = dst_val;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (parallel_blocks != 1 && tid < ncols && (ncols <= 2 || ic0 + tid < ne01)) {
|
||||||
|
dst_meta[(ic0 + tid)*gridDim.y*parallel_blocks + blockIdx.y*parallel_blocks + ip] = make_float2(kqmax[tid], kqsum[tid]);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
template <int D, int cols_per_block, int parallel_blocks, ggml_type type_K, ggml_type type_V, bool use_logit_softcap>
|
||||||
|
void ggml_cuda_flash_attn_ext_vec_f32_case_impl(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
constexpr int nwarps = D/WARP_SIZE;
|
||||||
|
fattn_kernel_t fattn_kernel = flash_attn_vec_ext_f32<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>;
|
||||||
|
constexpr bool need_f16_K = D != 128;
|
||||||
|
constexpr bool need_f16_V = D != 128 && D != 64;
|
||||||
|
launch_fattn<D, parallel_blocks>(ctx, dst, fattn_kernel, nwarps, cols_per_block, need_f16_K, need_f16_V);
|
||||||
|
}

template <int D, ggml_type type_K, ggml_type type_V>
void ggml_cuda_flash_attn_ext_vec_f32_case(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * KQV = dst;
    const ggml_tensor * Q = dst->src[0];
    const ggml_tensor * K = dst->src[1];
    const ggml_tensor * V = dst->src[2];

    GGML_ASSERT(K->type == type_K);
    GGML_ASSERT(V->type == type_V);

    float logit_softcap;
    memcpy(&logit_softcap, (const float *) KQV->op_params + 2, sizeof(float));

    if (Q->ne[1] == 1) {
        constexpr int cols_per_block  = 1;
        constexpr int parallel_blocks = 4;
        if (logit_softcap == 0.0f) {
            constexpr bool use_logit_softcap = false;
            ggml_cuda_flash_attn_ext_vec_f32_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
        } else {
            constexpr bool use_logit_softcap = true;
            ggml_cuda_flash_attn_ext_vec_f32_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
        }
        return;
    }

    if (Q->ne[1] == 2) {
        constexpr int cols_per_block  = 2;
        constexpr int parallel_blocks = 4;
        if (logit_softcap == 0.0f) {
            constexpr bool use_logit_softcap = false;
            ggml_cuda_flash_attn_ext_vec_f32_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
        } else {
            constexpr bool use_logit_softcap = true;
            ggml_cuda_flash_attn_ext_vec_f32_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
        }
        return;
    }

    if (Q->ne[1] <= 4) {
        constexpr int cols_per_block  = 4;
        constexpr int parallel_blocks = 4;
        if (logit_softcap == 0.0f) {
            constexpr bool use_logit_softcap = false;
            ggml_cuda_flash_attn_ext_vec_f32_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
        } else {
            constexpr bool use_logit_softcap = true;
            ggml_cuda_flash_attn_ext_vec_f32_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
        }
        return;
    }

    if (Q->ne[1] <= 8) {
        constexpr int cols_per_block  = 8;
        constexpr int parallel_blocks = 4;
        if (logit_softcap == 0.0f) {
            constexpr bool use_logit_softcap = false;
            ggml_cuda_flash_attn_ext_vec_f32_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
        } else {
            constexpr bool use_logit_softcap = true;
            ggml_cuda_flash_attn_ext_vec_f32_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
        }
        return;
    }

    constexpr int cols_per_block  = 8;
    constexpr int parallel_blocks = 1;
    if (logit_softcap == 0.0f) {
        constexpr bool use_logit_softcap = false;
        ggml_cuda_flash_attn_ext_vec_f32_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
    } else {
        constexpr bool use_logit_softcap = true;
        ggml_cuda_flash_attn_ext_vec_f32_case_impl<D, cols_per_block, parallel_blocks, type_K, type_V, use_logit_softcap>(ctx, dst);
    }
}
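The dispatcher above maps the number of Q columns in the batch (`Q->ne[1]`) to a tile shape: very small batches trade narrower column tiles for more parallel blocks per column, and larger batches fall back to a single block per column. A condensed view of that mapping, written as a tiny helper derived directly from the branches above (the helper itself is illustrative only and not part of the diff):

// Q->ne[1] == 1  -> cols_per_block 1, parallel_blocks 4
// Q->ne[1] == 2  -> cols_per_block 2, parallel_blocks 4
// Q->ne[1] <= 4  -> cols_per_block 4, parallel_blocks 4
// Q->ne[1] <= 8  -> cols_per_block 8, parallel_blocks 4
// otherwise      -> cols_per_block 8, parallel_blocks 1
struct vec_f32_tile { int cols_per_block; int parallel_blocks; };

static constexpr vec_f32_tile pick_vec_f32_tile(int n_cols) {
    return n_cols == 1 ? vec_f32_tile{1, 4}
         : n_cols == 2 ? vec_f32_tile{2, 4}
         : n_cols <= 4 ? vec_f32_tile{4, 4}
         : n_cols <= 8 ? vec_f32_tile{8, 4}
         :               vec_f32_tile{8, 1};
}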

#define DECL_FATTN_VEC_F32_CASE(D, type_K, type_V)                          \
    template void ggml_cuda_flash_attn_ext_vec_f32_case                     \
    <D, type_K, type_V>(ggml_backend_cuda_context & ctx, ggml_tensor * dst) \

extern DECL_FATTN_VEC_F32_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q4_0);
extern DECL_FATTN_VEC_F32_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q4_1);
extern DECL_FATTN_VEC_F32_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q5_0);
extern DECL_FATTN_VEC_F32_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q5_1);
extern DECL_FATTN_VEC_F32_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q8_0);
extern DECL_FATTN_VEC_F32_CASE( 64, GGML_TYPE_F16, GGML_TYPE_F16);

extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q4_0);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q4_0);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q4_0);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q4_0);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q4_0);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q4_0);

extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q4_1);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q4_1);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q4_1);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q4_1);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q4_1);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q4_1);

extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q5_0);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q5_0);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q5_0);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q5_0);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q5_0);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q5_0);

extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q5_1);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q5_1);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q5_1);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q5_1);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q5_1);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q5_1);

extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q8_0);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q8_0);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q8_0);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q8_0);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q8_0);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q8_0);

extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_F16);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_F16);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_F16);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_F16);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_F16);
extern DECL_FATTN_VEC_F32_CASE(128, GGML_TYPE_F16, GGML_TYPE_F16);

extern DECL_FATTN_VEC_F32_CASE(256, GGML_TYPE_F16, GGML_TYPE_F16);
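Each of these lines pairs the `DECL_FATTN_VEC_F32_CASE` macro with `extern`, which expands to an extern explicit-instantiation declaration: the header promises that the kernel template for that head size and K/V type combination is instantiated in some other translation unit (presumably one small .cu file per combination), so including this header does not force the heavy instantiation again. A minimal generic illustration of the pattern, with made-up names and unrelated to the real kernels:

#include <cstdio>

template <int N>
void report() { std::printf("N = %d\n", N); }

// Declaration only: tells this translation unit that the instantiation is
// emitted in another object file, so no code is generated for it here.
extern template void report<64>();
extern template void report<128>();

int main() {
    report<32>(); // implicit instantiation; the extern ones link from elsewhere
}

// In the real code, the matching explicit-instantiation *definitions*
// (the same macro without `extern`) live in the per-type .cu files.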

llama/ggml-cuda/fattn-wmma-f16.cuh (new file, +569 lines)
@@ -0,0 +1,569 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 * MIT License - Copyright (c) 2023-2024 The ggml authors (license text identical to the other vendored files)
 */

#include "common.cuh"
#include "fattn-common.cuh"

#ifdef FP16_MMA_AVAILABLE
#include <mma.h>
#endif // FP16_MMA_AVAILABLE

// D == head size, VKQ_stride == num VKQ rows calculated in parallel:
|
||||||
|
template<int D, int ncols, int nwarps, int VKQ_stride, int parallel_blocks, typename KQ_acc_t, bool use_logit_softcap>
|
||||||
|
#if !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__))
|
||||||
|
__launch_bounds__(nwarps*WARP_SIZE, 1)
|
||||||
|
#endif // !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__))
|
||||||
|
static __global__ void flash_attn_ext_f16(
|
||||||
|
const char * __restrict__ Q,
|
||||||
|
const char * __restrict__ K,
|
||||||
|
const char * __restrict__ V,
|
||||||
|
const char * __restrict__ mask,
|
||||||
|
float * __restrict__ dst,
|
||||||
|
float2 * __restrict__ dst_meta,
|
||||||
|
const float scale,
|
||||||
|
const float max_bias,
|
||||||
|
const float m0,
|
||||||
|
const float m1,
|
||||||
|
const uint32_t n_head_log2,
|
||||||
|
const float logit_softcap,
|
||||||
|
const int ne00,
|
||||||
|
const int ne01,
|
||||||
|
const int ne02,
|
||||||
|
const int ne03,
|
||||||
|
const int ne10,
|
||||||
|
const int ne11,
|
||||||
|
const int ne12,
|
||||||
|
const int ne13,
|
||||||
|
const int ne31,
|
||||||
|
const int nb31,
|
||||||
|
const int nb01,
|
||||||
|
const int nb02,
|
||||||
|
const int nb03,
|
||||||
|
const int nb11,
|
||||||
|
const int nb12,
|
||||||
|
const int nb13,
|
||||||
|
const int nb21,
|
||||||
|
const int nb22,
|
||||||
|
const int nb23,
|
||||||
|
const int ne0,
|
||||||
|
const int ne1,
|
||||||
|
const int ne2,
|
||||||
|
const int ne3) {
|
||||||
|
#ifdef FP16_MMA_AVAILABLE
|
||||||
|
// Skip unused kernel variants for faster compilation:
|
||||||
|
if (use_logit_softcap && !(D == 128 || D == 256)) {
|
||||||
|
NO_DEVICE_CODE;
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
//In this kernel Q, K, V are matrices while i, j, k are matrix indices.
|
||||||
|
|
||||||
|
const int ic0 = ncols*(blockIdx.x / parallel_blocks); // Index of the first Q/QKV column to work on.
|
||||||
|
const int ip = blockIdx.x % parallel_blocks; // Index in group of blocks running for the same column in parallel.
|
||||||
|
|
||||||
|
static_assert(D <= FATTN_KQ_STRIDE, "D must be <= FATTN_KQ_STRIDE.");
|
||||||
|
static_assert(ncols == 8 || ncols % 16 == 0, "ncols must be 8 or a multiple of 16.");
|
||||||
|
constexpr int frag_m = ncols == 8 ? 32 : 16;
|
||||||
|
constexpr int frag_n = ncols == 8 ? 8 : 16;
|
||||||
|
static_assert(D % frag_m == 0, "If ncols == 8 then D % frag_m must be 0.");
|
||||||
|
typedef nvcuda::wmma::fragment<nvcuda::wmma::matrix_a, frag_m, frag_n, 16, half, nvcuda::wmma::row_major> frag_a_K;
|
||||||
|
typedef nvcuda::wmma::fragment<nvcuda::wmma::matrix_a, frag_m, frag_n, 16, half, nvcuda::wmma::col_major> frag_a_V;
|
||||||
|
typedef nvcuda::wmma::fragment<nvcuda::wmma::matrix_b, frag_m, frag_n, 16, half, nvcuda::wmma::col_major> frag_b;
|
||||||
|
typedef nvcuda::wmma::fragment<nvcuda::wmma::accumulator, frag_m, frag_n, 16, KQ_acc_t> frag_c_KQ;
|
||||||
|
typedef nvcuda::wmma::fragment<nvcuda::wmma::accumulator, frag_m, frag_n, 16, half> frag_c_VKQ;
|
||||||
|
|
||||||
|
constexpr int KQ_stride_tc = nwarps*frag_m; // Number of KQ rows calculated in parallel.
|
||||||
|
constexpr int VKQ_ratio = KQ_stride_tc/VKQ_stride; // Number of parallel VKQ accumulators needed to keep all warps busy.
|
||||||
|
static_assert(VKQ_ratio <= nwarps, "VKQ_ratio must be <= nwarps.");
|
||||||
|
|
||||||
|
// Pad internal representation of KQ, KQV to reduce shared memory bank conflicts:
|
||||||
|
constexpr int D_padded = D + 8;
|
||||||
|
constexpr int kqs_padded = FATTN_KQ_STRIDE + 8;
|
||||||
|
constexpr int kqar = sizeof(KQ_acc_t)/sizeof(half);
|
||||||
|
|
||||||
|
const int gqa_ratio = ne02 / ne12; // With grouped query attention there are > 1 Q matrices per K, V matrix.
|
||||||
|
const float * Q_f = (const float *) (Q + nb02* blockIdx.y + nb01*ic0);
|
||||||
|
const half * K_h = (const half *) (K + nb12*(blockIdx.y / gqa_ratio));
|
||||||
|
const half * V_h = (const half *) (V + nb12*(blockIdx.y / gqa_ratio)); // K and V have same shape
|
||||||
|
const half * maskh = (const half *) mask + (nb31/sizeof(half))* ic0;
|
||||||
|
const half2 * mask2 = (const half2 *) mask + (nb31/sizeof(half))*(ic0/2);
|
||||||
|
|
||||||
|
const int stride_Q = nb01 / sizeof(float);
|
||||||
|
const int stride_KV = nb11 / sizeof(half);
|
||||||
|
|
||||||
|
const float slopef = get_alibi_slope(max_bias, blockIdx.y, n_head_log2, m0, m1);
|
||||||
|
const half slopeh = __float2half(slopef);
|
||||||
|
const half2 slope2 = make_half2(slopef, slopef);
|
||||||
|
|
||||||
|
const half2 logit_softcap_2 = make_half2(logit_softcap, logit_softcap);
|
||||||
|
|
||||||
|
frag_b Q_b[D/16][ncols/frag_n];
|
||||||
|
|
||||||
|
// A single buffer for temporarily holding tiles of KQ and VKQ parts:
|
||||||
|
constexpr int mem_KQ = ncols*kqs_padded*kqar;
|
||||||
|
constexpr int mem_VKQ_parts = VKQ_ratio*ncols*D_padded;
|
||||||
|
__shared__ half KQ[mem_KQ >= mem_VKQ_parts ? mem_KQ : mem_VKQ_parts];
|
||||||
|
float * KQ_f = (float *) KQ;
|
||||||
|
half2 * KQ2 = (half2 *) KQ;
|
||||||
|
|
||||||
|
float KQ_rowsum_f[ncols/nwarps] = {0.0f};
|
||||||
|
float KQ_max_f[ncols/nwarps];
|
||||||
|
float KQ_max_scale_f[ncols/nwarps] = {0.0f};
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols/nwarps; ++j) {
|
||||||
|
KQ_max_f[j] = -FLT_MAX/2.0f;
|
||||||
|
}
|
||||||
|
|
||||||
|
half2 KQ_rowsum_h2[ncols/nwarps] = {{0.0f, 0.0f}};
|
||||||
|
half2 KQ_max_h2[ncols/nwarps];
|
||||||
|
half2 KQ_max_scale_h2[ncols/nwarps] = {{0.0f, 0.0f}};
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols/nwarps; ++j) {
|
||||||
|
KQ_max_h2[j] = make_half2(-HALF_MAX_HALF, -HALF_MAX_HALF);
|
||||||
|
}
|
||||||
|
|
||||||
|
__shared__ half VKQ[ncols*D_padded]; // Accumulator for final VKQ slice.
|
||||||
|
half2 * VKQ2 = (half2 *) VKQ;
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
const int j = j0 + threadIdx.y;
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/2; i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
if (i0 + WARP_SIZE > D/2 && i >= D/2) {
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
VKQ2[j*(D_padded/2) + i] = make_half2(0.0f, 0.0f);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Convert Q to half and apply scale, temporarily store in KQ:
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
const int j = j0 + threadIdx.y;
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D; i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
if (i0 + WARP_SIZE > D && i >= D) {
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
KQ[j*D_padded + i] = ic0 + j < ne01 ? Q_f[j*stride_Q + i] * scale : 0.0f;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
// Load Q into tensor core fragments/registers since it will be used frequently:
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D; i0 += 16) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += frag_n) {
|
||||||
|
nvcuda::wmma::load_matrix_sync(Q_b[i0/16][j0/frag_n], KQ + j0*D_padded + i0, D_padded);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
// Iterate over ne11 == previous tokens:
|
||||||
|
for (int k_VKQ_0 = ip*FATTN_KQ_STRIDE; k_VKQ_0 < ne11; k_VKQ_0 += parallel_blocks*FATTN_KQ_STRIDE) {
|
||||||
|
// Calculate tile of KQ:
|
||||||
|
#pragma unroll
|
||||||
|
for (int i_KQ_0 = 0; i_KQ_0 < FATTN_KQ_STRIDE; i_KQ_0 += KQ_stride_tc) {
|
||||||
|
frag_c_KQ KQ_c[ncols/frag_n];
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols/frag_n; ++j) {
|
||||||
|
nvcuda::wmma::fill_fragment(KQ_c[j], 0.0f);
|
||||||
|
}
|
||||||
|
#pragma unroll
|
||||||
|
for (int k_KQ_0 = 0; k_KQ_0 < D; k_KQ_0 += 16) {
|
||||||
|
frag_a_K K_a;
|
||||||
|
nvcuda::wmma::load_matrix_sync(K_a, K_h + (k_VKQ_0 + i_KQ_0 + frag_m*threadIdx.y)*stride_KV + k_KQ_0, stride_KV);
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols/frag_n; ++j) {
|
||||||
|
nvcuda::wmma::mma_sync(KQ_c[j], K_a, Q_b[k_KQ_0/16][j], KQ_c[j]);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += frag_n) {
|
||||||
|
nvcuda::wmma::store_matrix_sync((KQ_acc_t *) KQ + j0*kqs_padded + i_KQ_0 + frag_m*threadIdx.y, KQ_c[j0/frag_n], kqs_padded, nvcuda::wmma::mem_col_major);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
// Calculate softmax for each KQ column using the current max. value.
|
||||||
|
// The divisor is stored in KQ_rowsum and will be applied at the end.
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
const int j = j0 + threadIdx.y;
|
||||||
|
|
||||||
|
if (std::is_same<KQ_acc_t, float>::value) {
|
||||||
|
float KQ_f_tmp[FATTN_KQ_STRIDE / WARP_SIZE];
|
||||||
|
#pragma unroll
|
||||||
|
for (int k0 = 0; k0 < FATTN_KQ_STRIDE; k0 += WARP_SIZE) {
|
||||||
|
const int k = k0 + threadIdx.x;
|
||||||
|
|
||||||
|
KQ_f_tmp[k0/WARP_SIZE] = KQ_f[j*kqs_padded + k];
|
||||||
|
|
||||||
|
if (use_logit_softcap) {
|
||||||
|
KQ_f_tmp[k0/WARP_SIZE] = logit_softcap*tanhf(KQ_f_tmp[k0/WARP_SIZE]);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
float KQ_max_new = KQ_max_f[j0/nwarps];
|
||||||
|
#pragma unroll
|
||||||
|
for (int k0 = 0; k0 < FATTN_KQ_STRIDE; k0 += WARP_SIZE) {
|
||||||
|
const int k = k0 + threadIdx.x;
|
||||||
|
|
||||||
|
KQ_f_tmp[k0/WARP_SIZE] += mask ? __half2float(slopeh*maskh[j*(nb31/sizeof(half)) + k_VKQ_0 + k]) : 0.0f;
|
||||||
|
KQ_max_new = max(KQ_max_new, KQ_f_tmp[k0/WARP_SIZE]);
|
||||||
|
}
|
||||||
|
KQ_max_new = warp_reduce_max(KQ_max_new);
|
||||||
|
|
||||||
|
const float diff = KQ_max_f[j0/nwarps] - KQ_max_new;
|
||||||
|
KQ_max_scale_f[j0/nwarps] = expf(diff);
|
||||||
|
if (diff <= SOFTMAX_FTZ_THRESHOLD) {
|
||||||
|
KQ_max_scale_f[j0/nwarps] = 0.0f;
|
||||||
|
}
|
||||||
|
KQ_max_f[j0/nwarps] = KQ_max_new;
|
||||||
|
|
||||||
|
float KQ_rowsum_add = 0.0f;
|
||||||
|
#pragma unroll
|
||||||
|
for (int k0 = 0; k0 < FATTN_KQ_STRIDE; k0 += WARP_SIZE) {
|
||||||
|
const int k = k0 + threadIdx.x;
|
||||||
|
|
||||||
|
const float diff = KQ_f_tmp[k0/WARP_SIZE] - KQ_max_f[j0/nwarps];
|
||||||
|
KQ_f_tmp[k0/WARP_SIZE] = expf(diff);
|
||||||
|
if (diff <= SOFTMAX_FTZ_THRESHOLD) {
|
||||||
|
KQ_f_tmp[k0/WARP_SIZE] = 0.0f;
|
||||||
|
}
|
||||||
|
KQ_rowsum_add += KQ_f_tmp[k0/WARP_SIZE];
|
||||||
|
KQ[j*(kqar*kqs_padded) + k] = KQ_f_tmp[k0/WARP_SIZE];
|
||||||
|
}
|
||||||
|
KQ_rowsum_add = warp_reduce_sum(KQ_rowsum_add);
|
||||||
|
|
||||||
|
// Scale previous KQ_rowsum to account for a potential increase in KQ_max:
|
||||||
|
KQ_rowsum_f[j0/nwarps] = KQ_max_scale_f[j0/nwarps]*KQ_rowsum_f[j0/nwarps] + KQ_rowsum_add;
|
||||||
|
} else {
|
||||||
|
half2 KQ2_tmp[FATTN_KQ_STRIDE/(2*WARP_SIZE)];
|
||||||
|
#pragma unroll
|
||||||
|
for (int k0 = 0; k0 < FATTN_KQ_STRIDE/2; k0 += WARP_SIZE) {
|
||||||
|
const int k = k0 + threadIdx.x;
|
||||||
|
|
||||||
|
KQ2_tmp[k0/WARP_SIZE] = KQ2[j*(kqs_padded/2) + k];
|
||||||
|
|
||||||
|
if (use_logit_softcap) {
|
||||||
|
// There is no dedicated tangens hyperbolicus function for half2.
|
||||||
|
KQ2_tmp[k0/WARP_SIZE] = h2exp(KQ2_tmp[k0/WARP_SIZE]*make_half2(2.0f, 2.0f));
|
||||||
|
KQ2_tmp[k0/WARP_SIZE] = (KQ2_tmp[k0/WARP_SIZE] - make_half2(1.0f, 1.0f))
|
||||||
|
/(KQ2_tmp[k0/WARP_SIZE] + make_half2(1.0f, 1.0f));
|
||||||
|
|
||||||
|
KQ2_tmp[k0/WARP_SIZE] *= logit_softcap_2;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
half2 KQ_max_new = KQ_max_h2[j0/nwarps];
|
||||||
|
#pragma unroll
|
||||||
|
for (int k0 = 0; k0 < FATTN_KQ_STRIDE/2; k0 += WARP_SIZE) {
|
||||||
|
const int k = k0 + threadIdx.x;
|
||||||
|
|
||||||
|
KQ2_tmp[k0/WARP_SIZE] += mask ? slope2*mask2[(j*ne11 + k_VKQ_0)/2 + k] : make_half2(0.0f, 0.0f);
|
||||||
|
KQ_max_new = ggml_cuda_hmax2(KQ_max_new, KQ2_tmp[k0/WARP_SIZE]);
|
||||||
|
}
|
||||||
|
KQ_max_new = __half2half2(warp_reduce_max(ggml_cuda_hmax(__low2half(KQ_max_new), __high2half(KQ_max_new))));
|
||||||
|
const half2 diff = KQ_max_h2[j0/nwarps] - KQ_max_new;
|
||||||
|
KQ_max_scale_h2[j0/nwarps] = h2exp(diff);
|
||||||
|
const uint32_t ftz_mask = __hgt2_mask(diff, make_half2(SOFTMAX_FTZ_THRESHOLD, SOFTMAX_FTZ_THRESHOLD));
|
||||||
|
*((uint32_t *) &KQ_max_scale_h2[j0/nwarps]) &= ftz_mask;
|
||||||
|
KQ_max_h2[j0/nwarps] = KQ_max_new;
|
||||||
|
|
||||||
|
half2 KQ_rowsum_add = make_half2(0.0f, 0.0f);
|
||||||
|
#pragma unroll
|
||||||
|
for (int k0 = 0; k0 < FATTN_KQ_STRIDE/2; k0 += WARP_SIZE) {
|
||||||
|
const int k = k0 + threadIdx.x;
|
||||||
|
|
||||||
|
const half2 diff = KQ2_tmp[k0/WARP_SIZE] - KQ_max_h2[j0/nwarps];
|
||||||
|
KQ2_tmp[k0/WARP_SIZE] = h2exp(diff);
|
||||||
|
const uint32_t ftz_mask = __hgt2_mask(diff, make_half2(SOFTMAX_FTZ_THRESHOLD, SOFTMAX_FTZ_THRESHOLD));
|
||||||
|
*((uint32_t *) &KQ2_tmp[k0/WARP_SIZE]) &= ftz_mask;
|
||||||
|
KQ_rowsum_add += KQ2_tmp[k0/WARP_SIZE];
|
||||||
|
KQ2[j*(kqs_padded/2) + k] = KQ2_tmp[k0/WARP_SIZE];
|
||||||
|
}
|
||||||
|
KQ_rowsum_add = warp_reduce_sum(KQ_rowsum_add);
|
||||||
|
|
||||||
|
// Scale previous KQ_rowsum to account for a potential increase in KQ_max:
|
||||||
|
KQ_rowsum_h2[j0/nwarps] = KQ_max_scale_h2[j0/nwarps]*KQ_rowsum_h2[j0/nwarps] + KQ_rowsum_add;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
frag_b KQ_b[FATTN_KQ_STRIDE/(VKQ_ratio*16)][ncols/frag_n];
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += frag_n) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int k0 = 0; k0 < FATTN_KQ_STRIDE; k0 += VKQ_ratio*16) {
|
||||||
|
const int k = k0 + (threadIdx.y % VKQ_ratio)*16;
|
||||||
|
nvcuda::wmma::load_matrix_sync(
|
||||||
|
KQ_b[k0/(VKQ_ratio*16)][j0/frag_n],
|
||||||
|
KQ + j0*(kqar*kqs_padded) + k,
|
||||||
|
kqar*kqs_padded);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
frag_c_VKQ VKQ_c[D/VKQ_stride][ncols/frag_n];
|
||||||
|
#pragma unroll
|
||||||
|
for (int i_VKQ_0 = 0; i_VKQ_0 < D; i_VKQ_0 += VKQ_stride) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols/frag_n; ++j) {
|
||||||
|
nvcuda::wmma::fill_fragment(VKQ_c[i_VKQ_0/VKQ_stride][j], 0.0f);
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int k0 = 0; k0 < FATTN_KQ_STRIDE; k0 += VKQ_ratio*16) {
|
||||||
|
const int k = k0 + (threadIdx.y % VKQ_ratio)*16;
|
||||||
|
|
||||||
|
frag_a_V v_a;
|
||||||
|
nvcuda::wmma::load_matrix_sync(v_a, V_h + (k_VKQ_0 + k)*stride_KV + i_VKQ_0 + frag_m*(threadIdx.y/VKQ_ratio), stride_KV);
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols/frag_n; ++j) {
|
||||||
|
nvcuda::wmma::mma_sync(VKQ_c[i_VKQ_0/VKQ_stride][j], v_a, KQ_b[k0/(VKQ_ratio*16)][j], VKQ_c[i_VKQ_0/VKQ_stride][j]);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
const int offset_k = (threadIdx.y % VKQ_ratio) * (ncols*D_padded);
|
||||||
|
#pragma unroll
|
||||||
|
for (int i_KQ_0 = 0; i_KQ_0 < D; i_KQ_0 += VKQ_stride) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += frag_n) {
|
||||||
|
nvcuda::wmma::store_matrix_sync(
|
||||||
|
KQ + offset_k + j0*D_padded + i_KQ_0 + frag_m*(threadIdx.y/VKQ_ratio),
|
||||||
|
VKQ_c[i_KQ_0/VKQ_stride][j0/frag_n],
|
||||||
|
D_padded, nvcuda::wmma::mem_col_major);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
const int j = j0 + threadIdx.y;
|
||||||
|
|
||||||
|
half2 VKQ_scale;
|
||||||
|
if (std::is_same<KQ_acc_t, float>::value) {
|
||||||
|
VKQ_scale = make_half2(KQ_max_scale_f[j0/nwarps], KQ_max_scale_f[j0/nwarps]);
|
||||||
|
} else {
|
||||||
|
VKQ_scale = KQ_max_scale_h2[j0/nwarps];
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D/2; i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
if (i0 + WARP_SIZE > D/2 && i >= D/2) {
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
|
||||||
|
half2 VKQ_add = make_half2(0.0f, 0.0f);
|
||||||
|
#pragma unroll
|
||||||
|
for (int l = 0; l < VKQ_ratio; ++l) {
|
||||||
|
VKQ_add += KQ2[l*(ncols*D_padded/2) + j*(D_padded/2) + i];
|
||||||
|
}
|
||||||
|
VKQ2[j*(D_padded/2) + i] = VKQ_scale*VKQ2[j*(D_padded/2) + i] + VKQ_add;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__syncthreads();
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j0 = 0; j0 < ncols; j0 += nwarps) {
|
||||||
|
const int j_VKQ = j0 + threadIdx.y;
|
||||||
|
if (ic0 + j_VKQ >= ne01) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
const int j_dst = (ic0 + j_VKQ)*parallel_blocks + ip;
|
||||||
|
|
||||||
|
float KQ_rowsum_j;
|
||||||
|
if (std::is_same<KQ_acc_t, float>::value) {
|
||||||
|
KQ_rowsum_j = KQ_rowsum_f[j0/nwarps];
|
||||||
|
} else {
|
||||||
|
KQ_rowsum_j = __low2float(KQ_rowsum_h2[j0/nwarps]) + __high2float(KQ_rowsum_h2[j0/nwarps]);
|
||||||
|
}
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int i0 = 0; i0 < D; i0 += WARP_SIZE) {
|
||||||
|
const int i = i0 + threadIdx.x;
|
||||||
|
if (i0 + WARP_SIZE > D && i >= D) {
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
float dst_val = VKQ[j_VKQ*D_padded + i];
|
||||||
|
if (parallel_blocks == 1) {
|
||||||
|
dst_val /= KQ_rowsum_j;
|
||||||
|
}
|
||||||
|
dst[j_dst*gridDim.y*D + blockIdx.y*D + i] = dst_val;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (parallel_blocks == 1 || threadIdx.x != 0) {
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
|
||||||
|
float2 dst_meta_val;
|
||||||
|
if (std::is_same<KQ_acc_t, float>::value) {
|
||||||
|
dst_meta_val.x = KQ_max_f[j0/nwarps];
|
||||||
|
} else {
|
||||||
|
dst_meta_val.x = __low2float(KQ_max_h2[j0/nwarps]);
|
||||||
|
}
|
||||||
|
dst_meta_val.y = KQ_rowsum_j;
|
||||||
|
dst_meta[(ic0 + j_VKQ)*gridDim.y*parallel_blocks + blockIdx.y*parallel_blocks + ip] = dst_meta_val;
|
||||||
|
}
|
||||||
|
#else
|
||||||
|
NO_DEVICE_CODE;
|
||||||
|
#endif // FP16_MMA_AVAILABLE
|
||||||
|
}
|
||||||
|
|
||||||
|
constexpr int get_max_power_of_2(int x) {
|
||||||
|
return x % 2 == 0 ? 2*get_max_power_of_2(x/2) : 1;
|
||||||
|
}
|
||||||
|
|
||||||
|
static_assert(get_max_power_of_2(1) == 1, "Test failed.");
|
||||||
|
static_assert(get_max_power_of_2(2) == 2, "Test failed.");
|
||||||
|
static_assert(get_max_power_of_2(4) == 4, "Test failed.");
|
||||||
|
static_assert(get_max_power_of_2(6) == 2, "Test failed.");
|
||||||
|
|
||||||
|
// Number of VKQ rows calculated in parallel:
|
||||||
|
constexpr int get_VKQ_stride(int D, int nwarps, int frag_m) {
|
||||||
|
return (get_max_power_of_2(D/frag_m) < nwarps ? get_max_power_of_2(D/frag_m) : nwarps)*frag_m;
|
||||||
|
}
|
||||||
|
|
||||||
|
static_assert(get_VKQ_stride(128, 1, 32) == 32, "Test failed.");
|
||||||
|
static_assert(get_VKQ_stride(128, 2, 32) == 64, "Test failed.");
|
||||||
|
static_assert(get_VKQ_stride(128, 4, 32) == 128, "Test failed.");
|
||||||
|
static_assert(get_VKQ_stride( 64, 1, 32) == 32, "Test failed.");
|
||||||
|
static_assert(get_VKQ_stride( 64, 2, 32) == 64, "Test failed.");
|
||||||
|
static_assert(get_VKQ_stride( 64, 4, 32) == 64, "Test failed.");
|
||||||
|
static_assert(get_VKQ_stride( 80, 1, 16) == 16, "Test failed.");
|
||||||
|
static_assert(get_VKQ_stride( 80, 2, 16) == 16, "Test failed.");
|
||||||
|
static_assert(get_VKQ_stride( 80, 4, 16) == 16, "Test failed.");
|
||||||
|
|
||||||
|
template <int D, int cols_per_block, typename KQ_acc_t>
|
||||||
|
void ggml_cuda_flash_attn_ext_wmma_f16_case(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
const ggml_tensor * KQV = dst;
|
||||||
|
const ggml_tensor * Q = dst->src[0];
|
||||||
|
|
||||||
|
constexpr int nwarps = 4;
|
||||||
|
|
||||||
|
constexpr int frag_m = cols_per_block == 8 && D % 32 == 0 ? 32 : 16;
|
||||||
|
const int blocks_num_pb1 = ((Q->ne[1] + cols_per_block - 1) / cols_per_block)*Q->ne[2]*Q->ne[3];
|
||||||
|
const int nsm = ggml_cuda_info().devices[ggml_cuda_get_device()].nsm;
|
||||||
|
|
||||||
|
float logit_softcap;
|
||||||
|
memcpy(&logit_softcap, (const float *) KQV->op_params + 2, sizeof(float));
|
||||||
|
|
||||||
|
if (4*blocks_num_pb1 < 2*nsm) {
|
||||||
|
constexpr int parallel_blocks = 4;
|
||||||
|
fattn_kernel_t fattn_kernel;
|
||||||
|
if (logit_softcap == 0.0f) {
|
||||||
|
constexpr bool use_logit_softcap = false;
|
||||||
|
fattn_kernel = flash_attn_ext_f16<
|
||||||
|
D, cols_per_block, nwarps, get_VKQ_stride(D, nwarps, frag_m), parallel_blocks, KQ_acc_t, use_logit_softcap>;
|
||||||
|
} else {
|
||||||
|
constexpr bool use_logit_softcap = true;
|
||||||
|
fattn_kernel = flash_attn_ext_f16<
|
||||||
|
D, cols_per_block, nwarps, get_VKQ_stride(D, nwarps, frag_m), parallel_blocks, KQ_acc_t, use_logit_softcap>;
|
||||||
|
}
|
||||||
|
launch_fattn<D, parallel_blocks>(ctx, dst, fattn_kernel, nwarps, cols_per_block, true, true);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
if (2*blocks_num_pb1 < 2*nsm) {
|
||||||
|
constexpr int parallel_blocks = 2;
|
||||||
|
fattn_kernel_t fattn_kernel;
|
||||||
|
if (logit_softcap == 0.0f) {
|
||||||
|
constexpr bool use_logit_softcap = false;
|
||||||
|
fattn_kernel = flash_attn_ext_f16<
|
||||||
|
D, cols_per_block, nwarps, get_VKQ_stride(D, nwarps, frag_m), parallel_blocks, KQ_acc_t, use_logit_softcap>;
|
||||||
|
} else {
|
||||||
|
constexpr bool use_logit_softcap = true;
|
||||||
|
fattn_kernel = flash_attn_ext_f16<
|
||||||
|
D, cols_per_block, nwarps, get_VKQ_stride(D, nwarps, frag_m), parallel_blocks, KQ_acc_t, use_logit_softcap>;
|
||||||
|
}
|
||||||
|
launch_fattn<D, parallel_blocks>(ctx, dst, fattn_kernel, nwarps, cols_per_block, true, true);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
constexpr int parallel_blocks = 1;
|
||||||
|
fattn_kernel_t fattn_kernel;
|
||||||
|
if (logit_softcap == 0.0f) {
|
||||||
|
constexpr bool use_logit_softcap = false;
|
||||||
|
fattn_kernel = flash_attn_ext_f16<
|
||||||
|
D, cols_per_block, nwarps, get_VKQ_stride(D, nwarps, frag_m), parallel_blocks, KQ_acc_t, use_logit_softcap>;
|
||||||
|
} else {
|
||||||
|
constexpr bool use_logit_softcap = true;
|
||||||
|
fattn_kernel = flash_attn_ext_f16<
|
||||||
|
D, cols_per_block, nwarps, get_VKQ_stride(D, nwarps, frag_m), parallel_blocks, KQ_acc_t, use_logit_softcap>;
|
||||||
|
}
|
||||||
|
launch_fattn<D, parallel_blocks>(ctx, dst, fattn_kernel, nwarps, cols_per_block, true, true);
|
||||||
|
}
|
||||||
|
|
||||||
|
#define DECL_FATTN_WMMA_F16_CASE(D, cols_per_block, KQ_acc_t) \
|
||||||
|
template void ggml_cuda_flash_attn_ext_wmma_f16_case \
|
||||||
|
<D, cols_per_block, KQ_acc_t>(ggml_backend_cuda_context & ctx, ggml_tensor * dst) \
|
||||||
|
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE( 64, 16, float);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE( 80, 16, float);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE( 96, 16, float);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE(112, 16, float);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE(128, 16, float);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE(256, 16, float);
|
||||||
|
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE( 64, 32, float);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE( 80, 32, float);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE( 96, 32, float);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE(112, 32, float);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE(128, 32, float);
|
||||||
|
// extern DECL_FATTN_WMMA_F16_CASE(256, 16, float);
|
||||||
|
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE( 64, 8, half);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE( 96, 8, half);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE(128, 8, half);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE(256, 8, half);
|
||||||
|
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE( 64, 16, half);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE( 80, 16, half);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE( 96, 16, half);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE(112, 16, half);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE(128, 16, half);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE(256, 16, half);
|
||||||
|
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE( 64, 32, half);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE( 80, 32, half);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE( 96, 32, half);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE(112, 32, half);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE(128, 32, half);
|
||||||
|
extern DECL_FATTN_WMMA_F16_CASE(256, 16, half);

llama/ggml-cuda/fattn.cu (new file, +371 lines)
@@ -0,0 +1,371 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 * MIT License - Copyright (c) 2023-2024 The ggml authors (license text identical to the other vendored files)
 */

#include "common.cuh"
#include "fattn-common.cuh"
#include "fattn-tile-f16.cuh"
#include "fattn-tile-f32.cuh"
#include "fattn-vec-f16.cuh"
#include "fattn-vec-f32.cuh"
#include "fattn-wmma-f16.cuh"
#include "fattn.cuh"

#include <cstdint>

static void ggml_cuda_flash_attn_ext_wmma_f16(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
const ggml_tensor * KQV = dst;
|
||||||
|
const ggml_tensor * Q = dst->src[0];
|
||||||
|
|
||||||
|
const int32_t precision = KQV->op_params[3];
|
||||||
|
|
||||||
|
if (precision != GGML_PREC_DEFAULT) {
|
||||||
|
if (Q->ne[1] <= 32 || Q->ne[0] > 128) {
|
||||||
|
constexpr int cols_per_block = 16;
|
||||||
|
switch (Q->ne[0]) {
|
||||||
|
case 64:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case< 64, cols_per_block, float>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 80:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case< 80, cols_per_block, float>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 96:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case< 96, cols_per_block, float>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 112:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case<112, cols_per_block, float>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 128:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case<128, cols_per_block, float>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 256:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case<256, cols_per_block, float>(ctx, dst);
|
||||||
|
break;
|
||||||
|
default:
|
||||||
|
GGML_ABORT("fatal error");
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
constexpr int cols_per_block = 32;
|
||||||
|
switch (Q->ne[0]) {
|
||||||
|
case 64:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case< 64, cols_per_block, float>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 80:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case< 80, cols_per_block, float>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 96:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case< 96, cols_per_block, float>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 112:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case<112, cols_per_block, float>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 128:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case<128, cols_per_block, float>(ctx, dst);
|
||||||
|
break;
|
||||||
|
// case 256:
|
||||||
|
// ggml_cuda_flash_attn_ext_wmma_f16_case<128, cols_per_block, float>(ctx, dst);
|
||||||
|
// break;
|
||||||
|
default:
|
||||||
|
GGML_ABORT("fatal error");
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (Q->ne[1] <= 8 && Q->ne[0] % WARP_SIZE == 0) {
|
||||||
|
constexpr int cols_per_block = 8;
|
||||||
|
switch (Q->ne[0]) {
|
||||||
|
case 64:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case< 64, cols_per_block, half>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 96:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case< 96, cols_per_block, half>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 128:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case<128, cols_per_block, half>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 256:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case<256, cols_per_block, half>(ctx, dst);
|
||||||
|
break;
|
||||||
|
default:
|
||||||
|
GGML_ABORT("fatal error");
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (Q->ne[1] <= 32) {
|
||||||
|
constexpr int cols_per_block = 16;
|
||||||
|
switch (Q->ne[0]) {
|
||||||
|
case 64:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case< 64, cols_per_block, half>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 80:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case< 80, cols_per_block, half>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 96:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case< 96, cols_per_block, half>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 112:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case<112, cols_per_block, half>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 128:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case<128, cols_per_block, half>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 256:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case<256, cols_per_block, half>(ctx, dst);
|
||||||
|
break;
|
||||||
|
default:
|
||||||
|
GGML_ABORT("fatal error");
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
constexpr int cols_per_block = 32;
|
||||||
|
switch (Q->ne[0]) {
|
||||||
|
case 64:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case< 64, cols_per_block, half>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 80:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case< 80, cols_per_block, half>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 96:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case< 96, cols_per_block, half>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 112:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case<112, cols_per_block, half>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 128:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case<128, cols_per_block, half>(ctx, dst);
|
||||||
|
break;
|
||||||
|
case 256:
|
||||||
|
ggml_cuda_flash_attn_ext_wmma_f16_case<256, cols_per_block, half>(ctx, dst);
|
||||||
|
break;
|
||||||
|
default:
|
||||||
|
GGML_ABORT("fatal error");
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}

#define FATTN_VEC_F16_CASE(D, type_K, type_V)                               \
    if (Q->ne[0] == (D) && K->type == (type_K) && V->type == (type_V)) {    \
        ggml_cuda_flash_attn_ext_vec_f16_case<D, type_K, type_V>(ctx, dst); \
        return;                                                             \
    }                                                                       \

static void ggml_cuda_flash_attn_ext_vec_f16(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    ggml_tensor * Q = dst->src[0];
    ggml_tensor * K = dst->src[1];
    ggml_tensor * V = dst->src[2];
|
||||||
|
|
||||||
|
#ifdef GGML_CUDA_FA_ALL_QUANTS
|
||||||
|
FATTN_VEC_F16_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q4_0)
|
||||||
|
FATTN_VEC_F16_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q4_1)
|
||||||
|
FATTN_VEC_F16_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q5_0)
|
||||||
|
FATTN_VEC_F16_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q5_1)
|
||||||
|
FATTN_VEC_F16_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q8_0)
|
||||||
|
FATTN_VEC_F16_CASE( 64, GGML_TYPE_F16, GGML_TYPE_F16 )
|
||||||
|
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q4_0)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q4_0)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q4_0)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q4_0)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q4_0)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q4_0)
|
||||||
|
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q4_1)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q4_1)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q4_1)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q4_1)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q4_1)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q4_1)
|
||||||
|
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q5_0)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q5_0)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q5_0)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q5_0)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q5_0)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q5_0)
|
||||||
|
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q5_1)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q5_1)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q5_1)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q5_1)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q5_1)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q5_1)
|
||||||
|
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q8_0)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q8_0)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q8_0)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q8_0)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q8_0)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q8_0)
|
||||||
|
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_F16)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_F16)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_F16)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_F16)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_F16)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_F16)
|
||||||
|
|
||||||
|
FATTN_VEC_F16_CASE(256, GGML_TYPE_F16, GGML_TYPE_F16)
|
||||||
|
#else
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q4_0)
|
||||||
|
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q8_0)
|
||||||
|
|
||||||
|
FATTN_VEC_F16_CASE( 64, GGML_TYPE_F16, GGML_TYPE_F16)
|
||||||
|
FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_F16)
|
||||||
|
FATTN_VEC_F16_CASE(256, GGML_TYPE_F16, GGML_TYPE_F16)
|
||||||
|
#endif // GGML_CUDA_FA_ALL_QUANTS
|
||||||
|
|
||||||
|
on_no_fattn_vec_case(Q->ne[0]);
|
||||||
|
}

#define FATTN_VEC_F32_CASE(D, type_K, type_V)                               \
    if (Q->ne[0] == (D) && K->type == (type_K) && V->type == (type_V)) {    \
        ggml_cuda_flash_attn_ext_vec_f32_case<D, type_K, type_V>(ctx, dst); \
        return;                                                             \
    }                                                                       \

static void ggml_cuda_flash_attn_ext_vec_f32(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    ggml_tensor * Q = dst->src[0];
    ggml_tensor * K = dst->src[1];
    ggml_tensor * V = dst->src[2];
|
||||||
|
|
||||||
|
#ifdef GGML_CUDA_FA_ALL_QUANTS
|
||||||
|
FATTN_VEC_F32_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q4_0)
|
||||||
|
FATTN_VEC_F32_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q4_1)
|
||||||
|
FATTN_VEC_F32_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q5_0)
|
||||||
|
FATTN_VEC_F32_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q5_1)
|
||||||
|
FATTN_VEC_F32_CASE( 64, GGML_TYPE_F16, GGML_TYPE_Q8_0)
|
||||||
|
FATTN_VEC_F32_CASE( 64, GGML_TYPE_F16, GGML_TYPE_F16)
|
||||||
|
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q4_0)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q4_0)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q4_0)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q4_0)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q4_0)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q4_0)
|
||||||
|
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q4_1)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q4_1)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q4_1)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q4_1)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q4_1)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q4_1)
|
||||||
|
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q5_0)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q5_0)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q5_0)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q5_0)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q5_0)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q5_0)
|
||||||
|
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q5_1)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q5_1)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q5_1)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q5_1)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q5_1)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q5_1)
|
||||||
|
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q8_0)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_Q8_0)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_Q8_0)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_Q8_0)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q8_0)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q8_0)
|
||||||
|
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_F16)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_1, GGML_TYPE_F16)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_0, GGML_TYPE_F16)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q5_1, GGML_TYPE_F16)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_F16)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_F16, GGML_TYPE_F16)
|
||||||
|
|
||||||
|
FATTN_VEC_F32_CASE(256, GGML_TYPE_F16, GGML_TYPE_F16)
|
||||||
|
#else
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q4_0)
|
||||||
|
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_Q8_0, GGML_TYPE_Q8_0)
|
||||||
|
|
||||||
|
FATTN_VEC_F32_CASE( 64, GGML_TYPE_F16, GGML_TYPE_F16)
|
||||||
|
FATTN_VEC_F32_CASE(128, GGML_TYPE_F16, GGML_TYPE_F16)
|
||||||
|
FATTN_VEC_F32_CASE(256, GGML_TYPE_F16, GGML_TYPE_F16)
|
||||||
|
#endif // GGML_CUDA_FA_ALL_QUANTS
|
||||||
|
|
||||||
|
on_no_fattn_vec_case(Q->ne[0]);
|
||||||
|
}

void ggml_cuda_flash_attn_ext(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * KQV = dst;
    const ggml_tensor * Q   = dst->src[0];

    ggml_cuda_set_device(ctx.device);
    const int cc = ggml_cuda_info().devices[ggml_cuda_get_device()].cc;
    const int32_t precision = KQV->op_params[3];

    // On AMD the tile kernels perform poorly, use the vec kernel instead:
    if (cc >= CC_OFFSET_AMD) {
        if (precision == GGML_PREC_DEFAULT && fast_fp16_available(cc)) {
            ggml_cuda_flash_attn_ext_vec_f16(ctx, dst);
        } else {
            ggml_cuda_flash_attn_ext_vec_f32(ctx, dst);
        }
        return;
    }

    if (!fast_fp16_available(cc)) {
        if (Q->ne[1] <= 8) {
            ggml_cuda_flash_attn_ext_vec_f32(ctx, dst);
        } else {
            ggml_cuda_flash_attn_ext_tile_f32(ctx, dst);
        }
        return;
    }

    if (!fp16_mma_available(cc)) {
        if (Q->ne[1] <= 8) {
            ggml_cuda_flash_attn_ext_vec_f16(ctx, dst);
        } else {
            ggml_cuda_flash_attn_ext_tile_f16(ctx, dst);
        }
        return;
    }

    if (Q->ne[1] == 1 && Q->ne[0] % (2*WARP_SIZE) == 0) {
        if (precision == GGML_PREC_DEFAULT) {
            ggml_cuda_flash_attn_ext_vec_f16(ctx, dst);
            return;
        } else if (Q->ne[0] <= 128) {
            ggml_cuda_flash_attn_ext_vec_f32(ctx, dst);
            return;
        }
    }

    ggml_cuda_flash_attn_ext_wmma_f16(ctx, dst);
}
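The dispatch code above pulls its scalar parameters out of the destination tensor's `op_params` words: this hunk reads `logit_softcap` from index 2 and the requested precision from index 3. A small unpacking helper, written only as a sketch; the scale and max_bias slots at indices 0 and 1 are an assumption carried over from the rest of the vendored fattn code and are not shown in this diff:

#include <cstdint>
#include <cstring>

// Assumed op_params layout for the flash-attention op at this vendored commit:
//   [0] scale (float), [1] max_bias (float), [2] logit_softcap (float), [3] precision (int32).
struct fattn_params {
    float   scale;
    float   max_bias;
    float   logit_softcap;
    int32_t precision;
};

static fattn_params unpack_fattn_params(const int32_t * op_params) {
    fattn_params p{};
    std::memcpy(&p.scale,         (const float *) op_params + 0, sizeof(float));
    std::memcpy(&p.max_bias,      (const float *) op_params + 1, sizeof(float));
    std::memcpy(&p.logit_softcap, (const float *) op_params + 2, sizeof(float));
    p.precision = op_params[3];
    return p;
}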

llama/ggml-cuda/fattn.cuh (new file, +29 lines)
@@ -0,0 +1,29 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 * MIT License - Copyright (c) 2023-2024 The ggml authors (license text identical to the other vendored files)
 */

#include "common.cuh"

void ggml_cuda_flash_attn_ext(ggml_backend_cuda_context & ctx, ggml_tensor * dst);

llama/ggml-cuda/getrows.cu (new file, +203 lines)
@@ -0,0 +1,203 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 * MIT License - Copyright (c) 2023-2024 The ggml authors (license text identical to the other vendored files)
 */

#include "getrows.cuh"
#include "dequantize.cuh"

|
||||||
|
template<int qk, int qr, dequantize_kernel_t dequantize_kernel, typename dst_t>
|
||||||
|
static __global__ void k_get_rows(
|
||||||
|
const void * src0, const int32_t * src1, dst_t * dst,
|
||||||
|
int64_t ne00, /*int64_t ne01, int64_t ne02, int64_t ne03,*/
|
||||||
|
/*int64_t ne10, int64_t ne11,*/ int64_t ne12, /*int64_t ne13,*/
|
||||||
|
/*size_t s0,*/ size_t s1, size_t s2, size_t s3,
|
||||||
|
/*size_t nb00,*/ size_t nb01, size_t nb02, size_t nb03,
|
||||||
|
size_t s10, size_t s11, size_t s12/*, size_t s13*/) {
|
||||||
|
|
||||||
|
const int i00 = (blockIdx.x*blockDim.x + threadIdx.x)*2;
|
||||||
|
const int i10 = blockDim.y*blockIdx.y + threadIdx.y;
|
||||||
|
const int i11 = (blockIdx.z*blockDim.z + threadIdx.z)/ne12;
|
||||||
|
const int i12 = (blockIdx.z*blockDim.z + threadIdx.z)%ne12;
|
||||||
|
|
||||||
|
if (i00 >= ne00) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const int i01 = src1[i10*s10 + i11*s11 + i12*s12];
|
||||||
|
|
||||||
|
dst_t * dst_row = dst + i10*s1 + i11*s2 + i12*s3;
|
||||||
|
const void * src0_row = (const char *)src0 + i01*nb01 + i11*nb02 + i12*nb03;
|
||||||
|
|
||||||
|
const int ib = i00/qk; // block index
|
||||||
|
const int iqs = (i00%qk)/qr; // quant index
|
||||||
|
const int iybs = i00 - i00%qk; // dst block start index
|
||||||
|
const int y_offset = qr == 1 ? 1 : qk/2;
|
||||||
|
|
||||||
|
// dequantize
|
||||||
|
dfloat2 v;
|
||||||
|
dequantize_kernel(src0_row, ib, iqs, v);
|
||||||
|
|
||||||
|
dst_row[iybs + iqs + 0] = v.x;
|
||||||
|
dst_row[iybs + iqs + y_offset] = v.y;
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename src0_t, typename dst_t>
|
||||||
|
static __global__ void k_get_rows_float(
|
||||||
|
const src0_t * src0, const int32_t * src1, dst_t * dst,
|
||||||
|
int64_t ne00, /*int64_t ne01, int64_t ne02, int64_t ne03,*/
|
||||||
|
/*int64_t ne10, int64_t ne11,*/ int64_t ne12, /*int64_t ne13,*/
|
||||||
|
/*size_t s0,*/ size_t s1, size_t s2, size_t s3,
|
||||||
|
/*size_t nb00,*/ size_t nb01, size_t nb02, size_t nb03,
|
||||||
|
size_t s10, size_t s11, size_t s12/*, size_t s13*/) {
|
||||||
|
|
||||||
|
const int i00 = blockIdx.x*blockDim.x + threadIdx.x;
|
||||||
|
const int i10 = blockDim.y*blockIdx.y + threadIdx.y;
|
||||||
|
const int i11 = (blockIdx.z*blockDim.z + threadIdx.z)/ne12;
|
||||||
|
const int i12 = (blockIdx.z*blockDim.z + threadIdx.z)%ne12;
|
||||||
|
|
||||||
|
if (i00 >= ne00) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const int i01 = src1[i10*s10 + i11*s11 + i12*s12];
|
||||||
|
|
||||||
|
dst_t * dst_row = dst + i10*s1 + i11*s2 + i12*s3;
|
||||||
|
const src0_t * src0_row = (const src0_t *)((const char *)src0 + i01*nb01 + i11*nb02 + i12*nb03);
|
||||||
|
|
||||||
|
dst_row[i00] = src0_row[i00];
|
||||||
|
}
|
||||||
|
|
||||||
|
template<int qk, int qr, dequantize_kernel_t dq>
|
||||||
|
static void get_rows_cuda(const ggml_tensor * src0, const ggml_tensor * src1, ggml_tensor * dst,
|
||||||
|
const void * src0_dd, const int32_t * src1_dd, float * dst_dd, cudaStream_t stream) {
|
||||||
|
|
||||||
|
GGML_TENSOR_BINARY_OP_LOCALS
|
||||||
|
|
||||||
|
const dim3 block_dims(CUDA_GET_ROWS_BLOCK_SIZE, 1, 1);
|
||||||
|
const int block_num_x = (ne00 + 2*CUDA_GET_ROWS_BLOCK_SIZE - 1) / (2*CUDA_GET_ROWS_BLOCK_SIZE);
|
||||||
|
const dim3 block_nums(block_num_x, ne10, ne11*ne12);
|
||||||
|
|
||||||
|
// strides in elements
|
||||||
|
//const size_t s0 = nb0 / ggml_element_size(dst);
|
||||||
|
const size_t s1 = nb1 / ggml_element_size(dst);
|
||||||
|
const size_t s2 = nb2 / ggml_element_size(dst);
|
||||||
|
const size_t s3 = nb3 / ggml_element_size(dst);
|
||||||
|
|
||||||
|
const size_t s10 = nb10 / ggml_element_size(src1);
|
||||||
|
const size_t s11 = nb11 / ggml_element_size(src1);
|
||||||
|
const size_t s12 = nb12 / ggml_element_size(src1);
|
||||||
|
//const size_t s13 = nb13 / ggml_element_size(src1);
|
||||||
|
|
||||||
|
GGML_ASSERT(ne00 % 2 == 0);
|
||||||
|
|
||||||
|
k_get_rows<qk, qr, dq><<<block_nums, block_dims, 0, stream>>>(
|
||||||
|
src0_dd, src1_dd, dst_dd,
|
||||||
|
ne00, /*ne01, ne02, ne03,*/
|
||||||
|
/*ne10, ne11,*/ ne12, /*ne13,*/
|
||||||
|
/* s0,*/ s1, s2, s3,
|
||||||
|
/* nb00,*/ nb01, nb02, nb03,
|
||||||
|
s10, s11, s12/*, s13*/);
|
||||||
|
|
||||||
|
GGML_UNUSED(dst);
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename src0_t>
|
||||||
|
static void get_rows_cuda_float(const ggml_tensor * src0, const ggml_tensor * src1, ggml_tensor * dst,
|
||||||
|
const src0_t * src0_dd, const int32_t * src1_dd, float * dst_dd, cudaStream_t stream) {
|
||||||
|
|
||||||
|
GGML_TENSOR_BINARY_OP_LOCALS
|
||||||
|
|
||||||
|
const dim3 block_dims(CUDA_GET_ROWS_BLOCK_SIZE, 1, 1);
|
||||||
|
const int block_num_x = (ne00 + CUDA_GET_ROWS_BLOCK_SIZE - 1) / CUDA_GET_ROWS_BLOCK_SIZE;
|
||||||
|
const dim3 block_nums(block_num_x, ne10, ne11*ne12);
|
||||||
|
|
||||||
|
// strides in elements
|
||||||
|
//const size_t s0 = nb0 / ggml_element_size(dst);
|
||||||
|
const size_t s1 = nb1 / ggml_element_size(dst);
|
||||||
|
const size_t s2 = nb2 / ggml_element_size(dst);
|
||||||
|
const size_t s3 = nb3 / ggml_element_size(dst);
|
||||||
|
|
||||||
|
const size_t s10 = nb10 / ggml_element_size(src1);
|
||||||
|
const size_t s11 = nb11 / ggml_element_size(src1);
|
||||||
|
const size_t s12 = nb12 / ggml_element_size(src1);
|
||||||
|
//const size_t s13 = nb13 / ggml_element_size(src1);
|
||||||
|
|
||||||
|
k_get_rows_float<<<block_nums, block_dims, 0, stream>>>(
|
||||||
|
src0_dd, src1_dd, dst_dd,
|
||||||
|
ne00, /*ne01, ne02, ne03,*/
|
||||||
|
/*ne10, ne11,*/ ne12, /*ne13,*/
|
||||||
|
/* s0,*/ s1, s2, s3,
|
||||||
|
/* nb00,*/ nb01, nb02, nb03,
|
||||||
|
s10, s11, s12/*, s13*/);
|
||||||
|
|
||||||
|
GGML_UNUSED(dst);
|
||||||
|
}
|
||||||
|
|
||||||
|
void ggml_cuda_op_get_rows(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * src0 = dst->src[0];
    const ggml_tensor * src1 = dst->src[1];
    const float * src0_d = (const float *)src0->data;
    const float * src1_d = (const float *)src1->data;
    float * dst_d = (float *)dst->data;
    cudaStream_t stream = ctx.stream();

    GGML_ASSERT(src1->type == GGML_TYPE_I32);
    GGML_ASSERT(dst->type == GGML_TYPE_F32);

    GGML_ASSERT(src0->nb[0] == ggml_type_size(src0->type));
    GGML_ASSERT(src1->nb[0] == ggml_type_size(src1->type));
    GGML_ASSERT(dst->nb[0] == ggml_type_size(dst->type));

    const int32_t * src1_i32 = (const int32_t *) src1_d;

    switch (src0->type) {
        case GGML_TYPE_F16:
            get_rows_cuda_float(src0, src1, dst, (const half *)src0_d, src1_i32, dst_d, stream);
            break;
        case GGML_TYPE_F32:
            get_rows_cuda_float(src0, src1, dst, src0_d, src1_i32, dst_d, stream);
            break;
        case GGML_TYPE_Q4_0:
            get_rows_cuda<QK4_0, QR4_0, dequantize_q4_0>(src0, src1, dst, src0_d, src1_i32, dst_d, stream);
            break;
        case GGML_TYPE_Q4_1:
            get_rows_cuda<QK4_1, QR4_1, dequantize_q4_1>(src0, src1, dst, src0_d, src1_i32, dst_d, stream);
            break;
        case GGML_TYPE_Q5_0:
            get_rows_cuda<QK5_0, QR5_0, dequantize_q5_0>(src0, src1, dst, src0_d, src1_i32, dst_d, stream);
            break;
        case GGML_TYPE_Q5_1:
            get_rows_cuda<QK5_1, QR5_1, dequantize_q5_1>(src0, src1, dst, src0_d, src1_i32, dst_d, stream);
            break;
        case GGML_TYPE_Q8_0:
            get_rows_cuda<QK8_0, QR8_0, dequantize_q8_0>(src0, src1, dst, src0_d, src1_i32, dst_d, stream);
            break;
        default:
            // TODO: k-quants
            GGML_ABORT("%s: unsupported type: %s\n", __func__, ggml_type_name(src0->type));
            break;
    }
}
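For orientation: the get_rows kernels above implement ggml's row-gather op, where output row i is a copy of source row src1[i], and the quantized launcher dequantizes two values per thread (hence the ne00 % 2 == 0 assertion). Below is a minimal CPU sketch of the same gather, assuming plain contiguous 2-D float tensors; the function and parameter names are illustrative and are not part of ggml or of this diff.

#include <algorithm>
#include <cstdint>
#include <vector>

// dst row i = src0 row rows[i]; src0 is [nrows0, ne00], row-major.
static void get_rows_reference(const std::vector<float> & src0, int64_t ne00,
                               const std::vector<int32_t> & rows,
                               std::vector<float> & dst) {
    dst.resize(rows.size() * ne00);
    for (size_t i = 0; i < rows.size(); ++i) {
        const float * src_row = src0.data() + (int64_t) rows[i] * ne00;
        std::copy(src_row, src_row + ne00, dst.data() + (int64_t) i * ne00);
    }
}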
llama/ggml-cuda/getrows.cuh (new file, 31 lines)
@@ -0,0 +1,31 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "common.cuh"
|
||||||
|
|
||||||
|
#define CUDA_GET_ROWS_BLOCK_SIZE 256
|
||||||
|
|
||||||
|
void ggml_cuda_op_get_rows(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
llama/ggml-cuda/im2col.cu (new file, 130 lines)
@@ -0,0 +1,130 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "im2col.cuh"
|
||||||
|
|
||||||
|
template <typename T>
|
||||||
|
static __global__ void im2col_kernel(
|
||||||
|
const float * x, T * dst, int64_t batch_offset,
|
||||||
|
int64_t offset_delta, int64_t IC, int64_t IW, int64_t IH, int64_t OH, int64_t OW, int64_t KW, int64_t KH, int64_t pelements, int64_t CHW,
|
||||||
|
int s0, int s1, int p0, int p1, int d0, int d1) {
|
||||||
|
const int64_t i = threadIdx.x + blockIdx.x * blockDim.x;
|
||||||
|
if (i >= pelements) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const int64_t ksize = OW * (KH > 1 ? KW : 1);
|
||||||
|
const int64_t kx = i / ksize;
|
||||||
|
const int64_t kd = kx * ksize;
|
||||||
|
const int64_t ky = (i - kd) / OW;
|
||||||
|
const int64_t ix = i % OW;
|
||||||
|
|
||||||
|
const int64_t oh = blockIdx.y;
|
||||||
|
const int64_t batch = blockIdx.z / IC;
|
||||||
|
const int64_t ic = blockIdx.z % IC;
|
||||||
|
|
||||||
|
const int64_t iiw = ix * s0 + kx * d0 - p0;
|
||||||
|
const int64_t iih = oh * s1 + ky * d1 - p1;
|
||||||
|
|
||||||
|
const int64_t offset_dst =
|
||||||
|
((batch * OH + oh) * OW + ix) * CHW +
|
||||||
|
(ic * (KW * KH) + ky * KW + kx);
|
||||||
|
|
||||||
|
if (iih < 0 || iih >= IH || iiw < 0 || iiw >= IW) {
|
||||||
|
dst[offset_dst] = 0.0f;
|
||||||
|
} else {
|
||||||
|
const int64_t offset_src = ic * offset_delta + batch * batch_offset;
|
||||||
|
dst[offset_dst] = x[offset_src + iih * IW + iiw];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
template <typename T>
|
||||||
|
static void im2col_cuda(const float * x, T* dst,
|
||||||
|
int64_t IW, int64_t IH, int64_t OW, int64_t OH, int64_t KW, int64_t KH, int64_t IC,
|
||||||
|
int64_t batch, int64_t batch_offset, int64_t offset_delta,
|
||||||
|
int s0,int s1,int p0,int p1,int d0,int d1, cudaStream_t stream) {
|
||||||
|
const int parallel_elements = OW * KW * KH;
|
||||||
|
const int num_blocks = (parallel_elements + CUDA_IM2COL_BLOCK_SIZE - 1) / CUDA_IM2COL_BLOCK_SIZE;
|
||||||
|
dim3 block_nums(num_blocks, OH, batch * IC);
|
||||||
|
im2col_kernel<<<block_nums, CUDA_IM2COL_BLOCK_SIZE, 0, stream>>>(x, dst, batch_offset, offset_delta, IC, IW, IH, OH, OW, KW, KH, parallel_elements, (IC * KH * KW), s0, s1, p0, p1, d0, d1);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void im2col_cuda_f16(const float * x, half * dst,
|
||||||
|
int64_t IW, int64_t IH, int64_t OW, int64_t OH, int64_t KW, int64_t KH, int64_t IC,
|
||||||
|
int64_t batch, int64_t batch_offset, int64_t offset_delta,
|
||||||
|
int s0,int s1,int p0,int p1,int d0,int d1, cudaStream_t stream) {
|
||||||
|
|
||||||
|
im2col_cuda<half>(x, dst, IW, IH, OW, OH, KW, KH, IC, batch, batch_offset, offset_delta, s0, s1, p0, p1, d0, d1, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void im2col_cuda_f32(const float * x, float * dst,
|
||||||
|
int64_t IW, int64_t IH, int64_t OW, int64_t OH, int64_t KW, int64_t KH, int64_t IC,
|
||||||
|
int64_t batch, int64_t batch_offset, int64_t offset_delta,
|
||||||
|
int s0,int s1,int p0,int p1,int d0,int d1, cudaStream_t stream) {
|
||||||
|
|
||||||
|
im2col_cuda<float>(x, dst, IW, IH, OW, OH, KW, KH, IC, batch, batch_offset, offset_delta, s0, s1, p0, p1, d0, d1, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
void ggml_cuda_op_im2col(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * src0 = dst->src[0];
    const ggml_tensor * src1 = dst->src[1];
    const float * src1_d = (const float *)src1->data;
    float * dst_d = (float *)dst->data;
    cudaStream_t stream = ctx.stream();

    GGML_ASSERT(src0->type == GGML_TYPE_F16);
    GGML_ASSERT(src1->type == GGML_TYPE_F32);
    GGML_ASSERT( dst->type == GGML_TYPE_F16 || dst->type == GGML_TYPE_F32);

    const int32_t s0 = ((const int32_t*)(dst->op_params))[0];
    const int32_t s1 = ((const int32_t*)(dst->op_params))[1];
    const int32_t p0 = ((const int32_t*)(dst->op_params))[2];
    const int32_t p1 = ((const int32_t*)(dst->op_params))[3];
    const int32_t d0 = ((const int32_t*)(dst->op_params))[4];
    const int32_t d1 = ((const int32_t*)(dst->op_params))[5];

    const bool is_2D = ((const int32_t*)(dst->op_params))[6] == 1;

    const int64_t IC = src1->ne[is_2D ? 2 : 1];
    const int64_t IH = is_2D ? src1->ne[1] : 1;
    const int64_t IW = src1->ne[0];

    const int64_t KH = is_2D ? src0->ne[1] : 1;
    const int64_t KW = src0->ne[0];

    const int64_t OH = is_2D ? dst->ne[2] : 1;
    const int64_t OW = dst->ne[1];

    const size_t delta_offset = src1->nb[is_2D ? 2 : 1] / 4; // nb is byte offset, src is type float32
    const int64_t batch = src1->ne[3];
    const size_t batch_offset = src1->nb[3] / 4; // nb is byte offset, src is type float32

    if(dst->type == GGML_TYPE_F16) {
        im2col_cuda_f16(src1_d, (half *) dst_d, IW, IH, OW, OH, KW, KH, IC, batch, batch_offset, delta_offset, s0, s1, p0, p1, d0, d1, stream);
    } else {
        im2col_cuda_f32(src1_d, (float *) dst_d, IW, IH, OW, OH, KW, KH, IC, batch, batch_offset, delta_offset, s0, s1, p0, p1, d0, d1, stream);
    }
}
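The im2col kernel above lays each (KH x KW) receptive-field patch out as one contiguous group of IC*KH*KW values in dst, so a convolution can then be computed as a single matrix multiply; taps that fall into the padding region are written as zeros, matching the bounds check in im2col_kernel. The sketch below replays the same 2-D layout on the CPU, assuming a contiguous [batch, IC, IH, IW] input instead of the strided offsets used above; all names here are illustrative, not part of this diff.

#include <cstdint>
#include <vector>

// dst is [batch, OH, OW, IC*KH*KW]; out-of-range taps stay zero.
static void im2col_reference(const std::vector<float> & x, std::vector<float> & dst,
                             int64_t batch, int64_t IC, int64_t IH, int64_t IW,
                             int64_t KH, int64_t KW, int64_t OH, int64_t OW,
                             int s0, int s1, int p0, int p1, int d0, int d1) {
    dst.assign(batch * OH * OW * IC * KH * KW, 0.0f);
    for (int64_t b = 0; b < batch; ++b)
    for (int64_t ic = 0; ic < IC; ++ic)
    for (int64_t oh = 0; oh < OH; ++oh)
    for (int64_t ow = 0; ow < OW; ++ow)
    for (int64_t ky = 0; ky < KH; ++ky)
    for (int64_t kx = 0; kx < KW; ++kx) {
        const int64_t iih = oh * s1 + ky * d1 - p1; // input row of this tap
        const int64_t iiw = ow * s0 + kx * d0 - p0; // input column of this tap
        const int64_t odx = (((b * OH + oh) * OW + ow) * IC + ic) * KH * KW + ky * KW + kx;
        if (iih >= 0 && iih < IH && iiw >= 0 && iiw < IW) {
            dst[odx] = x[((b * IC + ic) * IH + iih) * IW + iiw];
        }
    }
}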
llama/ggml-cuda/im2col.cuh (new file, 31 lines)
@@ -0,0 +1,31 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "common.cuh"
|
||||||
|
|
||||||
|
#define CUDA_IM2COL_BLOCK_SIZE 256
|
||||||
|
|
||||||
|
void ggml_cuda_op_im2col(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
llama/ggml-cuda/mma.cuh (new file, 247 lines)
@@ -0,0 +1,247 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "common.cuh"
|
||||||
|
|
||||||
|
struct mma_int_A_I16K4 {
|
||||||
|
static constexpr int I = 16;
|
||||||
|
static constexpr int K = 4;
|
||||||
|
static constexpr int ne = 2;
|
||||||
|
|
||||||
|
int x[ne] = {0};
|
||||||
|
|
||||||
|
static __device__ __forceinline__ int get_i(const int l) {
|
||||||
|
const int ret = (l%2) * (I/2) + threadIdx.x / K;
|
||||||
|
GGML_CUDA_ASSUME(ret >= 0);
|
||||||
|
GGML_CUDA_ASSUME(ret < I);
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ int get_k(const int /* l */) {
|
||||||
|
const int ret = threadIdx.x % K;
|
||||||
|
GGML_CUDA_ASSUME(ret >= 0);
|
||||||
|
GGML_CUDA_ASSUME(ret < K);
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
__device__ __forceinline__ void load(const int * __restrict__ xs0, const int & stride) {
|
||||||
|
#if defined(INT8_MMA_AVAILABLE)
|
||||||
|
const int * xs = xs0 + (threadIdx.x%I)*stride;
|
||||||
|
asm("ldmatrix.sync.aligned.m8n8.x2.b16 {%0, %1}, [%2];"
|
||||||
|
: "+r"(x[0]), "+r"(x[1])
|
||||||
|
: "l"(xs));
|
||||||
|
#else
|
||||||
|
#pragma unroll
|
||||||
|
for (int l = 0; l < ne; ++l) {
|
||||||
|
x[l] = xs0[get_i(l)*stride + get_k(l)];
|
||||||
|
}
|
||||||
|
#endif // defined(INT8_MMA_AVAILABLE)
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
struct mma_int_A_I16K8 {
|
||||||
|
static constexpr int I = 16;
|
||||||
|
static constexpr int K = 8;
|
||||||
|
static constexpr int ne = 4;
|
||||||
|
|
||||||
|
int x[ne] = {0};
|
||||||
|
|
||||||
|
static __device__ __forceinline__ int get_i(const int l) {
|
||||||
|
const int ret = (l%2) * (I/2) + threadIdx.x / (K/2);
|
||||||
|
GGML_CUDA_ASSUME(ret >= 0);
|
||||||
|
GGML_CUDA_ASSUME(ret < I);
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ int get_k(const int l) {
|
||||||
|
const int ret = (l/2) * (K/2) + threadIdx.x % (K/2);
|
||||||
|
GGML_CUDA_ASSUME(ret >= 0);
|
||||||
|
GGML_CUDA_ASSUME(ret < K);
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
__device__ __forceinline__ void load(const int * __restrict__ xs0, const int & stride) {
|
||||||
|
#if defined(INT8_MMA_AVAILABLE)
|
||||||
|
const int * xs = xs0 + (threadIdx.x%I)*stride + (threadIdx.x/I)*(K/2);
|
||||||
|
asm("ldmatrix.sync.aligned.m8n8.x4.b16 {%0, %1, %2, %3}, [%4];"
|
||||||
|
: "+r"(x[0]), "+r"(x[1]), "+r"(x[2]), "+r"(x[3])
|
||||||
|
: "l"(xs));
|
||||||
|
#else
|
||||||
|
#pragma unroll
|
||||||
|
for (int l = 0; l < ne; ++l) {
|
||||||
|
x[l] = xs0[get_i(l)*stride + get_k(l)];
|
||||||
|
}
|
||||||
|
#endif // defined(INT8_MMA_AVAILABLE)
|
||||||
|
}
|
||||||
|
|
||||||
|
__device__ __forceinline__ void load_low(const int * __restrict__ xs0, const int & stride) {
|
||||||
|
((mma_int_A_I16K4 *) x)[0].load(xs0, stride);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
struct mma_int_B_J8K4 {
|
||||||
|
static constexpr int J = 8;
|
||||||
|
static constexpr int K = 4;
|
||||||
|
static constexpr int ne = 1;
|
||||||
|
|
||||||
|
int x[ne] = {0};
|
||||||
|
|
||||||
|
static __device__ __forceinline__ int get_j(const int /* l */) {
|
||||||
|
const int ret = threadIdx.x / K;
|
||||||
|
GGML_CUDA_ASSUME(ret >= 0);
|
||||||
|
GGML_CUDA_ASSUME(ret < J);
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ int get_k(const int /* l */) {
|
||||||
|
const int ret = threadIdx.x % K;
|
||||||
|
GGML_CUDA_ASSUME(ret >= 0);
|
||||||
|
GGML_CUDA_ASSUME(ret < K);
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
__device__ __forceinline__ void load(const int * __restrict__ xs0, const int & stride) {
|
||||||
|
#if defined(INT8_MMA_AVAILABLE) && false // Loading as 4 byte values is faster
|
||||||
|
const int * xs = xs0 + (threadIdx.x%J)*stride;
|
||||||
|
asm("ldmatrix.sync.aligned.m8n8.x1.b16 {%0}, [%1];"
|
||||||
|
: "+r"(x[0])
|
||||||
|
: "l"(xs));
|
||||||
|
#else
|
||||||
|
#pragma unroll
|
||||||
|
for (int l = 0; l < ne; ++l) {
|
||||||
|
x[l] = xs0[get_j(l)*stride + get_k(l)];
|
||||||
|
}
|
||||||
|
#endif // defined(INT8_MMA_AVAILABLE)
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
struct mma_int_B_J8K8 {
|
||||||
|
static constexpr int J = 8;
|
||||||
|
static constexpr int K = 8;
|
||||||
|
static constexpr int ne = 2;
|
||||||
|
|
||||||
|
int x[ne] = {0};
|
||||||
|
|
||||||
|
static __device__ __forceinline__ int get_j(const int /* l */) {
|
||||||
|
const int ret = threadIdx.x / (K/2);
|
||||||
|
GGML_CUDA_ASSUME(ret >= 0);
|
||||||
|
GGML_CUDA_ASSUME(ret < J);
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ int get_k(const int l) {
|
||||||
|
const int ret = l * (K/2) + threadIdx.x % (K/2);
|
||||||
|
GGML_CUDA_ASSUME(ret >= 0);
|
||||||
|
GGML_CUDA_ASSUME(ret < K);
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
__device__ __forceinline__ void load(const int * __restrict__ xs0, const int & stride) {
|
||||||
|
#if defined(INT8_MMA_AVAILABLE) && false // Loading as 4 byte values is faster
|
||||||
|
const int * xs = xs0 + (threadIdx.x%J)*stride + ((threadIdx.x/J)*(K/2)) % K;
|
||||||
|
asm("ldmatrix.sync.aligned.m8n8.x2.b16 {%0, %1}, [%2];"
|
||||||
|
: "+r"(x[0]), "+r"(x[1])
|
||||||
|
: "l"(xs));
|
||||||
|
#else
|
||||||
|
#pragma unroll
|
||||||
|
for (int l = 0; l < ne; ++l) {
|
||||||
|
x[l] = xs0[get_j(l)*stride + get_k(l)];
|
||||||
|
}
|
||||||
|
#endif // defined(INT8_MMA_AVAILABLE)
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
struct mma_int_C_I16J8 {
|
||||||
|
static constexpr int I = 16;
|
||||||
|
static constexpr int J = 8;
|
||||||
|
static constexpr int ne = 4;
|
||||||
|
|
||||||
|
int x[ne] = {0};
|
||||||
|
|
||||||
|
static __device__ __forceinline__ int get_i(const int l) {
|
||||||
|
const int ret = (l/2) * (I/2) + threadIdx.x / (J/2);
|
||||||
|
GGML_CUDA_ASSUME(ret >= 0);
|
||||||
|
GGML_CUDA_ASSUME(ret < I);
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
static __device__ __forceinline__ int get_j(const int l) {
|
||||||
|
const int ret = 2 * (threadIdx.x % (J/2)) + l%2;
|
||||||
|
GGML_CUDA_ASSUME(ret >= 0);
|
||||||
|
GGML_CUDA_ASSUME(ret < J);
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
|
__device__ __forceinline__ void mma_K4(const mma_int_A_I16K4 & mma_A, const mma_int_B_J8K4 & mma_B) {
|
||||||
|
#ifdef INT8_MMA_AVAILABLE
|
||||||
|
#if __CUDA_ARCH__ >= CC_AMPERE
|
||||||
|
asm("mma.sync.aligned.m16n8k16.row.col.s32.s8.s8.s32 {%0, %1, %2, %3}, {%4, %5}, {%6}, {%0, %1, %2, %3};"
|
||||||
|
: "+r"(x[0]), "+r"(x[1]), "+r"(x[2]), "+r"(x[3])
|
||||||
|
: "r"(mma_A.x[0]), "r"(mma_A.x[1]), "r"(mma_B.x[0]));
|
||||||
|
#else
|
||||||
|
// On Turing m16n8k16 mma is not available, use 2x m8n8k16 mma instead:
|
||||||
|
asm("mma.sync.aligned.m8n8k16.row.col.s32.s8.s8.s32 {%0, %1}, {%2}, {%3}, {%0, %1};"
|
||||||
|
: "+r"(x[0]), "+r"(x[1])
|
||||||
|
: "r"(mma_A.x[0]), "r"(mma_B.x[0]));
|
||||||
|
asm("mma.sync.aligned.m8n8k16.row.col.s32.s8.s8.s32 {%0, %1}, {%2}, {%3}, {%0, %1};"
|
||||||
|
: "+r"(x[2]), "+r"(x[3])
|
||||||
|
: "r"(mma_A.x[1]), "r"(mma_B.x[0]));
|
||||||
|
#endif // __CUDA_ARCH__ >= CC_AMPERE
|
||||||
|
#else
|
||||||
|
GGML_UNUSED(mma_A);
|
||||||
|
GGML_UNUSED(mma_B);
|
||||||
|
NO_DEVICE_CODE;
|
||||||
|
#endif // INT8_MMA_AVAILABLE
|
||||||
|
}
|
||||||
|
|
||||||
|
__device__ __forceinline__ void mma_K8(const mma_int_A_I16K8 & mma_A, const mma_int_B_J8K8 & mma_B) {
|
||||||
|
#ifdef INT8_MMA_AVAILABLE
|
||||||
|
#if __CUDA_ARCH__ >= CC_AMPERE
|
||||||
|
asm("mma.sync.aligned.m16n8k32.row.col.s32.s8.s8.s32 {%0, %1, %2, %3}, {%4, %5, %6, %7}, {%8, %9}, {%0, %1, %2, %3};"
|
||||||
|
: "+r"(x[0]), "+r"(x[1]), "+r"(x[2]), "+r"(x[3])
|
||||||
|
: "r"(mma_A.x[0]), "r"(mma_A.x[1]), "r"(mma_A.x[2]), "r"(mma_A.x[3]), "r"(mma_B.x[0]), "r"(mma_B.x[1]));
|
||||||
|
#else
|
||||||
|
// On Turing m16n8k32 mma is not available, use 4x m8n8k16 mma instead:
|
||||||
|
asm("mma.sync.aligned.m8n8k16.row.col.s32.s8.s8.s32 {%0, %1}, {%2}, {%3}, {%0, %1};"
|
||||||
|
: "+r"(x[0]), "+r"(x[1])
|
||||||
|
: "r"(mma_A.x[0]), "r"(mma_B.x[0]));
|
||||||
|
asm("mma.sync.aligned.m8n8k16.row.col.s32.s8.s8.s32 {%0, %1}, {%2}, {%3}, {%0, %1};"
|
||||||
|
: "+r"(x[2]), "+r"(x[3])
|
||||||
|
: "r"(mma_A.x[1]), "r"(mma_B.x[0]));
|
||||||
|
asm("mma.sync.aligned.m8n8k16.row.col.s32.s8.s8.s32 {%0, %1}, {%2}, {%3}, {%0, %1};"
|
||||||
|
: "+r"(x[0]), "+r"(x[1])
|
||||||
|
: "r"(mma_A.x[2]), "r"(mma_B.x[1]));
|
||||||
|
asm("mma.sync.aligned.m8n8k16.row.col.s32.s8.s8.s32 {%0, %1}, {%2}, {%3}, {%0, %1};"
|
||||||
|
: "+r"(x[2]), "+r"(x[3])
|
||||||
|
: "r"(mma_A.x[3]), "r"(mma_B.x[1]));
|
||||||
|
#endif // __CUDA_ARCH__ >= CC_AMPERE
|
||||||
|
#else
|
||||||
|
GGML_UNUSED(mma_A);
|
||||||
|
GGML_UNUSED(mma_B);
|
||||||
|
NO_DEVICE_CODE;
|
||||||
|
#endif // INT8_MMA_AVAILABLE
|
||||||
|
}
|
||||||
|
};
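mma.cuh wraps the PTX ldmatrix/mma.sync tensor-core instructions: each struct describes one fragment shape (for example 16x8 for the int32 accumulator), how many registers each of the 32 lanes holds (ne), and the get_i/get_j/get_k mappings from a lane's register index to coordinates in the logical tile; the same mapping doubles as the scalar fallback load when INT8_MMA_AVAILABLE is not defined. The host-side sketch below replays the mma_int_C_I16J8 ownership map with a loop variable in place of threadIdx.x; it is illustrative only and not part of this diff.

#include <cstdio>

int main() {
    const int I = 16, J = 8, ne = 4; // accumulator tile shape and registers per lane
    for (int lane = 0; lane < 32; ++lane) {
        for (int l = 0; l < ne; ++l) {
            const int i = (l / 2) * (I / 2) + lane / (J / 2); // same arithmetic as get_i(l)
            const int j = 2 * (lane % (J / 2)) + l % 2;       // same arithmetic as get_j(l)
            printf("lane %2d reg %d -> C[%2d][%d]\n", lane, l, i, j);
        }
    }
    return 0;
}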
llama/ggml-cuda/mmq.cu (new file, 176 lines)
@@ -0,0 +1,176 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "mmq.cuh"
|
||||||
|
|
||||||
|
void ggml_cuda_op_mul_mat_q(
|
||||||
|
ggml_backend_cuda_context & ctx,
|
||||||
|
const ggml_tensor * src0, const ggml_tensor * src1, ggml_tensor * dst, const char * src0_dd_i, const float * src1_ddf_i,
|
||||||
|
const char * src1_ddq_i, float * dst_dd_i, const int64_t row_low, const int64_t row_high, const int64_t src1_ncols,
|
||||||
|
const int64_t src1_padded_row_size, cudaStream_t stream) {
|
||||||
|
|
||||||
|
const int64_t ne00 = src0->ne[0];
|
||||||
|
|
||||||
|
const int64_t nb01 = src0->nb[1];
|
||||||
|
|
||||||
|
const int64_t ne10 = src1->ne[0];
|
||||||
|
const int64_t ne11 = src1->ne[1];
|
||||||
|
GGML_ASSERT(ne10 % QK8_1 == 0);
|
||||||
|
|
||||||
|
const int64_t ne0 = dst->ne[0];
|
||||||
|
|
||||||
|
const int64_t row_diff = row_high - row_low;
|
||||||
|
const int64_t stride00 = nb01 / ggml_type_size(src0->type);
|
||||||
|
|
||||||
|
int id = ggml_cuda_get_device();
|
||||||
|
const int compute_capability = ggml_cuda_info().devices[id].cc;
|
||||||
|
|
||||||
|
// the main device has a larger memory buffer to hold the results from all GPUs
|
||||||
|
// nrows_dst == nrows of the matrix that the kernel writes into
|
||||||
|
const int64_t nrows_dst = id == ctx.device ? ne0 : row_diff;
|
||||||
|
|
||||||
|
const mmq_args args = {src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, stride00, src1_padded_row_size, src1_ncols, ne11, nrows_dst};
|
||||||
|
|
||||||
|
switch (src0->type) {
|
||||||
|
case GGML_TYPE_Q4_0:
|
||||||
|
mul_mat_q_case<GGML_TYPE_Q4_0>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q4_1:
|
||||||
|
mul_mat_q_case<GGML_TYPE_Q4_1>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q5_0:
|
||||||
|
mul_mat_q_case<GGML_TYPE_Q5_0>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q5_1:
|
||||||
|
mul_mat_q_case<GGML_TYPE_Q5_1>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q8_0:
|
||||||
|
mul_mat_q_case<GGML_TYPE_Q8_0>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q2_K:
|
||||||
|
mul_mat_q_case<GGML_TYPE_Q2_K>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q3_K:
|
||||||
|
mul_mat_q_case<GGML_TYPE_Q3_K>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q4_K:
|
||||||
|
mul_mat_q_case<GGML_TYPE_Q4_K>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q5_K:
|
||||||
|
mul_mat_q_case<GGML_TYPE_Q5_K>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q6_K:
|
||||||
|
mul_mat_q_case<GGML_TYPE_Q6_K>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ2_XXS:
|
||||||
|
mul_mat_q_case<GGML_TYPE_IQ2_XXS>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ2_XS:
|
||||||
|
mul_mat_q_case<GGML_TYPE_IQ2_XS>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ2_S:
|
||||||
|
mul_mat_q_case<GGML_TYPE_IQ2_S>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ3_XXS:
|
||||||
|
mul_mat_q_case<GGML_TYPE_IQ3_XXS>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ3_S:
|
||||||
|
mul_mat_q_case<GGML_TYPE_IQ3_S>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ1_S:
|
||||||
|
mul_mat_q_case<GGML_TYPE_IQ1_S>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ4_XS:
|
||||||
|
mul_mat_q_case<GGML_TYPE_IQ4_XS>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ4_NL:
|
||||||
|
mul_mat_q_case<GGML_TYPE_IQ4_NL>(ctx, args, stream);
|
||||||
|
break;
|
||||||
|
default:
|
||||||
|
GGML_ABORT("fatal error");
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
|
||||||
|
GGML_UNUSED(src1);
|
||||||
|
GGML_UNUSED(dst);
|
||||||
|
GGML_UNUSED(src1_ddf_i);
|
||||||
|
}
|
||||||
|
|
||||||
|
bool ggml_cuda_should_use_mmq(enum ggml_type type, int cc, int64_t ne11) {
#ifdef GGML_CUDA_FORCE_CUBLAS
    return false;
#endif // GGML_CUDA_FORCE_CUBLAS

    bool mmq_supported;

    switch (type) {
        case GGML_TYPE_Q4_0:
        case GGML_TYPE_Q4_1:
        case GGML_TYPE_Q5_0:
        case GGML_TYPE_Q5_1:
        case GGML_TYPE_Q8_0:
        case GGML_TYPE_Q2_K:
        case GGML_TYPE_Q3_K:
        case GGML_TYPE_Q4_K:
        case GGML_TYPE_Q5_K:
        case GGML_TYPE_Q6_K:
        case GGML_TYPE_IQ2_XXS:
        case GGML_TYPE_IQ2_XS:
        case GGML_TYPE_IQ2_S:
        case GGML_TYPE_IQ3_XXS:
        case GGML_TYPE_IQ3_S:
        case GGML_TYPE_IQ1_S:
        case GGML_TYPE_IQ4_XS:
        case GGML_TYPE_IQ4_NL:
            mmq_supported = true;
            break;
        default:
            mmq_supported = false;
            break;
    }

    if (!mmq_supported) {
        return false;
    }

    if (int8_mma_available(cc)) {
        return true;
    }

    if (cc < MIN_CC_DP4A) {
        return false;
    }

#ifdef GGML_CUDA_FORCE_MMQ
    return true;
#endif //GGML_CUDA_FORCE_MMQ

    if (cc < CC_OFFSET_AMD) {
        return cc < CC_VOLTA || ne11 < MMQ_DP4A_MAX_BATCH_SIZE;
    }

    return cc < CC_RDNA3 || ne11 < MMQ_DP4A_MAX_BATCH_SIZE;
}
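In short, ggml_cuda_should_use_mmq chooses between the quantized MMQ kernels and the cuBLAS path from three inputs: the quantization type, the device compute capability, and the batch size ne11. For example, barring GGML_CUDA_FORCE_CUBLAS, supported types on a GPU with int8 tensor-core MMA always take MMQ; on a DP4A-only NVIDIA device MMQ is used for pre-Volta parts or when ne11 stays below MMQ_DP4A_MAX_BATCH_SIZE; unsupported types fall back to cuBLAS. A sketch of how a caller could feed it (the wrapper below is hypothetical; the real dispatch lives elsewhere in ggml-cuda):

// Hypothetical helper: src1->ne[1] is the number of activation columns (ne11).
static bool pick_mmq(const ggml_tensor * src0, const ggml_tensor * src1, int cc) {
    return ggml_cuda_should_use_mmq(src0->type, cc, src1->ne[1]);
}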
llama/ggml-cuda/mmq.cuh (new file, 2962 lines; diff suppressed because it is too large)
llama/ggml-cuda/mmvq.cu (new file, 451 lines)
@@ -0,0 +1,451 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "mmvq.cuh"
|
||||||
|
#include "vecdotq.cuh"
|
||||||
|
|
||||||
|
typedef float (*vec_dot_q_cuda_t)(const void * __restrict__ vbq, const block_q8_1 * __restrict__ bq8_1, const int & kbx, const int & iqs);
|
||||||
|
|
||||||
|
static constexpr __device__ vec_dot_q_cuda_t get_vec_dot_q_cuda(ggml_type type) {
|
||||||
|
return type == GGML_TYPE_Q4_0 ? vec_dot_q4_0_q8_1 :
|
||||||
|
type == GGML_TYPE_Q4_1 ? vec_dot_q4_1_q8_1 :
|
||||||
|
type == GGML_TYPE_Q5_0 ? vec_dot_q5_0_q8_1 :
|
||||||
|
type == GGML_TYPE_Q5_1 ? vec_dot_q5_1_q8_1 :
|
||||||
|
type == GGML_TYPE_Q8_0 ? vec_dot_q8_0_q8_1 :
|
||||||
|
type == GGML_TYPE_Q2_K ? vec_dot_q2_K_q8_1 :
|
||||||
|
type == GGML_TYPE_Q3_K ? vec_dot_q3_K_q8_1 :
|
||||||
|
type == GGML_TYPE_Q4_K ? vec_dot_q4_K_q8_1 :
|
||||||
|
type == GGML_TYPE_Q5_K ? vec_dot_q5_K_q8_1 :
|
||||||
|
type == GGML_TYPE_Q6_K ? vec_dot_q6_K_q8_1 :
|
||||||
|
type == GGML_TYPE_IQ2_XXS ? vec_dot_iq2_xxs_q8_1 :
|
||||||
|
type == GGML_TYPE_IQ2_XS ? vec_dot_iq2_xs_q8_1 :
|
||||||
|
type == GGML_TYPE_IQ2_S ? vec_dot_iq2_s_q8_1 :
|
||||||
|
type == GGML_TYPE_IQ3_XXS ? vec_dot_iq3_xxs_q8_1 :
|
||||||
|
type == GGML_TYPE_IQ1_S ? vec_dot_iq1_s_q8_1 :
|
||||||
|
type == GGML_TYPE_IQ1_M ? vec_dot_iq1_m_q8_1 :
|
||||||
|
type == GGML_TYPE_IQ4_NL ? vec_dot_iq4_nl_q8_1 :
|
||||||
|
type == GGML_TYPE_IQ4_XS ? vec_dot_iq4_xs_q8_1 :
|
||||||
|
type == GGML_TYPE_IQ3_S ? vec_dot_iq3_s_q8_1 :
|
||||||
|
nullptr;
|
||||||
|
}
|
||||||
|
|
||||||
|
static constexpr __device__ int get_vdr_mmvq(ggml_type type) {
|
||||||
|
return type == GGML_TYPE_Q4_0 ? VDR_Q4_0_Q8_1_MMVQ :
|
||||||
|
type == GGML_TYPE_Q4_1 ? VDR_Q4_1_Q8_1_MMVQ :
|
||||||
|
type == GGML_TYPE_Q5_0 ? VDR_Q5_0_Q8_1_MMVQ :
|
||||||
|
type == GGML_TYPE_Q5_1 ? VDR_Q5_1_Q8_1_MMVQ :
|
||||||
|
type == GGML_TYPE_Q8_0 ? VDR_Q8_0_Q8_1_MMVQ :
|
||||||
|
type == GGML_TYPE_Q2_K ? VDR_Q2_K_Q8_1_MMVQ :
|
||||||
|
type == GGML_TYPE_Q3_K ? VDR_Q3_K_Q8_1_MMVQ :
|
||||||
|
type == GGML_TYPE_Q4_K ? VDR_Q4_K_Q8_1_MMVQ :
|
||||||
|
type == GGML_TYPE_Q5_K ? VDR_Q5_K_Q8_1_MMVQ :
|
||||||
|
type == GGML_TYPE_Q6_K ? VDR_Q6_K_Q8_1_MMVQ :
|
||||||
|
type == GGML_TYPE_IQ2_XXS ? VDR_IQ2_XXS_Q8_1_MMVQ :
|
||||||
|
type == GGML_TYPE_IQ2_XS ? VDR_IQ2_XS_Q8_1_MMVQ :
|
||||||
|
type == GGML_TYPE_IQ2_S ? VDR_IQ2_S_Q8_1_MMVQ :
|
||||||
|
type == GGML_TYPE_IQ3_XXS ? VDR_IQ3_XXS_Q8_1_MMVQ :
|
||||||
|
type == GGML_TYPE_IQ3_S ? VDR_IQ3_S_Q8_1_MMVQ :
|
||||||
|
type == GGML_TYPE_IQ4_NL ? VDR_IQ4_NL_Q8_1_MMVQ :
|
||||||
|
type == GGML_TYPE_IQ4_XS ? VDR_IQ4_XS_Q8_1_MMVQ :
|
||||||
|
1;
|
||||||
|
}
|
||||||
|
|
||||||
|
template <ggml_type type, int ncols_y>
|
||||||
|
#if !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__))
|
||||||
|
// tell the compiler to use as many registers as it wants, see nwarps definition below
|
||||||
|
__launch_bounds__((ncols_y <= 4 ? 4 : 2)*WARP_SIZE, 1)
|
||||||
|
#endif // !(defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__))
|
||||||
|
static __global__ void mul_mat_vec_q(
|
||||||
|
const void * __restrict__ vx, const void * __restrict__ vy, float * __restrict__ dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int nrows_dst) {
|
||||||
|
|
||||||
|
constexpr int qk = ggml_cuda_type_traits<type>::qk;
|
||||||
|
constexpr int qi = ggml_cuda_type_traits<type>::qi;
|
||||||
|
constexpr int vdr = get_vdr_mmvq(type);
|
||||||
|
|
||||||
|
constexpr vec_dot_q_cuda_t vec_dot_q_cuda = get_vec_dot_q_cuda(type);
|
||||||
|
|
||||||
|
#if defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__) && (defined(RDNA2) || defined(RDNA3))
|
||||||
|
constexpr int nwarps = 1;
|
||||||
|
constexpr int rows_per_cuda_block = 1;
|
||||||
|
#else
|
||||||
|
constexpr int nwarps = ncols_y <= 4 ? 4 : 2;
|
||||||
|
constexpr int rows_per_cuda_block = ncols_y == 1 ? 1 : 2;
|
||||||
|
#endif // defined(GGML_USE_HIPBLAS) && defined(__HIP_PLATFORM_AMD__) && !defined(RDNA2) && !defined(RDNA3)
|
||||||
|
|
||||||
|
const int tid = WARP_SIZE*threadIdx.y + threadIdx.x;
|
||||||
|
const int row0 = rows_per_cuda_block*blockIdx.x;
|
||||||
|
const int blocks_per_row_x = ncols_x / qk;
|
||||||
|
const int blocks_per_col_y = nrows_y / QK8_1;
|
||||||
|
constexpr int blocks_per_iter = vdr * nwarps*WARP_SIZE / qi;
|
||||||
|
|
||||||
|
// partial sum for each thread
|
||||||
|
float tmp[ncols_y][rows_per_cuda_block] = {0.0f};
|
||||||
|
|
||||||
|
const block_q8_1 * y = (const block_q8_1 *) vy;
|
||||||
|
|
||||||
|
for (int kbx = tid / (qi/vdr); kbx < blocks_per_row_x; kbx += blocks_per_iter) {
|
||||||
|
const int kby = kbx * (qk/QK8_1); // y block index that aligns with kbx
|
||||||
|
|
||||||
|
// x block quant index when casting the quants to int
|
||||||
|
const int kqs = vdr * (tid % (qi/vdr));
|
||||||
|
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols_y; ++j) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int i = 0; i < rows_per_cuda_block; ++i) {
|
||||||
|
tmp[j][i] += vec_dot_q_cuda(vx, &y[j*blocks_per_col_y + kby], (row0 + i)*blocks_per_row_x + kbx, kqs);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
__shared__ float tmp_shared[nwarps-1 > 0 ? nwarps-1 : 1][ncols_y][rows_per_cuda_block][WARP_SIZE];
|
||||||
|
if (threadIdx.y > 0) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols_y; ++j) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int i = 0; i < rows_per_cuda_block; ++i) {
|
||||||
|
tmp_shared[threadIdx.y-1][j][i][threadIdx.x] = tmp[j][i];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
__syncthreads();
|
||||||
|
if (threadIdx.y > 0) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
// sum up partial sums and write back result
|
||||||
|
#pragma unroll
|
||||||
|
for (int j = 0; j < ncols_y; ++j) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int i = 0; i < rows_per_cuda_block; ++i) {
|
||||||
|
#pragma unroll
|
||||||
|
for (int l = 0; l < nwarps-1; ++l) {
|
||||||
|
tmp[j][i] += tmp_shared[l][j][i][threadIdx.x];
|
||||||
|
}
|
||||||
|
tmp[j][i] = warp_reduce_sum(tmp[j][i]);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (threadIdx.x < rows_per_cuda_block && (rows_per_cuda_block == 1 || row0 + threadIdx.x < nrows_dst)) {
|
||||||
|
dst[j*nrows_dst + row0 + threadIdx.x] = tmp[j][threadIdx.x];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
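Each thread of the kernel above accumulates partial dot products in registers; warps other than warp 0 then spill their partials to shared memory, and warp 0 combines them and calls warp_reduce_sum before writing the output rows. For reference, the butterfly shuffle pattern behind such a warp reduction looks like the sketch below; the real helper lives in common.cuh, and this standalone version simply assumes a 32-lane warp.

// Standalone sketch of a 32-lane butterfly sum, the pattern used by warp_reduce_sum().
static __device__ float warp_sum_sketch(float v) {
#pragma unroll
    for (int offset = 16; offset > 0; offset >>= 1) {
        v += __shfl_xor_sync(0xffffffff, v, offset); // every lane ends up with the full sum
    }
    return v;
}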
template <ggml_type type>
|
||||||
|
static void mul_mat_vec_q_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
GGML_ASSERT(ncols_x % ggml_blck_size(type) == 0);
|
||||||
|
GGML_ASSERT(ncols_y <= MMVQ_MAX_BATCH_SIZE);
|
||||||
|
|
||||||
|
int id = ggml_cuda_get_device();
|
||||||
|
|
||||||
|
int64_t nwarps = 1;
|
||||||
|
int64_t rows_per_cuda_block = 1;
|
||||||
|
|
||||||
|
if (ggml_cuda_info().devices[id].cc < CC_RDNA2) { // NVIDIA and AMD older than RDNA2
|
||||||
|
switch(ncols_y) {
|
||||||
|
case 1:
|
||||||
|
nwarps = 4;
|
||||||
|
rows_per_cuda_block = 1;
|
||||||
|
break;
|
||||||
|
case 2:
|
||||||
|
case 3:
|
||||||
|
case 4:
|
||||||
|
nwarps = 4;
|
||||||
|
rows_per_cuda_block = 2;
|
||||||
|
break;
|
||||||
|
case 5:
|
||||||
|
case 6:
|
||||||
|
case 7:
|
||||||
|
case 8:
|
||||||
|
nwarps = 2;
|
||||||
|
rows_per_cuda_block = 2;
|
||||||
|
break;
|
||||||
|
default:
|
||||||
|
GGML_ABORT("fatal error");
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
const int64_t nblocks = (nrows_x + rows_per_cuda_block - 1) / rows_per_cuda_block;
|
||||||
|
const dim3 block_nums(nblocks, 1, 1);
|
||||||
|
const dim3 block_dims(WARP_SIZE, nwarps, 1);
|
||||||
|
|
||||||
|
switch (ncols_y) {
|
||||||
|
case 1:
|
||||||
|
mul_mat_vec_q<type, 1><<<block_nums, block_dims, 0, stream>>>(vx, vy, dst, ncols_x, nrows_x, nrows_y, nrows_dst);
|
||||||
|
break;
|
||||||
|
case 2:
|
||||||
|
mul_mat_vec_q<type, 2><<<block_nums, block_dims, 0, stream>>>(vx, vy, dst, ncols_x, nrows_x, nrows_y, nrows_dst);
|
||||||
|
break;
|
||||||
|
case 3:
|
||||||
|
mul_mat_vec_q<type, 3><<<block_nums, block_dims, 0, stream>>>(vx, vy, dst, ncols_x, nrows_x, nrows_y, nrows_dst);
|
||||||
|
break;
|
||||||
|
case 4:
|
||||||
|
mul_mat_vec_q<type, 4><<<block_nums, block_dims, 0, stream>>>(vx, vy, dst, ncols_x, nrows_x, nrows_y, nrows_dst);
|
||||||
|
break;
|
||||||
|
case 5:
|
||||||
|
mul_mat_vec_q<type, 5><<<block_nums, block_dims, 0, stream>>>(vx, vy, dst, ncols_x, nrows_x, nrows_y, nrows_dst);
|
||||||
|
break;
|
||||||
|
case 6:
|
||||||
|
mul_mat_vec_q<type, 6><<<block_nums, block_dims, 0, stream>>>(vx, vy, dst, ncols_x, nrows_x, nrows_y, nrows_dst);
|
||||||
|
break;
|
||||||
|
case 7:
|
||||||
|
mul_mat_vec_q<type, 7><<<block_nums, block_dims, 0, stream>>>(vx, vy, dst, ncols_x, nrows_x, nrows_y, nrows_dst);
|
||||||
|
break;
|
||||||
|
case 8:
|
||||||
|
mul_mat_vec_q<type, 8><<<block_nums, block_dims, 0, stream>>>(vx, vy, dst, ncols_x, nrows_x, nrows_y, nrows_dst);
|
||||||
|
break;
|
||||||
|
default:
|
||||||
|
GGML_ABORT("fatal error");
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_q4_0_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_Q4_0>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_q4_1_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_Q4_1>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_q5_0_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_Q5_0>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_q5_1_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_Q5_1>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_q8_0_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_Q8_0>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_q2_K_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_Q2_K>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_q3_K_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_Q3_K>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_q4_K_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_Q4_K>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_q5_K_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_Q5_K>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_q6_K_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_Q6_K>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_iq2_xxs_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_IQ2_XXS>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_iq2_xs_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_IQ2_XS>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_iq2_s_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_IQ2_S>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_iq3_xxs_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_IQ3_XXS>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_iq1_s_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_IQ1_S>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_iq1_m_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_IQ1_M>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_iq4_nl_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_IQ4_NL>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_iq4_xs_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_IQ4_XS>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mul_mat_vec_iq3_s_q8_1_cuda(
|
||||||
|
const void * vx, const void * vy, float * dst,
|
||||||
|
const int ncols_x, const int nrows_x, const int nrows_y, const int ncols_y, const int nrows_dst, cudaStream_t stream) {
|
||||||
|
|
||||||
|
mul_mat_vec_q_cuda<GGML_TYPE_IQ3_S>(vx, vy, dst, ncols_x, nrows_x, nrows_y, ncols_y, nrows_dst, stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
void ggml_cuda_op_mul_mat_vec_q(
|
||||||
|
ggml_backend_cuda_context & ctx,
|
||||||
|
const ggml_tensor * src0, const ggml_tensor * src1, ggml_tensor * dst, const char * src0_dd_i, const float * src1_ddf_i,
|
||||||
|
const char * src1_ddq_i, float * dst_dd_i, const int64_t row_low, const int64_t row_high, const int64_t src1_ncols,
|
||||||
|
const int64_t src1_padded_row_size, cudaStream_t stream) {
|
||||||
|
|
||||||
|
const int64_t ne00 = src0->ne[0];
|
||||||
|
const int64_t row_diff = row_high - row_low;
|
||||||
|
|
||||||
|
const int64_t ne10 = src1->ne[0];
|
||||||
|
GGML_ASSERT(ne10 % QK8_1 == 0);
|
||||||
|
|
||||||
|
const int64_t ne0 = dst->ne[0];
|
||||||
|
|
||||||
|
int id = ggml_cuda_get_device();
|
||||||
|
|
||||||
|
// the main device has a larger memory buffer to hold the results from all GPUs
|
||||||
|
// nrows_dst == nrows of the matrix that the kernel writes into
|
||||||
|
const int64_t nrows_dst = id == ctx.device ? ne0 : row_diff;
|
||||||
|
|
||||||
|
switch (src0->type) {
|
||||||
|
case GGML_TYPE_Q4_0:
|
||||||
|
mul_mat_vec_q4_0_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q4_1:
|
||||||
|
mul_mat_vec_q4_1_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q5_0:
|
||||||
|
mul_mat_vec_q5_0_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q5_1:
|
||||||
|
mul_mat_vec_q5_1_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q8_0:
|
||||||
|
mul_mat_vec_q8_0_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q2_K:
|
||||||
|
mul_mat_vec_q2_K_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q3_K:
|
||||||
|
mul_mat_vec_q3_K_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q4_K:
|
||||||
|
mul_mat_vec_q4_K_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q5_K:
|
||||||
|
mul_mat_vec_q5_K_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_Q6_K:
|
||||||
|
mul_mat_vec_q6_K_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ2_XXS:
|
||||||
|
mul_mat_vec_iq2_xxs_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ2_XS:
|
||||||
|
mul_mat_vec_iq2_xs_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ2_S:
|
||||||
|
mul_mat_vec_iq2_s_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ3_XXS:
|
||||||
|
mul_mat_vec_iq3_xxs_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ1_S:
|
||||||
|
mul_mat_vec_iq1_s_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ1_M:
|
||||||
|
mul_mat_vec_iq1_m_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ4_NL:
|
||||||
|
mul_mat_vec_iq4_nl_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ4_XS:
|
||||||
|
mul_mat_vec_iq4_xs_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
case GGML_TYPE_IQ3_S:
|
||||||
|
mul_mat_vec_iq3_s_q8_1_cuda(src0_dd_i, src1_ddq_i, dst_dd_i, ne00, row_diff, src1_padded_row_size, src1_ncols, nrows_dst, stream);
|
||||||
|
break;
|
||||||
|
default:
|
||||||
|
GGML_ABORT("fatal error");
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
|
||||||
|
GGML_UNUSED(src1);
|
||||||
|
GGML_UNUSED(dst);
|
||||||
|
GGML_UNUSED(src1_ddf_i);
|
||||||
|
GGML_UNUSED(src1_ncols);
|
||||||
|
GGML_UNUSED(src1_padded_row_size);
|
||||||
|
}
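The MMVQ path only handles narrow activation matrices (ncols_y up to MMVQ_MAX_BATCH_SIZE = 8, see mmvq.cuh below). On NVIDIA and pre-RDNA2 AMD devices, mul_mat_vec_q_cuda derives the launch shape from ncols_y: one column uses 4 warps and 1 output row per block, 2 to 4 columns use 4 warps and 2 rows, and 5 to 8 columns use 2 warps and 2 rows. The host-side sketch below reproduces that choice for each supported ncols_y; the names are local to this example and not part of the diff.

#include <cstdint>
#include <cstdio>

int main() {
    const int64_t nrows_x = 4096; // example: rows of the quantized weight matrix
    for (int ncols_y = 1; ncols_y <= 8; ++ncols_y) {
        const int nwarps              = ncols_y <= 4 ? 4 : 2; // mirrors the switch above
        const int rows_per_cuda_block = ncols_y == 1 ? 1 : 2;
        const int64_t nblocks = (nrows_x + rows_per_cuda_block - 1) / rows_per_cuda_block;
        printf("ncols_y=%d -> grid=(%lld,1,1), block=(WARP_SIZE,%d,1)\n",
               ncols_y, (long long) nblocks, nwarps);
    }
    return 0;
}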
llama/ggml-cuda/mmvq.cuh (new file, 35 lines)
@@ -0,0 +1,35 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include "common.cuh"
|
||||||
|
|
||||||
|
#define MMVQ_MAX_BATCH_SIZE 8 // Max. batch size for which to use MMVQ kernels.
|
||||||
|
|
||||||
|
void ggml_cuda_op_mul_mat_vec_q(
|
||||||
|
ggml_backend_cuda_context & ctx,
|
||||||
|
const ggml_tensor * src0, const ggml_tensor * src1, ggml_tensor * dst, const char * src0_dd_i, const float * src1_ddf_i,
|
||||||
|
const char * src1_ddq_i, float * dst_dd_i, const int64_t row_low, const int64_t row_high, const int64_t src1_ncols,
|
||||||
|
const int64_t src1_padded_row_size, cudaStream_t stream);
llama/ggml-cuda/norm.cu (new file, 250 lines)
@@ -0,0 +1,250 @@
|
||||||
|
/**
|
||||||
|
* llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
|
||||||
|
*
|
||||||
|
* MIT License
|
||||||
|
*
|
||||||
|
* Copyright (c) 2023-2024 The ggml authors
|
||||||
|
*
|
||||||
|
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||||
|
* of this software and associated documentation files (the "Software"), to deal
|
||||||
|
* in the Software without restriction, including without limitation the rights
|
||||||
|
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||||
|
* copies of the Software, and to permit persons to whom the Software is
|
||||||
|
* furnished to do so, subject to the following conditions:
|
||||||
|
*
|
||||||
|
* The above copyright notice and this permission notice shall be included in all
|
||||||
|
* copies or substantial portions of the Software.
|
||||||
|
*
|
||||||
|
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||||
|
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||||
|
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||||
|
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||||
|
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||||
|
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||||
|
* SOFTWARE.
|
||||||
|
*/

#include "norm.cuh"

template <int block_size>
static __global__ void norm_f32(const float * x, float * dst, const int ncols, const float eps) {
    const int row = blockIdx.x*blockDim.y + threadIdx.y;
    const int tid = threadIdx.x;

    float2 mean_var = make_float2(0.f, 0.f);

    for (int col = tid; col < ncols; col += block_size) {
        const float xi = x[row*ncols + col];
        mean_var.x += xi;
        mean_var.y += xi * xi;
    }

    // sum up partial sums
    mean_var = warp_reduce_sum(mean_var);
    if (block_size > WARP_SIZE) {
        __shared__ float2 s_sum[32];
        int warp_id = threadIdx.x / WARP_SIZE;
        int lane_id = threadIdx.x % WARP_SIZE;
        if (lane_id == 0) {
            s_sum[warp_id] = mean_var;
        }
        __syncthreads();
        mean_var = s_sum[lane_id];
        mean_var = warp_reduce_sum(mean_var);
    }

    const float mean = mean_var.x / ncols;
    const float var = mean_var.y / ncols - mean * mean;
    const float inv_std = rsqrtf(var + eps);

    for (int col = tid; col < ncols; col += block_size) {
        dst[row*ncols + col] = (x[row*ncols + col] - mean) * inv_std;
    }
}

template <int block_size>
static __global__ void group_norm_f32(const float * x, float * dst, const int group_size, const int ne_elements, const float eps) {
    // blockIdx.x: num_groups idx
    // threadIdx.x: block_size idx
    int start = blockIdx.x * group_size;
    int end = start + group_size;

    start += threadIdx.x;

    if (end >= ne_elements) {
        end = ne_elements;
    }

    float tmp = 0.0f; // partial sum for thread in warp

    for (int j = start; j < end; j += block_size) {
        tmp += x[j];
    }

    tmp = warp_reduce_sum(tmp);
    if (block_size > WARP_SIZE) {
        __shared__ float s_sum[32];
        int warp_id = threadIdx.x / WARP_SIZE;
        int lane_id = threadIdx.x % WARP_SIZE;
        if (lane_id == 0) {
            s_sum[warp_id] = tmp;
        }
        __syncthreads();
        tmp = s_sum[lane_id];
        tmp = warp_reduce_sum(tmp);
    }

    float mean = tmp / group_size;
    tmp = 0.0f;

    for (int j = start; j < end; j += block_size) {
        float xi = x[j] - mean;
        dst[j] = xi;
        tmp += xi * xi;
    }

    tmp = warp_reduce_sum(tmp);
    if (block_size > WARP_SIZE) {
        __shared__ float s_sum[32];
        int warp_id = threadIdx.x / WARP_SIZE;
        int lane_id = threadIdx.x % WARP_SIZE;
        if (lane_id == 0) {
            s_sum[warp_id] = tmp;
        }
        __syncthreads();
        tmp = s_sum[lane_id];
        tmp = warp_reduce_sum(tmp);
    }

    float variance = tmp / group_size;
    float scale = rsqrtf(variance + eps);
    for (int j = start; j < end; j += block_size) {
        dst[j] *= scale;
    }
}

template <int block_size>
static __global__ void rms_norm_f32(const float * x, float * dst, const int ncols, const float eps) {
    const int row = blockIdx.x*blockDim.y + threadIdx.y;
    const int tid = threadIdx.x;

    float tmp = 0.0f; // partial sum for thread in warp

    for (int col = tid; col < ncols; col += block_size) {
        const float xi = x[row*ncols + col];
        tmp += xi * xi;
    }

    // sum up partial sums
    tmp = warp_reduce_sum(tmp);
    if (block_size > WARP_SIZE) {
        __shared__ float s_sum[32];
        int warp_id = threadIdx.x / WARP_SIZE;
        int lane_id = threadIdx.x % WARP_SIZE;
        if (lane_id == 0) {
            s_sum[warp_id] = tmp;
        }
        __syncthreads();
        tmp = s_sum[lane_id];
        tmp = warp_reduce_sum(tmp);
    }

    const float mean = tmp / ncols;
    const float scale = rsqrtf(mean + eps);

    for (int col = tid; col < ncols; col += block_size) {
        dst[row*ncols + col] = scale * x[row*ncols + col];
    }
}

static void norm_f32_cuda(const float * x, float * dst, const int ncols, const int nrows, const float eps, cudaStream_t stream) {
    GGML_ASSERT(ncols % WARP_SIZE == 0);
    if (ncols < 1024) {
        const dim3 block_dims(WARP_SIZE, 1, 1);
        norm_f32<WARP_SIZE><<<nrows, block_dims, 0, stream>>>(x, dst, ncols, eps);
    } else {
        const dim3 block_dims(1024, 1, 1);
        norm_f32<1024><<<nrows, block_dims, 0, stream>>>(x, dst, ncols, eps);
    }
}

static void group_norm_f32_cuda(const float * x, float * dst, const int num_groups, const float eps, const int group_size, const int ne_elements, cudaStream_t stream) {
    if (group_size < 1024) {
        const dim3 block_dims(WARP_SIZE, 1, 1);
        group_norm_f32<WARP_SIZE><<<num_groups, block_dims, 0, stream>>>(x, dst, group_size, ne_elements, eps);
    } else {
        const dim3 block_dims(1024, 1, 1);
        group_norm_f32<1024><<<num_groups, block_dims, 0, stream>>>(x, dst, group_size, ne_elements, eps);
    }
}

static void rms_norm_f32_cuda(const float * x, float * dst, const int ncols, const int nrows, const float eps, cudaStream_t stream) {
    GGML_ASSERT(ncols % WARP_SIZE == 0);
    if (ncols < 1024) {
        const dim3 block_dims(WARP_SIZE, 1, 1);
        rms_norm_f32<WARP_SIZE><<<nrows, block_dims, 0, stream>>>(x, dst, ncols, eps);
    } else {
        const dim3 block_dims(1024, 1, 1);
        rms_norm_f32<1024><<<nrows, block_dims, 0, stream>>>(x, dst, ncols, eps);
    }
}

void ggml_cuda_op_norm(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * src0 = dst->src[0];
    const float * src0_d = (const float *)src0->data;
    float * dst_d = (float *)dst->data;
    cudaStream_t stream = ctx.stream();

    GGML_ASSERT(ggml_is_contiguous(src0));

    GGML_ASSERT(src0->type == GGML_TYPE_F32);
    GGML_ASSERT( dst->type == GGML_TYPE_F32);

    const int64_t ne00 = src0->ne[0];
    const int64_t nrows = ggml_nrows(src0);

    float eps;
    memcpy(&eps, dst->op_params, sizeof(float));

    norm_f32_cuda(src0_d, dst_d, ne00, nrows, eps, stream);
}

void ggml_cuda_op_group_norm(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * src0 = dst->src[0];
    const float * src0_d = (const float *)src0->data;
    float * dst_d = (float *)dst->data;
    cudaStream_t stream = ctx.stream();

    GGML_ASSERT(ggml_is_contiguous(src0));

    GGML_ASSERT(src0->type == GGML_TYPE_F32);
    GGML_ASSERT( dst->type == GGML_TYPE_F32);

    int num_groups = dst->op_params[0];

    float eps;
    memcpy(&eps, dst->op_params + 1, sizeof(float));

    int group_size = src0->ne[0] * src0->ne[1] * ((src0->ne[2] + num_groups - 1) / num_groups);
    group_norm_f32_cuda(src0_d, dst_d, num_groups * src0->ne[3], eps, group_size, ggml_nelements(src0), stream);
}

void ggml_cuda_op_rms_norm(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * src0 = dst->src[0];
    const float * src0_d = (const float *)src0->data;
    float * dst_d = (float *)dst->data;
    cudaStream_t stream = ctx.stream();

    GGML_ASSERT(ggml_is_contiguous(src0));

    GGML_ASSERT(src0->type == GGML_TYPE_F32);
    GGML_ASSERT( dst->type == GGML_TYPE_F32);

    const int64_t ne00 = src0->ne[0];
    const int64_t nrows = ggml_nrows(src0);

    float eps;
    memcpy(&eps, dst->op_params, sizeof(float));

    rms_norm_f32_cuda(src0_d, dst_d, ne00, nrows, eps, stream);
}
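
Editor's note (not part of the vendored sources): a minimal host-side C sketch of the per-row math that rms_norm_f32 above computes with a block/warp reduction; the function name rms_norm_ref and the harness in main are hypothetical and only illustrate the formula scale = 1/sqrt(mean(x^2) + eps).

// Reference math for rms_norm_f32, single row, no parallelism.
#include <math.h>
#include <stdio.h>

static void rms_norm_ref(const float *x, float *dst, int ncols, float eps) {
    float sum = 0.0f;
    for (int i = 0; i < ncols; ++i) {
        sum += x[i] * x[i];                    // sum of squares over the row
    }
    const float scale = 1.0f / sqrtf(sum / ncols + eps);
    for (int i = 0; i < ncols; ++i) {
        dst[i] = scale * x[i];                 // scale each element by 1/rms
    }
}

int main(void) {
    float x[8] = {1, 2, 3, 4, 5, 6, 7, 8}, y[8];
    rms_norm_ref(x, y, 8, 1e-6f);
    printf("%f %f\n", y[0], y[7]);
    return 0;
}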
llama/ggml-cuda/norm.cuh (new file, 33 lines)
@@ -0,0 +1,33 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License - Copyright (c) 2023-2024 The ggml authors
 * (standard MIT license text, identical to the header of every other vendored file in this PR)
 */

#include "common.cuh"

void ggml_cuda_op_norm(ggml_backend_cuda_context & ctx, ggml_tensor * dst);

void ggml_cuda_op_group_norm(ggml_backend_cuda_context & ctx, ggml_tensor * dst);

void ggml_cuda_op_rms_norm(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
llama/ggml-cuda/pad.cu (new file, 75 lines)
@@ -0,0 +1,75 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License - Copyright (c) 2023-2024 The ggml authors
 * (standard MIT license text, identical to the header of every other vendored file in this PR)
 */

#include "pad.cuh"

static __global__ void pad_f32(const float * x, float * dst, const int ne0, const int ne00, const int ne01, const int ne02, const int ne03) {
    // blockIdx.z: idx of ne2*ne3, aka ne02*ne03
    // blockIdx.y: idx of ne1
    // blockIDx.x: idx of ne0 / BLOCK_SIZE
    int nidx = threadIdx.x + blockIdx.x * blockDim.x;
    if (nidx >= ne0) {
        return;
    }

    // operation
    int offset_dst =
        nidx +
        blockIdx.y * ne0 +
        blockIdx.z * ne0 * gridDim.y;
    if (nidx < ne00 && blockIdx.y < ne01 && blockIdx.z < ne02*ne03) {
        int offset_src =
            nidx +
            blockIdx.y * ne00 +
            blockIdx.z * ne00 * ne01;
        dst[offset_dst] = x[offset_src];
    } else {
        dst[offset_dst] = 0.0f;
    }
}

static void pad_f32_cuda(const float * x, float * dst,
    const int ne00, const int ne01, const int ne02, const int ne03,
    const int ne0, const int ne1, const int ne2, const int ne3, cudaStream_t stream) {
    int num_blocks = (ne0 + CUDA_PAD_BLOCK_SIZE - 1) / CUDA_PAD_BLOCK_SIZE;
    dim3 gridDim(num_blocks, ne1, ne2*ne3);
    pad_f32<<<gridDim, CUDA_PAD_BLOCK_SIZE, 0, stream>>>(x, dst, ne0, ne00, ne01, ne02, ne03);
}

void ggml_cuda_op_pad(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * src0 = dst->src[0];
    const float * src0_d = (const float *)src0->data;
    float * dst_d = (float *)dst->data;
    cudaStream_t stream = ctx.stream();

    GGML_ASSERT(src0->type == GGML_TYPE_F32);
    GGML_ASSERT(dst->type == GGML_TYPE_F32);
    GGML_ASSERT(src0->ne[3] == 1 && dst->ne[3] == 1); // just 3D tensors

    pad_f32_cuda(src0_d, dst_d,
        src0->ne[0], src0->ne[1], src0->ne[2], src0->ne[3],
        dst->ne[0], dst->ne[1], dst->ne[2], dst->ne[3], stream);
}
llama/ggml-cuda/pad.cuh (new file, 31 lines)
@@ -0,0 +1,31 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License - Copyright (c) 2023-2024 The ggml authors
 * (standard MIT license text, identical to the header of every other vendored file in this PR)
 */

#include "common.cuh"

#define CUDA_PAD_BLOCK_SIZE 256

void ggml_cuda_op_pad(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
llama/ggml-cuda/pool2d.cu (new file, 120 lines)
@@ -0,0 +1,120 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License - Copyright (c) 2023-2024 The ggml authors
 * (standard MIT license text, identical to the header of every other vendored file in this PR)
 */

#include "pool2d.cuh"

template <typename Ti, typename To>
static __global__ void pool2d_nchw_kernel(
        const int ih, const int iw, const int oh, const int ow,
        const int kh, const int kw, const int sh, const int sw,
        const int ph, const int pw, const int parallel_elements,
        const Ti* src, To* dst, const enum ggml_op_pool op) {
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    if (idx >= parallel_elements) {
        return;
    }

    const int I_HW = ih * iw;
    const int O_HW = oh * ow;
    const int nc = idx / O_HW;
    const int cur_oh = idx % O_HW / ow;
    const int cur_ow = idx % O_HW % ow;
    const Ti* i_ptr = src + nc * I_HW;
    To* o_ptr = dst + nc * O_HW;
    const int start_h = cur_oh * sh - ph;
    const int bh = max(0, start_h);
    const int eh = min(ih, start_h + kh);
    const int start_w = cur_ow * sw - pw;
    const int bw = max(0, start_w);
    const int ew = min(iw, start_w + kw);
    const To scale = 1. / (kh * kw);
    To res = 0;

    switch (op) {
        case GGML_OP_POOL_AVG: res = 0; break;
        case GGML_OP_POOL_MAX: res = -FLT_MAX; break;
        default: assert(false);
    }

    for (int i = bh; i < eh; i += 1) {
        for (int j = bw; j < ew; j += 1) {
#if __CUDA_ARCH__ >= 350
            Ti cur = __ldg(i_ptr + i * iw + j);
#else
            Ti cur = i_ptr[i * iw + j];
#endif
            switch (op) {
                case GGML_OP_POOL_AVG: res += cur * scale; break;
                case GGML_OP_POOL_MAX: res = max(res, (To)cur); break;
                default: assert(false);
            }
        }
    }
    o_ptr[cur_oh * ow + cur_ow] = res;
}

static void pool2d_nchw_kernel_f32_f32_cuda(
        const int ih, const int iw, const int oh, const int ow,
        const int kh, const int kw, const int sh, const int sw,
        const int ph, const int pw, const int parallel_elements,
        const float * src, float * dst, const enum ggml_op_pool op,
        cudaStream_t stream) {

    const int num_blocks = (parallel_elements + CUDA_POOL2D_BLOCK_SIZE - 1) / CUDA_POOL2D_BLOCK_SIZE;
    dim3 block_nums(num_blocks);
    pool2d_nchw_kernel<<<block_nums, CUDA_POOL2D_BLOCK_SIZE, 0, stream>>>(ih, iw, oh, ow, kh, kw, sh, sw, ph, pw, parallel_elements, src, dst, op);
}

void ggml_cuda_op_pool2d(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * src0 = dst->src[0];
    const float * src0_d = (const float *)src0->data;
    float * dst_d = (float *)dst->data;
    cudaStream_t stream = ctx.stream();

    GGML_ASSERT(src0->type == GGML_TYPE_F32);
    GGML_ASSERT( dst->type == GGML_TYPE_F32);

    const int32_t * opts = (const int32_t *)dst->op_params;
    enum ggml_op_pool op = static_cast<ggml_op_pool>(opts[0]);
    const int k0 = opts[1];
    const int k1 = opts[2];
    const int s0 = opts[3];
    const int s1 = opts[4];
    const int p0 = opts[5];
    const int p1 = opts[6];

    const int64_t IH = src0->ne[1];
    const int64_t IW = src0->ne[0];

    const int64_t N = dst->ne[3];
    const int64_t OC = dst->ne[2];
    const int64_t OH = dst->ne[1];
    const int64_t OW = dst->ne[0];

    const int parallel_elements = N * OC * OH * OW;

    pool2d_nchw_kernel_f32_f32_cuda(IH, IW, OH, OW, k1, k0, s1, s0, p1, p0, parallel_elements, src0_d, dst_d, op, stream);
}
llama/ggml-cuda/pool2d.cuh (new file, 31 lines)
@@ -0,0 +1,31 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License - Copyright (c) 2023-2024 The ggml authors
 * (standard MIT license text, identical to the header of every other vendored file in this PR)
 */

#include "common.cuh"

#define CUDA_POOL2D_BLOCK_SIZE 256

void ggml_cuda_op_pool2d(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
llama/ggml-cuda/quantize.cu (new file, 195 lines)
@@ -0,0 +1,195 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License - Copyright (c) 2023-2024 The ggml authors
 * (standard MIT license text, identical to the header of every other vendored file in this PR)
 */

#include "quantize.cuh"
#include <cstdint>

static __global__ void quantize_q8_1(const float * __restrict__ x, void * __restrict__ vy, const int64_t kx, const int64_t kx0_padded) {
    const int64_t ix0 = (int64_t)blockDim.x*blockIdx.x + threadIdx.x;

    if (ix0 >= kx0_padded) {
        return;
    }

    const int64_t ix1 = blockIdx.y;

    const int64_t i_padded = ix1*kx0_padded + ix0;

    block_q8_1 * y = (block_q8_1 *) vy;

    const int64_t ib = i_padded / QK8_1; // block index
    const int64_t iqs = i_padded % QK8_1; // quant index

    const float xi = ix0 < kx ? x[ix1*kx + ix0] : 0.0f;
    float amax = fabsf(xi);
    float sum = xi;

    amax = warp_reduce_max(amax);
    sum = warp_reduce_sum(sum);

    const float d = amax / 127;
    const int8_t q = amax == 0.0f ? 0 : roundf(xi / d);

    y[ib].qs[iqs] = q;

    if (iqs > 0) {
        return;
    }

    reinterpret_cast<half&>(y[ib].ds.x) = d;
    reinterpret_cast<half&>(y[ib].ds.y) = sum;
}

template <mmq_q8_1_ds_layout ds_layout>
static __global__ void quantize_mmq_q8_1(
    const float * __restrict__ x, void * __restrict__ vy, const int64_t kx0, const int64_t kx1, const int64_t kx0_padded) {

    constexpr int vals_per_scale = ds_layout == MMQ_Q8_1_DS_LAYOUT_D2S6 ? 64 : 32;
    constexpr int vals_per_sum = ds_layout == MMQ_Q8_1_DS_LAYOUT_D2S6 ? 16 : 32;

    const int64_t ix0 = ((int64_t)blockDim.x*blockIdx.x + threadIdx.x)*4;

    if (ix0 >= kx0_padded) {
        return;
    }

    const float4 * x4 = (const float4 *) x;

    const int64_t ix1 = kx1*blockIdx.z + blockIdx.y;

    block_q8_1_mmq * y = (block_q8_1_mmq *) vy;

    const int64_t ib0 = blockIdx.z*((int64_t)gridDim.y*gridDim.x*blockDim.x/QK8_1); // first block of channel
    const int64_t ib = ib0 + (ix0 / (4*QK8_1))*kx1 + blockIdx.y; // block index in channel
    const int64_t iqs = ix0 % (4*QK8_1); // quant index in block

    // Load 4 floats per thread and calculate max. abs. value between them:
    const float4 xi = ix0 < kx0 ? x4[(ix1*kx0 + ix0)/4] : make_float4(0.0f, 0.0f, 0.0f, 0.0f);
    float amax = fabsf(xi.x);
    amax = fmaxf(amax, fabsf(xi.y));
    amax = fmaxf(amax, fabsf(xi.z));
    amax = fmaxf(amax, fabsf(xi.w));

    // Exchange max. abs. value between vals_per_scale/4 threads.
#pragma unroll
    for (int mask = vals_per_scale/8; mask > 0; mask >>= 1) {
        amax = fmaxf(amax, __shfl_xor_sync(0xFFFFFFFF, amax, mask, WARP_SIZE));
    }

    float sum;
    if (ds_layout != MMQ_Q8_1_DS_LAYOUT_D4) {
        sum = xi.x + xi.y + xi.z + xi.w;

        // Exchange calculate sum across vals_per_sum/4 threads.
#pragma unroll
        for (int mask = vals_per_sum/8; mask > 0; mask >>= 1) {
            sum += __shfl_xor_sync(0xFFFFFFFF, sum, mask, WARP_SIZE);
        }
    }

    const float d_inv = 127.0f / amax;
    char4 q;
    q.x = roundf(xi.x*d_inv);
    q.y = roundf(xi.y*d_inv);
    q.z = roundf(xi.z*d_inv);
    q.w = roundf(xi.w*d_inv);

    // Write back 4 int8 values as a single 32 bit value for better memroy bandwidth:
    char4 * yqs4 = (char4 *) y[ib].qs;
    yqs4[iqs/4] = q;

    if (ds_layout == MMQ_Q8_1_DS_LAYOUT_D2S6) {
        if (iqs % 16 != 0 || iqs >= 96) {
            return;
        }

        y[ib].d2s6[2 + iqs/16] = sum;

        if (iqs % 64 != 0) {
            return;
        }

        const float d = 1.0f / d_inv;

        y[ib].d2s6[iqs/64] = d;

        return;
    }

    if (iqs % 32 != 0) {
        return;
    }

    const float d = 1.0f / d_inv;

    if (ds_layout == MMQ_Q8_1_DS_LAYOUT_DS4) {
        y[ib].ds4[iqs/32] = make_half2(d, sum);
    } else {
        y[ib].d4[iqs/32] = d;
    }
}

void quantize_row_q8_1_cuda(
    const float * x, void * vy, const int64_t kx0, const int64_t kx1, const int64_t channels,
    const int64_t kx0_padded, const ggml_type type_x, cudaStream_t stream) {

    GGML_ASSERT(kx0_padded % QK8_1 == 0);

    const int64_t block_num_x = (kx0_padded + CUDA_QUANTIZE_BLOCK_SIZE - 1) / CUDA_QUANTIZE_BLOCK_SIZE;
    const dim3 num_blocks(block_num_x, kx1*channels, 1);
    const dim3 block_size(CUDA_QUANTIZE_BLOCK_SIZE, 1, 1);
    quantize_q8_1<<<num_blocks, block_size, 0, stream>>>(x, vy, kx0, kx0_padded);

    GGML_UNUSED(type_x);
}

void quantize_mmq_q8_1_cuda(
    const float * x, void * vy, const int64_t kx0, const int64_t kx1, const int64_t channels,
    const int64_t kx0_padded, const ggml_type type_x, cudaStream_t stream) {

    GGML_ASSERT(kx0_padded % (4*QK8_1) == 0);

    const int64_t block_num_x = (kx0_padded + 4*CUDA_QUANTIZE_BLOCK_SIZE_MMQ - 1) / (4*CUDA_QUANTIZE_BLOCK_SIZE_MMQ);
    const dim3 num_blocks(block_num_x, kx1, channels);
    const dim3 block_size(CUDA_QUANTIZE_BLOCK_SIZE_MMQ, 1, 1);
    switch (mmq_get_q8_1_ds_layout(type_x)) {
        case MMQ_Q8_1_DS_LAYOUT_D4:
            quantize_mmq_q8_1<MMQ_Q8_1_DS_LAYOUT_D4>
                <<<num_blocks, block_size, 0, stream>>>(x, vy, kx0, kx1, kx0_padded);
            break;
        case MMQ_Q8_1_DS_LAYOUT_DS4:
            quantize_mmq_q8_1<MMQ_Q8_1_DS_LAYOUT_DS4>
                <<<num_blocks, block_size, 0, stream>>>(x, vy, kx0, kx1, kx0_padded);
            break;
        case MMQ_Q8_1_DS_LAYOUT_D2S6:
            quantize_mmq_q8_1<MMQ_Q8_1_DS_LAYOUT_D2S6>
                <<<num_blocks, block_size, 0, stream>>>(x, vy, kx0, kx1, kx0_padded);
            break;
        default:
            GGML_ABORT("fatal error");
            break;
    }
}
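
Editor's note (not part of the vendored sources): a minimal CPU-side C sketch of the per-block q8_1 math that quantize_q8_1 above performs per warp on the GPU; quantize_block_q8_1_ref and the harness in main are hypothetical names added only for illustration, while QK8_1, the 127-based scale and the stored row sum mirror the kernel.

// Reference math for one 32-value q8_1 block: d = amax/127, q[i] = round(x[i]/d).
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define QK8_1 32

static void quantize_block_q8_1_ref(const float *x, int8_t *qs, float *d_out, float *sum_out) {
    float amax = 0.0f, sum = 0.0f;
    for (int i = 0; i < QK8_1; ++i) {
        amax = fmaxf(amax, fabsf(x[i]));   // max. absolute value sets the scale
        sum  += x[i];                      // row sum is stored next to the scale
    }
    const float d = amax / 127.0f;
    for (int i = 0; i < QK8_1; ++i) {
        qs[i] = amax == 0.0f ? 0 : (int8_t) roundf(x[i] / d);
    }
    *d_out = d;
    *sum_out = sum;
}

int main(void) {
    float x[QK8_1];
    for (int i = 0; i < QK8_1; ++i) x[i] = 0.1f * (i - 16);
    int8_t qs[QK8_1];
    float d, s;
    quantize_block_q8_1_ref(x, qs, &d, &s);
    printf("d=%f sum=%f q[0]=%d\n", d, s, qs[0]);
    return 0;
}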
llama/ggml-cuda/quantize.cuh (new file, 50 lines)
@@ -0,0 +1,50 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License - Copyright (c) 2023-2024 The ggml authors
 * (standard MIT license text, identical to the header of every other vendored file in this PR)
 */

#pragma once

#include "common.cuh"
#include "mmq.cuh"

#include <cstdint>

#define CUDA_QUANTIZE_BLOCK_SIZE 256
#define CUDA_QUANTIZE_BLOCK_SIZE_MMQ 128

static_assert(MATRIX_ROW_PADDING % CUDA_QUANTIZE_BLOCK_SIZE == 0, "Risk of out-of-bounds access.");
static_assert(MATRIX_ROW_PADDING % (4*CUDA_QUANTIZE_BLOCK_SIZE_MMQ) == 0, "Risk of out-of-bounds access.");

typedef void (*quantize_cuda_t)(
    const float * x, void * vy, const int64_t kx0, const int64_t kx1, const int64_t channels, const int64_t kx0_padded,
    const ggml_type type_x, cudaStream_t stream);

void quantize_row_q8_1_cuda(
    const float * x, void * vy, const int64_t kx0, const int64_t kx1, const int64_t channels, const int64_t kx0_padded,
    const ggml_type type_x, cudaStream_t stream);

void quantize_mmq_q8_1_cuda(
    const float * x, void * vy, const int64_t kx0, const int64_t kx1, const int64_t channels, const int64_t kx0_padded,
    const ggml_type type_x, cudaStream_t stream);
llama/ggml-cuda/rope.cu (new file, 297 lines)
@@ -0,0 +1,297 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License - Copyright (c) 2023-2024 The ggml authors
 * (standard MIT license text, identical to the header of every other vendored file in this PR)
 */

#include "rope.cuh"

struct rope_corr_dims {
    float v[2];
};

static __device__ float rope_yarn_ramp(const float low, const float high, const int i0) {
    const float y = (i0 / 2 - low) / max(0.001f, high - low);
    return 1.0f - min(1.0f, max(0.0f, y));
}

// YaRN algorithm based on LlamaYaRNScaledRotaryEmbedding.py from https://github.com/jquesnelle/yarn
// MIT licensed. Copyright (c) 2023 Jeffrey Quesnelle and Bowen Peng.
static __device__ void rope_yarn(
    float theta_extrap, float freq_scale, rope_corr_dims corr_dims, int64_t i0, float ext_factor, float mscale,
    float * cos_theta, float * sin_theta) {
    // Get n-d rotational scaling corrected for extrapolation
    float theta_interp = freq_scale * theta_extrap;
    float theta = theta_interp;
    if (ext_factor != 0.0f) {
        float ramp_mix = rope_yarn_ramp(corr_dims.v[0], corr_dims.v[1], i0) * ext_factor;
        theta = theta_interp * (1 - ramp_mix) + theta_extrap * ramp_mix;

        // Get n-d magnitude scaling corrected for interpolation
        mscale *= 1.0f + 0.1f * logf(1.0f / freq_scale);
    }
    *cos_theta = cosf(theta) * mscale;
    *sin_theta = sinf(theta) * mscale;
}

template<typename T, bool has_ff>
static __global__ void rope_norm(
    const T * x, T * dst, int ne0, int n_dims, const int32_t * pos, float freq_scale, int p_delta_rows,
    float ext_factor, float attn_factor, rope_corr_dims corr_dims, float theta_scale, const float * freq_factors) {
    const int i0 = 2*(blockDim.y*blockIdx.y + threadIdx.y);

    if (i0 >= ne0) {
        return;
    }

    const int row = blockDim.x*blockIdx.x + threadIdx.x;

    if (i0 >= n_dims) {
        const int i = row*ne0 + i0;

        dst[i + 0] = x[i + 0];
        dst[i + 1] = x[i + 1];

        return;
    }

    const int i = row*ne0 + i0;
    const int i2 = row/p_delta_rows;

    const float theta_base = pos[i2]*powf(theta_scale, i0/2.0f);

    const float freq_factor = has_ff ? freq_factors[i0/2] : 1.0f;

    float cos_theta;
    float sin_theta;

    rope_yarn(theta_base/freq_factor, freq_scale, corr_dims, i0, ext_factor, attn_factor, &cos_theta, &sin_theta);

    const float x0 = x[i + 0];
    const float x1 = x[i + 1];

    dst[i + 0] = x0*cos_theta - x1*sin_theta;
    dst[i + 1] = x0*sin_theta + x1*cos_theta;
}

template<typename T, bool has_ff>
static __global__ void rope_neox(
    const T * x, T * dst, int ne0, int n_dims, const int32_t * pos, float freq_scale, int p_delta_rows,
    float ext_factor, float attn_factor, rope_corr_dims corr_dims, float theta_scale, const float * freq_factors) {
    const int i0 = 2*(blockDim.y*blockIdx.y + threadIdx.y);

    if (i0 >= ne0) {
        return;
    }

    const int row = blockDim.x*blockIdx.x + threadIdx.x;

    if (i0 >= n_dims) {
        const int i = row*ne0 + i0;

        dst[i + 0] = x[i + 0];
        dst[i + 1] = x[i + 1];

        return;
    }

    const int i = row*ne0 + i0/2;
    const int i2 = row/p_delta_rows;

    const float theta_base = pos[i2]*powf(theta_scale, i0/2.0f);

    const float freq_factor = has_ff ? freq_factors[i0/2] : 1.0f;

    float cos_theta;
    float sin_theta;

    rope_yarn(theta_base/freq_factor, freq_scale, corr_dims, i0, ext_factor, attn_factor, &cos_theta, &sin_theta);

    const float x0 = x[i + 0];
    const float x1 = x[i + n_dims/2];

    dst[i + 0] = x0*cos_theta - x1*sin_theta;
    dst[i + n_dims/2] = x0*sin_theta + x1*cos_theta;
}

template<typename T>
static void rope_norm_cuda(
    const T * x, T * dst, int ne0, int n_dims, int nr, const int32_t * pos, float freq_scale, int p_delta_rows,
    float freq_base, float ext_factor, float attn_factor, rope_corr_dims corr_dims, const float * freq_factors, cudaStream_t stream) {
    GGML_ASSERT(ne0 % 2 == 0);
    const dim3 block_dims(1, CUDA_ROPE_BLOCK_SIZE, 1);
    const int n_blocks_x = (ne0 + 2*CUDA_ROPE_BLOCK_SIZE - 1) / (2*CUDA_ROPE_BLOCK_SIZE);
    const dim3 block_nums(nr, n_blocks_x, 1);

    const float theta_scale = powf(freq_base, -2.0f/n_dims);

    if (freq_factors == nullptr) {
        rope_norm<T, false><<<block_nums, block_dims, 0, stream>>>(
            x, dst, ne0, n_dims, pos, freq_scale, p_delta_rows, ext_factor, attn_factor, corr_dims,
            theta_scale, freq_factors
        );
    } else {
        rope_norm<T, true><<<block_nums, block_dims, 0, stream>>>(
            x, dst, ne0, n_dims, pos, freq_scale, p_delta_rows, ext_factor, attn_factor, corr_dims,
            theta_scale, freq_factors
        );
    }
}

template<typename T>
static void rope_neox_cuda(
    const T * x, T * dst, int ne0, int n_dims, int nr, const int32_t * pos, float freq_scale, int p_delta_rows,
    float freq_base, float ext_factor, float attn_factor, rope_corr_dims corr_dims, const float * freq_factors, cudaStream_t stream) {
    GGML_ASSERT(ne0 % 2 == 0);
    const dim3 block_dims(1, CUDA_ROPE_BLOCK_SIZE, 1);
    const int n_blocks_x = (ne0 + 2*CUDA_ROPE_BLOCK_SIZE - 1) / (2*CUDA_ROPE_BLOCK_SIZE);
    const dim3 block_nums(nr, n_blocks_x, 1);

    const float theta_scale = powf(freq_base, -2.0f/n_dims);

    if (freq_factors == nullptr) {
        rope_neox<T, false><<<block_nums, block_dims, 0, stream>>>(
            x, dst, ne0, n_dims, pos, freq_scale, p_delta_rows, ext_factor, attn_factor, corr_dims,
            theta_scale, freq_factors
        );
    } else {
        rope_neox<T, true><<<block_nums, block_dims, 0, stream>>>(
            x, dst, ne0, n_dims, pos, freq_scale, p_delta_rows, ext_factor, attn_factor, corr_dims,
            theta_scale, freq_factors
        );
    }
}

static void rope_norm_cuda_f16(
    const half * x, half * dst, int ne0, int n_dims, int nr, const int32_t * pos, float freq_scale, int p_delta_rows,
    float freq_base, float ext_factor, float attn_factor, rope_corr_dims corr_dims, const float * freq_factors, cudaStream_t stream) {

    rope_norm_cuda<half>(x, dst, ne0, n_dims, nr, pos, freq_scale, p_delta_rows, freq_base, ext_factor, attn_factor, corr_dims, freq_factors, stream);
}

static void rope_norm_cuda_f32(
    const float * x, float * dst, int ne0, int n_dims, int nr, const int32_t * pos, float freq_scale, int p_delta_rows,
    float freq_base, float ext_factor, float attn_factor, rope_corr_dims corr_dims, const float * freq_factors, cudaStream_t stream) {

    rope_norm_cuda<float>(x, dst, ne0, n_dims, nr, pos, freq_scale, p_delta_rows, freq_base, ext_factor, attn_factor, corr_dims, freq_factors, stream);
}

static void rope_neox_cuda_f16(
    const half * x, half * dst, int ne0, int n_dims, int nr, const int32_t * pos, float freq_scale, int p_delta_rows,
    float freq_base, float ext_factor, float attn_factor, rope_corr_dims corr_dims, const float * freq_factors, cudaStream_t stream) {

    rope_neox_cuda<half>(x, dst, ne0, n_dims, nr, pos, freq_scale, p_delta_rows, freq_base, ext_factor, attn_factor, corr_dims, freq_factors, stream);
}

static void rope_neox_cuda_f32(
    const float * x, float * dst, int ne0, int n_dims, int nr, const int32_t * pos, float freq_scale, int p_delta_rows,
    float freq_base, float ext_factor, float attn_factor, rope_corr_dims corr_dims, const float * freq_factors, cudaStream_t stream
) {

    rope_neox_cuda<float>(x, dst, ne0, n_dims, nr, pos, freq_scale, p_delta_rows, freq_base, ext_factor, attn_factor, corr_dims, freq_factors, stream);
}

void ggml_cuda_op_rope(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * src0 = dst->src[0];
    const ggml_tensor * src1 = dst->src[1];
    const ggml_tensor * src2 = dst->src[2];

    const float * src0_d = (const float *)src0->data;
    const float * src1_d = (const float *)src1->data;

    float * dst_d = (float *)dst->data;
    cudaStream_t stream = ctx.stream();

    GGML_ASSERT(ggml_is_contiguous(src0));
    GGML_ASSERT(src0->type == GGML_TYPE_F32 || src0->type == GGML_TYPE_F16);
    GGML_ASSERT( dst->type == GGML_TYPE_F32 || dst->type == GGML_TYPE_F16);
    GGML_ASSERT(src0->type == dst->type);

    const int64_t ne00 = src0->ne[0];
    const int64_t ne01 = src0->ne[1];
    const int64_t nr = ggml_nrows(src0);

    //const int n_past = ((int32_t *) dst->op_params)[0];
    const int n_dims = ((int32_t *) dst->op_params)[1];
    const int mode = ((int32_t *) dst->op_params)[2];
    //const int n_ctx = ((int32_t *) dst->op_params)[3];
    const int n_ctx_orig = ((int32_t *) dst->op_params)[4];

    // RoPE alteration for extended context
    float freq_base;
    float freq_scale;
    float ext_factor;
    float attn_factor;
    float beta_fast;
    float beta_slow;

    memcpy(&freq_base, (int32_t *) dst->op_params + 5, sizeof(float));
    memcpy(&freq_scale, (int32_t *) dst->op_params + 6, sizeof(float));
    memcpy(&ext_factor, (int32_t *) dst->op_params + 7, sizeof(float));
    memcpy(&attn_factor, (int32_t *) dst->op_params + 8, sizeof(float));
    memcpy(&beta_fast, (int32_t *) dst->op_params + 9, sizeof(float));
    memcpy(&beta_slow, (int32_t *) dst->op_params + 10, sizeof(float));

    const bool is_neox = mode & GGML_ROPE_TYPE_NEOX;

    const int32_t * pos = (const int32_t *) src1_d;

    const float * freq_factors = nullptr;
    if (src2 != nullptr) {
        freq_factors = (const float *) src2->data;
    }

    rope_corr_dims corr_dims;
    ggml_rope_yarn_corr_dims(n_dims, n_ctx_orig, freq_base, beta_fast, beta_slow, corr_dims.v);

    // compute
    if (is_neox) {
        if (src0->type == GGML_TYPE_F32) {
            rope_neox_cuda_f32(
                (const float *)src0_d, (float *)dst_d, ne00, n_dims, nr, pos, freq_scale, ne01, freq_base, ext_factor,
                attn_factor, corr_dims, freq_factors, stream
            );
        } else if (src0->type == GGML_TYPE_F16) {
            rope_neox_cuda_f16(
                (const half *)src0_d, (half *)dst_d, ne00, n_dims, nr, pos, freq_scale, ne01, freq_base, ext_factor,
                attn_factor, corr_dims, freq_factors, stream
            );
        } else {
            GGML_ABORT("fatal error");
        }
    } else {
        if (src0->type == GGML_TYPE_F32) {
            rope_norm_cuda_f32(
                (const float *)src0_d, (float *)dst_d, ne00, n_dims, nr, pos, freq_scale, ne01, freq_base, ext_factor,
                attn_factor, corr_dims, freq_factors, stream
            );
        } else if (src0->type == GGML_TYPE_F16) {
            rope_norm_cuda_f16(
                (const half *)src0_d, (half *)dst_d, ne00, n_dims, nr, pos, freq_scale, ne01, freq_base, ext_factor,
                attn_factor, corr_dims, freq_factors, stream
            );
        } else {
            GGML_ABORT("fatal error");
        }
    }
}
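
Editor's note (not part of the vendored sources): a minimal C sketch of the plain (non-NeoX) rotation that rope_norm above applies to one (x0, x1) pair, with freq_scale = 1, ext_factor = 0 and attn_factor = 1 so the YaRN corrections drop out; rope_pair_ref and the harness in main are hypothetical names added only for illustration.

// Reference rotation for one RoPE pair: theta = pos * theta_scale^(i0/2).
#include <math.h>
#include <stdio.h>

static void rope_pair_ref(float x0, float x1, int pos, int i0, int n_dims,
                          float freq_base, float *y0, float *y1) {
    const float theta_scale = powf(freq_base, -2.0f / n_dims);
    const float theta = pos * powf(theta_scale, i0 / 2.0f);
    *y0 = x0 * cosf(theta) - x1 * sinf(theta);   // rotate the pair by theta
    *y1 = x0 * sinf(theta) + x1 * cosf(theta);
}

int main(void) {
    float y0, y1;
    rope_pair_ref(1.0f, 0.0f, 3, 0, 128, 10000.0f, &y0, &y1);
    printf("%f %f\n", y0, y1);
    return 0;
}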
llama/ggml-cuda/rope.cuh (new file, 31 lines)
@@ -0,0 +1,31 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License - Copyright (c) 2023-2024 The ggml authors
 * (standard MIT license text, identical to the header of every other vendored file in this PR)
 */

#include "common.cuh"

#define CUDA_ROPE_BLOCK_SIZE 256

void ggml_cuda_op_rope(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
llama/ggml-cuda/scale.cu (new file, 57 lines)
@@ -0,0 +1,57 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License - Copyright (c) 2023-2024 The ggml authors
 * (standard MIT license text, identical to the header of every other vendored file in this PR)
 */

#include "scale.cuh"

static __global__ void scale_f32(const float * x, float * dst, const float scale, const int k) {
    const int i = blockDim.x*blockIdx.x + threadIdx.x;

    if (i >= k) {
        return;
    }

    dst[i] = scale * x[i];
}

static void scale_f32_cuda(const float * x, float * dst, const float scale, const int k, cudaStream_t stream) {
    const int num_blocks = (k + CUDA_SCALE_BLOCK_SIZE - 1) / CUDA_SCALE_BLOCK_SIZE;
    scale_f32<<<num_blocks, CUDA_SCALE_BLOCK_SIZE, 0, stream>>>(x, dst, scale, k);
}

void ggml_cuda_op_scale(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * src0 = dst->src[0];
    const float * src0_d = (const float *)src0->data;
    float * dst_d = (float *)dst->data;
    cudaStream_t stream = ctx.stream();

    GGML_ASSERT(src0->type == GGML_TYPE_F32);
    GGML_ASSERT( dst->type == GGML_TYPE_F32);

    float scale;
    memcpy(&scale, dst->op_params, sizeof(float));

    scale_f32_cuda(src0_d, dst_d, scale, ggml_nelements(src0), stream);
}
llama/ggml-cuda/scale.cuh (new file, 31 lines)
@@ -0,0 +1,31 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License - Copyright (c) 2023-2024 The ggml authors
 * (standard MIT license text, identical to the header of every other vendored file in this PR)
 */

#include "common.cuh"

#define CUDA_SCALE_BLOCK_SIZE 256

void ggml_cuda_op_scale(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
llama/ggml-cuda/softmax.cu (new file, 232 lines)
@@ -0,0 +1,232 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 *
 * MIT License - Copyright (c) 2023-2024 The ggml authors
 * (standard MIT license text, identical to the header of every other vendored file in this PR)
 */

#include "common.cuh"
#include "softmax.cuh"

template <typename T>
static __device__ __forceinline__ float t2f32(T val) {
    return (float) val;
}

template <>
__device__ float __forceinline__ t2f32<half>(half val) {
    return __half2float(val);
}

template <bool vals_smem, int ncols_template, int block_size_template, typename T>
static __global__ void soft_max_f32(const float * x, const T * mask, float * dst, const int ncols_par, const int nrows_y, const float scale, const float max_bias, const float m0, const float m1, uint32_t n_head_log2) {
    const int ncols = ncols_template == 0 ? ncols_par : ncols_template;

    const int tid = threadIdx.x;
    const int rowx = blockIdx.x;
    const int rowy = rowx % nrows_y; // broadcast the mask in the row dimension

    const int block_size = block_size_template == 0 ? blockDim.x : block_size_template;

    const int warp_id = threadIdx.x / WARP_SIZE;
    const int lane_id = threadIdx.x % WARP_SIZE;

    const float slope = get_alibi_slope(max_bias, rowx/nrows_y, n_head_log2, m0, m1);

    extern __shared__ float data_soft_max_f32[];
    float * buf_iw = data_soft_max_f32; // shared memory buffer for inter-warp communication
    // shared memory buffer to cache values between iterations:
    float * vals = vals_smem ? buf_iw + WARP_SIZE : dst + (int64_t)rowx*ncols;

    float max_val = -INFINITY;

#pragma unroll
    for (int col0 = 0; col0 < ncols; col0 += block_size) {
        const int col = col0 + tid;

        if (ncols_template == 0 && col >= ncols) {
            break;
        }

        const int64_t ix = (int64_t)rowx*ncols + col;
        const int64_t iy = (int64_t)rowy*ncols + col;

        const float val = x[ix]*scale + (mask ? slope*t2f32(mask[iy]) : 0.0f);

        vals[col] = val;
        max_val = max(max_val, val);
    }

    // find the max value in the block
    max_val = warp_reduce_max(max_val);
    if (block_size > WARP_SIZE) {
        if (warp_id == 0) {
            buf_iw[lane_id] = -INFINITY;
        }
        __syncthreads();

        if (lane_id == 0) {
            buf_iw[warp_id] = max_val;
        }
        __syncthreads();

        max_val = buf_iw[lane_id];
        max_val = warp_reduce_max(max_val);
    }

    float tmp = 0.0f; // partial sum

#pragma unroll
    for (int col0 = 0; col0 < ncols; col0 += block_size) {
        const int col = col0 + tid;

        if (ncols_template == 0 && col >= ncols) {
            break;
        }

        const float val = expf(vals[col] - max_val);
        tmp += val;
        vals[col] = val;
    }

    // find the sum of exps in the block
    tmp = warp_reduce_sum(tmp);
    if (block_size > WARP_SIZE) {
        __syncthreads();
        if (warp_id == 0) {
            buf_iw[lane_id] = 0.0f;
        }
        __syncthreads();

        if (lane_id == 0) {
            buf_iw[warp_id] = tmp;
        }
        __syncthreads();

        tmp = buf_iw[lane_id];
        tmp = warp_reduce_sum(tmp);
    }

    const float inv_sum = 1.0f / tmp;

#pragma unroll
    for (int col0 = 0; col0 < ncols; col0 += block_size) {
|
||||||
|
const int col = col0 + tid;
|
||||||
|
|
||||||
|
if (ncols_template == 0 && col >= ncols) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const int64_t idst = (int64_t)rowx*ncols + col;
|
||||||
|
dst[idst] = vals[col] * inv_sum;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
template<typename T>
|
||||||
|
static void soft_max_f32_cuda(const float * x, const T * mask, float * dst, const int ncols_x, const int nrows_x, const int nrows_y, const float scale, const float max_bias, cudaStream_t stream) {
|
||||||
|
int nth = WARP_SIZE;
|
||||||
|
while (nth < ncols_x && nth < CUDA_SOFT_MAX_BLOCK_SIZE) nth *= 2;
|
||||||
|
const dim3 block_dims(nth, 1, 1);
|
||||||
|
const dim3 block_nums(nrows_x, 1, 1);
|
||||||
|
const size_t shmem = (GGML_PAD(ncols_x, WARP_SIZE) + WARP_SIZE)*sizeof(float);
|
||||||
|
static_assert(CUDA_SOFT_MAX_BLOCK_SIZE == 1024, "These values need to be adjusted.");
|
||||||
|
|
||||||
|
const uint32_t n_head = nrows_x/nrows_y;
|
||||||
|
const uint32_t n_head_log2 = 1u << (uint32_t) floorf(log2f((float) n_head));
|
||||||
|
|
||||||
|
const float m0 = powf(2.0f, -(max_bias ) / n_head_log2);
|
||||||
|
const float m1 = powf(2.0f, -(max_bias / 2.0f) / n_head_log2);
|
||||||
|
|
||||||
|
// FIXME: this limit could be raised by ~2-4x on Ampere or newer
|
||||||
|
if (shmem < ggml_cuda_info().devices[ggml_cuda_get_device()].smpb) {
|
||||||
|
switch (ncols_x) {
|
||||||
|
case 32:
|
||||||
|
soft_max_f32<true, 32, 32><<<block_nums, block_dims, shmem, stream>>>(x, mask, dst, ncols_x, nrows_y, scale, max_bias, m0, m1, n_head_log2);
|
||||||
|
break;
|
||||||
|
case 64:
|
||||||
|
soft_max_f32<true, 64, 64><<<block_nums, block_dims, shmem, stream>>>(x, mask, dst, ncols_x, nrows_y, scale, max_bias, m0, m1, n_head_log2);
|
||||||
|
break;
|
||||||
|
case 128:
|
||||||
|
soft_max_f32<true, 128, 128><<<block_nums, block_dims, shmem, stream>>>(x, mask, dst, ncols_x, nrows_y, scale, max_bias, m0, m1, n_head_log2);
|
||||||
|
break;
|
||||||
|
case 256:
|
||||||
|
soft_max_f32<true, 256, 256><<<block_nums, block_dims, shmem, stream>>>(x, mask, dst, ncols_x, nrows_y, scale, max_bias, m0, m1, n_head_log2);
|
||||||
|
break;
|
||||||
|
case 512:
|
||||||
|
soft_max_f32<true, 512, 512><<<block_nums, block_dims, shmem, stream>>>(x, mask, dst, ncols_x, nrows_y, scale, max_bias, m0, m1, n_head_log2);
|
||||||
|
break;
|
||||||
|
case 1024:
|
||||||
|
soft_max_f32<true, 1024, 1024><<<block_nums, block_dims, shmem, stream>>>(x, mask, dst, ncols_x, nrows_y, scale, max_bias, m0, m1, n_head_log2);
|
||||||
|
break;
|
||||||
|
case 2048:
|
||||||
|
soft_max_f32<true, 2048, 1024><<<block_nums, block_dims, shmem, stream>>>(x, mask, dst, ncols_x, nrows_y, scale, max_bias, m0, m1, n_head_log2);
|
||||||
|
break;
|
||||||
|
case 4096:
|
||||||
|
soft_max_f32<true, 4096, 1024><<<block_nums, block_dims, shmem, stream>>>(x, mask, dst, ncols_x, nrows_y, scale, max_bias, m0, m1, n_head_log2);
|
||||||
|
break;
|
||||||
|
default:
|
||||||
|
soft_max_f32<true, 0, 0><<<block_nums, block_dims, shmem, stream>>>(x, mask, dst, ncols_x, nrows_y, scale, max_bias, m0, m1, n_head_log2);
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
const size_t shmem_low = WARP_SIZE*sizeof(float);
|
||||||
|
soft_max_f32<false, 0, 0><<<block_nums, block_dims, shmem_low, stream>>>(x, mask, dst, ncols_x, nrows_y, scale, max_bias, m0, m1, n_head_log2);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
void ggml_cuda_op_soft_max(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
|
||||||
|
const ggml_tensor * src0 = dst->src[0];
|
||||||
|
const ggml_tensor * src1 = dst->src[1];
|
||||||
|
|
||||||
|
const float * src0_d = (const float *)src0->data;
|
||||||
|
const void * src1_d = src1 ? (const void *)src1->data : nullptr;
|
||||||
|
|
||||||
|
float * dst_d = (float *)dst->data;
|
||||||
|
cudaStream_t stream = ctx.stream();
|
||||||
|
|
||||||
|
GGML_ASSERT(src0->type == GGML_TYPE_F32);
|
||||||
|
GGML_ASSERT( dst->type == GGML_TYPE_F32);
|
||||||
|
|
||||||
|
GGML_ASSERT(!src1 || src1->type == GGML_TYPE_F16 || src1->type == GGML_TYPE_F32); // src1 contains mask and it is optional
|
||||||
|
|
||||||
|
const int64_t ne00 = src0->ne[0];
|
||||||
|
const int64_t nrows_x = ggml_nrows(src0);
|
||||||
|
const int64_t nrows_y = src0->ne[1];
|
||||||
|
|
||||||
|
float scale = 1.0f;
|
||||||
|
float max_bias = 0.0f;
|
||||||
|
|
||||||
|
memcpy(&scale, (float *) dst->op_params + 0, sizeof(float));
|
||||||
|
memcpy(&max_bias, (float *) dst->op_params + 1, sizeof(float));
|
||||||
|
|
||||||
|
const bool use_f16 = (src1 && src1->type == GGML_TYPE_F16);
|
||||||
|
|
||||||
|
if (use_f16) {
|
||||||
|
const half * src1_dd = (const half *)src1_d;
|
||||||
|
|
||||||
|
soft_max_f32_cuda(src0_d, src1_dd, dst_d, ne00, nrows_x, nrows_y, scale, max_bias, stream);
|
||||||
|
} else {
|
||||||
|
const float * src1_dd = (const float *)src1_d;
|
||||||
|
|
||||||
|
soft_max_f32_cuda(src0_d, src1_dd, dst_d, ne00, nrows_x, nrows_y, scale, max_bias, stream);
|
||||||
|
}
|
||||||
|
}
|
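Read sequentially, soft_max_f32 is the standard numerically stable softmax with an optional ALiBi-style mask term folded into the logits. A hedged single-row CPU reference of the same arithmetic, purely for orientation (it is not part of the vendored file; the kernel additionally distributes the three passes across threads and reduces the max and the sum through warp shuffles and shared memory):

#include <cmath>
#include <cstdint>

// Illustrative reference for one row: y[c] = softmax(scale*x[c] + slope*mask[c]).
static void soft_max_row_ref(const float * x, const float * mask, float * y,
                             int64_t ncols, float scale, float slope) {
    float max_val = -INFINITY;
    for (int64_t c = 0; c < ncols; ++c) {   // pass 1: biased logits + running max
        const float v = x[c]*scale + (mask ? slope*mask[c] : 0.0f);
        y[c] = v;
        max_val = fmaxf(max_val, v);
    }
    float sum = 0.0f;
    for (int64_t c = 0; c < ncols; ++c) {   // pass 2: exponentiate and accumulate
        y[c] = expf(y[c] - max_val);
        sum += y[c];
    }
    const float inv_sum = 1.0f / sum;
    for (int64_t c = 0; c < ncols; ++c) {   // pass 3: normalize
        y[c] *= inv_sum;
    }
}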
llama/ggml-cuda/softmax.cuh (new file, 31 lines)
@@ -0,0 +1,31 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 * (MIT license header identical to the one shown above)
 */

#include "common.cuh"

#define CUDA_SOFT_MAX_BLOCK_SIZE 1024

void ggml_cuda_op_soft_max(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
llama/ggml-cuda/sumrows.cu (new file, 65 lines)
@@ -0,0 +1,65 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 * (MIT license header identical to the one shown above)
 */

#include "sumrows.cuh"

static __global__ void k_sum_rows_f32(const float * x, float * dst, const int ncols) {
    const int row = blockIdx.x;
    const int col = threadIdx.x;

    float sum = 0.0f;
    for (int i = col; i < ncols; i += blockDim.x) {
        sum += x[row * ncols + i];
    }

    sum = warp_reduce_sum(sum);

    if (col == 0) {
        dst[row] = sum;
    }
}

void sum_rows_f32_cuda(const float * x, float * dst, const int ncols, const int nrows, cudaStream_t stream) {
    const dim3 block_dims(WARP_SIZE, 1, 1);
    const dim3 block_nums(nrows, 1, 1);
    k_sum_rows_f32<<<block_nums, block_dims, 0, stream>>>(x, dst, ncols);
}

void ggml_cuda_op_sum_rows(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * src0 = dst->src[0];
    const float * src0_d = (const float *)src0->data;
    float * dst_d = (float *)dst->data;
    cudaStream_t stream = ctx.stream();

    GGML_ASSERT(src0->type == GGML_TYPE_F32);
    GGML_ASSERT( dst->type == GGML_TYPE_F32);
    GGML_ASSERT(ggml_is_contiguous(src0));

    const int64_t ncols = src0->ne[0];
    const int64_t nrows = ggml_nrows(src0);

    sum_rows_f32_cuda(src0_d, dst_d, ncols, nrows, stream);
}
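The launch here is one warp (WARP_SIZE threads) per row: each thread strides over the row, partial sums are combined with a warp shuffle reduction, and lane 0 writes the result. A hedged standalone usage sketch, assuming only the CUDA runtime and the sum_rows_f32_cuda signature shown above (inside ggml this is always driven through ggml_cuda_op_sum_rows, never called directly):

#include <cuda_runtime.h>
#include <vector>

void sum_rows_example(int nrows, int ncols) {
    std::vector<float> h_x(size_t(nrows) * ncols, 1.0f);   // every row should sum to ncols
    float * d_x   = nullptr;
    float * d_dst = nullptr;
    cudaMalloc(&d_x,   sizeof(float) * nrows * ncols);
    cudaMalloc(&d_dst, sizeof(float) * nrows);
    cudaMemcpy(d_x, h_x.data(), sizeof(float) * nrows * ncols, cudaMemcpyHostToDevice);

    sum_rows_f32_cuda(d_x, d_dst, ncols, nrows, /*stream=*/0); // one warp per row

    std::vector<float> h_dst(nrows);
    cudaMemcpy(h_dst.data(), d_dst, sizeof(float) * nrows, cudaMemcpyDeviceToHost);
    cudaFree(d_x);
    cudaFree(d_dst);
}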
llama/ggml-cuda/sumrows.cuh (new file, 31 lines)
@@ -0,0 +1,31 @@
/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 * (MIT license header identical to the one shown above)
 */

#include "common.cuh"

void sum_rows_f32_cuda(const float * x, float * dst, const int ncols, const int nrows, cudaStream_t stream);

void ggml_cuda_op_sum_rows(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
(nine new autogenerated files, @@ -0,0 +1,31 @@ each; the file names are not shown in this view)

Each of these instance files carries the same header:

/**
 * llama.cpp - commit 8962422b1c6f9b8b15f5aeaea42600bcc2d44177 - do not edit this file
 * (MIT license header identical to the one shown above)
 */

// This file has been autogenerated by generate_cu_files.py, do not edit manually.

#include "../fattn-vec-f16.cuh"

followed by exactly one instantiation line, one per file:

DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_F16);
DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q4_0);
DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q4_1);
DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q5_0);
DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q5_1);
DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_F16, GGML_TYPE_Q8_0);
DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_F16);
DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q4_0);
DECL_FATTN_VEC_F16_CASE(128, GGML_TYPE_Q4_0, GGML_TYPE_Q4_1);
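These one-line instance files exist so that each (head size, K type, V type) combination of the fattn-vec-f16 kernel is built in its own translation unit, which keeps per-file compile time and memory in check and lets the variants compile in parallel. A hedged, self-contained sketch of the general pattern; none of these names come from fattn-vec-f16.cuh, whose macro definition is not part of this excerpt:

// Hypothetical illustration of "one explicit template instantiation per file".
enum class example_type { F16, Q4_0 };

template <int head_size, example_type type_K, example_type type_V>
void example_attn_case() {
    // a real kernel launch for this specific variant would go here
}

// Each generated .cu file would contain exactly one line like this, so the
// expensive-to-compile variants are spread across parallel translation units:
template void example_attn_case<128, example_type::F16, example_type::Q4_0>();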
Some files were not shown because too many files have changed in this diff.