# Development
Install required tools:
- cmake version 3.24 or higher
- go version 1.21 or higher
- gcc version 11.4.0 or higher
```bash
brew install go cmake gcc
```
Optionally enable debugging and more verbose logging:
```bash
# At build time
export CGO_CFLAGS="-g"
# At runtime
export OLLAMA_DEBUG=1
```
Get the required libraries and build the native LLM code:
```bash
go generate ./...
```
Then build ollama:
```bash
go build .
```
Now you can run `ollama`:
```bash
./ollama
```
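For example, a quick way to verify the build is to start the server and then run a model from a second terminal (the model name below is only an illustration):
```bash
# Terminal 1: start the server
./ollama serve

# Terminal 2: pull and run a model
./ollama run llama2
```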
### Linux
#### Linux CUDA (NVIDIA)
*Your operating system distribution may already have packages for NVIDIA CUDA. Distro packages are often preferable, but instructions are distro-specific. Please consult distro-specific docs for dependencies if available!*
Install `cmake` and `golang` as well as [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads)
development and runtime packages.
Typically the build scripts will auto-detect CUDA. However, if your Linux distro
or installation approach uses unusual paths, you can point the build at the
right locations by setting the environment variable `CUDA_LIB_DIR` to the
directory containing the CUDA shared libraries and `CUDACXX` to the `nvcc`
compiler.
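For example, with a toolkit installed under `/usr/local/cuda` (these paths are only illustrative; adjust them for your system), you might set:
```bash
export CUDA_LIB_DIR=/usr/local/cuda/lib64
export CUDACXX=/usr/local/cuda/bin/nvcc
```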
Then generate dependencies:
```
go generate ./...
```
Then build the binary:
```
go build .
```
#### Linux ROCm (AMD)
*Your operating system distribution may already have packages for AMD ROCm and CLBlast. Distro packages are often preferable, but instructions are distro-specific. Please consult distro-specific docs for dependencies if available!*
Install [CLBlast](https://github.com/CNugteren/CLBlast/blob/master/doc/installation.md) and [ROCm](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html) development packages first, as well as `cmake` and `golang`.
Typically the build scripts will auto-detect ROCm. However, if your Linux distro
or installation approach uses unusual paths, you can point the build at the
right locations by setting the environment variable `ROCM_PATH` to the ROCm
install location (typically `/opt/rocm`) and `CLBlast_DIR` to the CLBlast
CMake package directory (typically `/usr/lib/cmake/CLBlast`).
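For example, using the typical locations mentioned above:
```bash
export ROCM_PATH=/opt/rocm
export CLBlast_DIR=/usr/lib/cmake/CLBlast
```
Then generate dependencies: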
```
go generate ./...
```
Then build the binary:
```
go build .
```
ROCm requires elevated privileges to access the GPU at runtime. On most distros you can add your user account to the `render` group, or run as root.
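For example, on most distros something like the following adds your account to the `render` group (log out and back in for it to take effect):
```bash
sudo usermod -a -G render $USER
```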
#### Advanced CPU Settings
By default, running `go generate ./...` will compile a few different variations
of the LLM library based on common CPU families and vector math capabilities,
including a lowest-common-denominator build that should run on almost any
64-bit CPU, albeit slowly. At runtime, Ollama will auto-detect the optimal
variation to load. If you would like a CPU build customized for your processor,
set `OLLAMA_CUSTOM_CPU_DEFS` to the llama.cpp flags you would like to use. For
example, to compile an optimized binary for an Intel i9-9880H, you might use:
```
OLLAMA_CUSTOM_CPU_DEFS="-DLLAMA_AVX=on -DLLAMA_AVX2=on -DLLAMA_F16C=on -DLLAMA_FMA=on" go generate ./...
go build .
```
#### Containerized Linux Build
If you have Docker available, you can build Linux binaries with `./scripts/build_linux.sh`, which has the CUDA and ROCm dependencies included. The resulting binary is placed in `./dist`.
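For example:
```bash
./scripts/build_linux.sh
ls ./dist
```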
### Windows
Note: The Windows build for Ollama is still under development.
Install required tools:
- MSVC toolchain - C/C++ and cmake as minimal requirements
- go version 1.21 or higher
- MinGW (pick one variant) with GCC.
- <https://www.mingw-w64.org/>
- <https://www.msys2.org/>
```powershell
$env:CGO_ENABLED="1"
go generate ./...
go build .
```
#### Windows CUDA (NVIDIA)
In addition to the common Windows development tools described above, install:
- [NVIDIA CUDA](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html)