ollama/integration
Latest commit cc269ba094 by Daniel Hiltgen, 2024-07-22 09:08:11 -07:00: Remove no longer supported max vram var

The OLLAMA_MAX_VRAM env var was a temporary workaround for OOM scenarios. With concurrency this was no longer wired up, and the simplistic value doesn't map to multi-GPU setups. Users can still set `num_gpu` to limit memory usage to avoid OOM if we get our predictions wrong.
| File | Last commit message | Date |
|------|---------------------|------|
| basic_test.go | Local unicode test case | 2024-04-22 19:29:12 -07:00 |
| concurrency_test.go | Remove no longer supported max vram var | 2024-07-22 09:08:11 -07:00 |
| context_test.go | Fix context exhaustion integration test for small gpus | 2024-07-09 16:24:14 -07:00 |
| embed_test.go | Introduce /api/embed endpoint supporting batch embedding (#5127) | 2024-07-15 12:14:24 -07:00 |
| llm_image_test.go | refined test timing | 2024-06-14 14:51:40 -07:00 |
| llm_test.go | Request and model concurrency | 2024-04-22 19:29:12 -07:00 |
| max_queue_test.go | Skip max queue test on remote | 2024-05-16 16:24:18 -07:00 |
| README.md | Revamp go based integration tests | 2024-03-23 14:24:18 +01:00 |
| utils_test.go | refined test timing | 2024-06-14 14:51:40 -07:00 |

Integration Tests

This directory contains integration tests that exercise Ollama end-to-end to verify its behavior.

By default, these tests are disabled, so `go test ./...` exercises only the unit tests. To run the integration tests you must pass the `integration` build tag: `go test -tags=integration ./...`
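As a minimal sketch of how that tag gating works, an integration test file is guarded by a Go build constraint so it never compiles into a plain `go test ./...` run. The test name and body below are illustrative placeholders, not the actual helpers found in utils_test.go:

```go
//go:build integration

package integration

import (
	"context"
	"testing"
	"time"
)

// TestExampleGenerate compiles and runs only when -tags=integration is
// passed; plain `go test ./...` never sees this file because of the build
// constraint above. The body is a placeholder showing the general shape.
func TestExampleGenerate(t *testing.T) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// The real tests use shared helpers from utils_test.go to reach the
	// server and issue API requests within this deadline.
	_ = ctx
}
```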

The integration tests have two modes of operation.

  1. By default, they start the server on a random port, run the tests, and then shut down the server.
  2. If `OLLAMA_TEST_EXISTING` is set to a non-empty string, the tests run against an existing running server, which can be remote (see the sketch below).
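For the second mode, a typical invocation would set the variable alongside the server address before running the tests, e.g. `OLLAMA_TEST_EXISTING=1 OLLAMA_HOST=http://10.0.0.5:11434 go test -tags=integration ./integration/...` (the host value is illustrative, and the assumption that the address comes from the usual `OLLAMA_HOST` variable is ours). The sketch below shows one way a test harness could branch on that variable; the exec-based spawn and all names here are hypothetical stand-ins, not the project's actual lifecycle helpers in utils_test.go:

```go
//go:build integration

package integration

import (
	"os"
	"os/exec"
	"testing"
)

// TestMain sketches the two modes: spawn a throwaway server when
// OLLAMA_TEST_EXISTING is unset, otherwise reuse the server the
// environment already points at (possibly on another machine).
func TestMain(m *testing.M) {
	var srv *exec.Cmd
	if os.Getenv("OLLAMA_TEST_EXISTING") == "" {
		// Mode 1: start a local server for the duration of the run.
		// (A real harness would pick a random port and wait for readiness.)
		srv = exec.Command("ollama", "serve")
		if err := srv.Start(); err != nil {
			os.Exit(1)
		}
	}
	// Mode 2: OLLAMA_TEST_EXISTING is set, so no server is spawned and the
	// tests simply talk to whatever OLLAMA_HOST names.
	code := m.Run()
	if srv != nil {
		_ = srv.Process.Kill()
	}
	os.Exit(code)
}
```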