Integration Tests

This directory contains integration tests that exercise Ollama end-to-end to verify its behavior.

By default, these tests are disabled, so `go test ./...` will exercise only the unit tests. To run the integration tests you must pass the `integration` build tag: `go test -tags=integration ./...`
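A quick sketch of the common invocations, assuming a Go toolchain on your PATH. The `-run` filter is standard `go test` usage; the test name shown is illustrative, not a guarantee of what exists in this directory:

```shell
# Unit tests only: without the build tag, the tests in this
# directory are compiled out.
go test ./...

# All integration tests.
go test -tags=integration ./...

# Narrow to one test with go test's standard -run regexp;
# "TestEmbed" here is an illustrative name.
go test -tags=integration -run TestEmbed -v ./...
```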

The integration tests have two modes of operation:

  1. By default, they will start the server on a random port, run the tests, and then shut down the server.
  2. If `OLLAMA_TEST_EXISTING` is set to a non-empty string, the tests will run against an already-running server, which can be remote.
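A minimal sketch of the second mode. The assumption here is that the harness honors Ollama's standard `OLLAMA_HOST` variable to locate the existing server; verify this against the test helpers before relying on it:

```shell
# Run the suite against a server that is already up, possibly remote,
# instead of starting a fresh one on a random port.
OLLAMA_TEST_EXISTING=1 \
OLLAMA_HOST=127.0.0.1:11434 \
go test -tags=integration ./...
```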