Commit graph

  • 57942b4676 Update README.md - Community Integrations - Ollama for Ruby (#1830) Guilherme Baptista 2024-01-07 00:31:39 -03:00
  • e0d05b0f1e Accept windows paths for image processing Daniel Hiltgen 2024-01-06 10:50:27 -08:00
  • 2d9dd14f27 Merge pull request #1697 from dhiltgen/win_docs Daniel Hiltgen 2024-01-05 19:34:20 -08:00
  • 1caa56128f add cuda lib path for nvidia container toolkit Jeffrey Morgan 2024-01-05 21:10:34 -05:00
  • 0101e76dbe Merge pull request #1797 from sublimator/nd-allow-extension-origins-still-needs-explicit-listing-2024-01-05 Michael Yang 2024-01-05 17:20:09 -08:00
  • 2ef9352b94 fix(cmd): history in alt mode Michael Yang 2024-01-05 16:19:37 -08:00
  • 5580ae2472 fix: set template without triple quotes Michael Yang 2024-01-05 15:51:33 -08:00
  • 3a9f447141 only pull gguf model if already exists (#1817) Bruce MacDonald 2024-01-05 18:50:00 -05:00
  • 9c2941e61b switch api for ShowRequest to use the name field (#1816) Patrick Devine 2024-01-05 15:06:43 -08:00
  • 238ac5e765 Add unit tests for Parser (#1815) Patrick Devine 2024-01-05 14:04:31 -08:00
  • 4f4980b66b simplify ggml update logic (#1814) Bruce MacDonald 2024-01-05 15:22:32 -05:00
  • 22e93efa41 add show info command and fix the modelfile Patrick Devine 2024-01-04 17:23:11 -08:00
  • 2909dce894 split up interactive generation Patrick Devine 2024-01-04 15:20:26 -08:00
  • df32537312 gpu: read memory info from all cuda devices (#1802) Jeffrey Morgan 2024-01-05 11:25:58 -05:00
  • 3367b5f3df remove unused generate patches (#1810) Bruce MacDonald 2024-01-05 11:25:45 -05:00
  • 46edbbc518 Merge pull request #1801 from jmorganca/mattw/correctdockerlink Matt Williams 2024-01-04 19:20:45 -08:00
  • d2ff18cd6b Merge pull request #1791 from jmorganca/mxyng/update-build Michael Yang 2024-01-04 19:13:44 -08:00
  • df086d3c8c fix docker doc to point to hub Matt Williams 2024-01-04 18:42:23 -08:00
  • 8baaaa39c0 Allow extension origins (still needs explicit listing), fixes #1686 Nicholas Dudfield 2024-01-05 08:55:47 +07:00
  • f9961c70ae update build Michael Yang 2024-01-04 13:25:38 -08:00
  • cd8fad3398 Merge pull request #1790 from dhiltgen/llm_code_shuffle Daniel Hiltgen 2024-01-04 13:47:25 -08:00
  • 9983fa5f4e Cleanup stale submodule Daniel Hiltgen 2024-01-04 13:40:16 -08:00
  • dfda91c2ee Merge pull request #1788 from dhiltgen/llm_code_shuffle Daniel Hiltgen 2024-01-04 13:14:28 -08:00
  • fac9060da5 Init submodule with new path Daniel Hiltgen 2024-01-04 09:54:46 -08:00
  • a554616f8e remove old llama.cpp submodule path Daniel Hiltgen 2024-01-04 09:47:48 -08:00
  • 77d96da94b Code shuffle to clean up the llm dir Daniel Hiltgen 2024-01-04 09:40:15 -08:00
  • 0d6e3565ae Add embeddings to API (#1773) Brian Murray 2024-01-04 13:00:52 -07:00
  • b5939008a1 Merge pull request #1785 from dhiltgen/win_native_cli Daniel Hiltgen 2024-01-04 08:55:01 -08:00
  • e9ce91e9a6 Load dynamic cpu lib on windows Daniel Hiltgen 2024-01-04 08:41:41 -08:00
  • 4ad6c9b11f fix: pull either original model or from model on create (#1774) Bruce MacDonald 2024-01-04 01:34:38 -05:00
  • c0285158a9 tweak memory requirements error text Jeffrey Morgan 2024-01-03 19:47:18 -05:00
  • 77a66df72c add macOS memory check for 47B models Jeffrey Morgan 2024-01-03 19:46:16 -05:00
  • 5b4837f881 remove unused filetype check Jeffrey Morgan 2024-01-03 19:45:39 -05:00
  • 29340c2e62 update cmake flags for amd64 macOS (#1780) Jeffrey Morgan 2024-01-03 19:22:15 -05:00
  • d5ec730354 Merge pull request #1779 from dhiltgen/refined_amd_gpu_list Daniel Hiltgen 2024-01-03 16:18:57 -08:00
  • 8bed487aba Merge pull request #1778 from dhiltgen/wsl1 Daniel Hiltgen 2024-01-03 16:18:41 -08:00
  • c1a10a6e9b Merge pull request #1781 from dhiltgen/cpu_only_build Daniel Hiltgen 2024-01-03 16:18:25 -08:00
  • ddbfa6fe31 Fix CPU only builds Daniel Hiltgen 2024-01-03 16:08:34 -08:00
  • 2fcd41ef81 Fail fast on WSL1 while allowing on WSL2 Daniel Hiltgen 2024-01-03 15:06:07 -08:00
  • 16f4603b67 Improve maintainability of Radeon card list Daniel Hiltgen 2024-01-03 15:12:29 -08:00
  • 1184686649 Merge pull request #1776 from dhiltgen/render_group Daniel Hiltgen 2024-01-03 13:07:54 -08:00
  • 2588cb2daa Add ollama user to render group for Radeon support Daniel Hiltgen 2024-01-03 12:55:54 -08:00
  • c7ea8f237e set num_gpu to 1 only by default on darwin arm64 (#1771) Jeffrey Morgan 2024-01-03 14:10:29 -05:00
  • 0b3118e0af fix: relay request opts to loaded llm prediction (#1761) Bruce MacDonald 2024-01-03 12:01:42 -05:00
  • 05face44ef Merge pull request #1683 from dhiltgen/fix_windows_test Daniel Hiltgen 2024-01-03 09:00:39 -08:00
  • a2ad952440 Fix windows system memory lookup Daniel Hiltgen 2023-12-22 15:43:31 -08:00
  • 5fea4410be Merge pull request #1680 from dhiltgen/better_patching Daniel Hiltgen 2024-01-03 08:10:17 -08:00
  • b846eb64d0 Fix template api doc description (#1661) Bruce MacDonald 2024-01-03 11:00:59 -05:00
  • 3c5dd9ed1d Update README.md (#1766) Cole Gillespie 2024-01-03 10:44:22 -05:00
  • b17ccd0542 Update import.md Jeffrey Morgan 2024-01-02 22:28:18 -05:00
  • d0409f772f keyboard shortcut help (#1764) Patrick Devine 2024-01-02 18:04:12 -08:00
  • ec261422af use docker build in build scripts Jeffrey Morgan 2024-01-02 19:32:54 -05:00
  • 0498f7ce56 Get rid of one-line llama.log Daniel Hiltgen 2023-12-30 14:59:48 -08:00
  • 738a8d12eb Rename the ollama cmakefile Daniel Hiltgen 2023-12-24 14:12:21 -08:00
  • d966b730ac Switch windows build to fully dynamic Daniel Hiltgen 2023-12-23 11:35:44 -08:00
  • 9a70aecccb Refactor how we augment llama.cpp Daniel Hiltgen 2023-12-22 09:51:53 -08:00
  • 22cd5eaab6 Added Ollama-SwiftUI to integrations (#1747) Karim ElGhandour 2024-01-02 15:47:50 +01:00
  • 304a8799ca Update README.md (#1757) Dane Madsen 2024-01-03 00:47:08 +10:00
  • 2a2fa3c329 api.md cleanup & formatting Jeffrey Morgan 2023-12-27 14:32:35 -05:00
  • 55978c1dc9 clean up cache api option Jeffrey Morgan 2023-12-27 14:27:45 -05:00
  • d4ebdadbe7 enable cache_prompt by default Jeffrey Morgan 2023-12-27 14:23:42 -05:00
  • e201efa14b Add windows native build instructions Daniel Hiltgen 2023-12-24 09:02:18 -08:00
  • c5f21f73a4 follow best practices by adding resp.Body.Close() (#1708) Icelain 2023-12-25 19:31:37 +05:30
  • 371bc73531 Update README.md Jeffrey Morgan 2023-12-24 11:54:08 -05:00
  • c651d8b824 Update README.md Jeffrey Morgan 2023-12-23 11:18:12 -05:00
  • cf50ef5b51 Merge pull request #1684 from dhiltgen/tag_integration_tests Daniel Hiltgen 2023-12-22 16:43:41 -08:00
  • 697bea6939 Guard integration tests with a tag Daniel Hiltgen 2023-12-22 16:33:27 -08:00
  • 10da41d677 Add Cache flag to api (#1642) K0IN 2023-12-22 23:16:20 +01:00
  • db356c8519 post-response templating (#1427) Bruce MacDonald 2023-12-22 17:07:05 -05:00
  • b80081022f cache docker builds in build_linux.sh Jeffrey Morgan 2023-12-22 16:01:20 -05:00
  • 790457398a Merge pull request #1677 from jmorganca/mattw/docrunupdate Matt Williams 2023-12-22 09:56:27 -08:00
  • 511069a2a5 update where are models stored q Matt Williams 2023-12-22 09:48:44 -08:00
  • 5a85070c22 Update readmes, requirements, packagejsons, etc for all examples (#1452) Matt Williams 2023-12-22 09:10:41 -08:00
  • 291700c92d Clean up documentation (#1506) Matt Williams 2023-12-22 09:10:01 -08:00
  • 9db28af84e Merge pull request #1675 from dhiltgen/less_verbose Daniel Hiltgen 2023-12-22 08:57:17 -08:00
  • e5202eb687 Quiet down llama.cpp logging by default Daniel Hiltgen 2023-12-22 08:47:18 -08:00
  • 96fb441abd Merge pull request #1146 from dhiltgen/ext_server_cgo Daniel Hiltgen 2023-12-22 08:16:31 -08:00
  • 495c06e4a6 Fix doc glitch Daniel Hiltgen 2023-12-21 16:57:58 -08:00
  • fa24e73b82 Remove CPU build, fixup linux build script Daniel Hiltgen 2023-12-21 16:54:54 -08:00
  • 325d74985b Fix CPU performance on hyperthreaded systems Daniel Hiltgen 2023-12-21 16:23:36 -08:00
  • fabf2f3467 allow for starting llava queries with filepath (#1549) Bruce MacDonald 2023-12-21 13:20:59 -05:00
  • d9cd3d9667 Revive windows build Daniel Hiltgen 2023-12-20 14:46:15 -08:00
  • a607d922f0 add FAQ for slow networking in WSL2 (#1646) Patrick Devine 2023-12-20 16:27:24 -08:00
  • 7555ea44f8 Revamp the dynamic library shim Daniel Hiltgen 2023-12-20 10:36:01 -08:00
  • df06812494 Update api.md Jeffrey Morgan 2023-12-20 08:47:53 -05:00
  • 1d1eb1688c Additional nvidia-ml path to check Daniel Hiltgen 2023-12-19 15:52:34 -08:00
  • 23dc179350 Merge pull request #1619 from jmorganca/mxyng/fix-version-test Michael Yang 2023-12-19 15:48:52 -08:00
  • 63aac0edc5 fix(test): use real version string for comparison Michael Yang 2023-12-19 15:02:37 -08:00
  • 6558f94ed0 Fix darwin intel build Daniel Hiltgen 2023-12-19 13:32:24 -08:00
  • 1ca484f67e Add Langchain Dart library (#1564) Erick Ghaumez 2023-12-19 20:04:52 +01:00
  • 72b0c32fe9 Update README.md Jeffrey Morgan 2023-12-19 12:59:22 -05:00
  • 68c28224f8 Update README.md Jeffrey Morgan 2023-12-19 12:59:03 -05:00
  • 54dbfa4c4a Carry ggml-metal.metal as payload Daniel Hiltgen 2023-12-18 18:32:04 -08:00
  • 5646826a79 Add WSL2 path to nvidia-ml.so library Daniel Hiltgen 2023-12-15 20:16:02 -08:00
  • 3269535a4c Refine handling of shim presence Daniel Hiltgen 2023-12-15 14:27:27 -08:00
  • 1b991d0ba9 Refine build to support CPU only Daniel Hiltgen 2023-12-13 17:26:47 -08:00
  • 51082535e1 Add automated test for multimodal Daniel Hiltgen 2023-12-13 14:29:09 -08:00
  • 9adca7f711 Bump llama.cpp to b1662 and set n_parallel=1 Daniel Hiltgen 2023-12-14 10:25:12 -08:00
  • 89bbaafa64 Build linux using ubuntu 20.04 Daniel Hiltgen 2023-12-18 12:05:59 -08:00
  • 35934b2e05 Adapted rocm support to cgo based llama.cpp Daniel Hiltgen 2023-11-29 11:00:37 -08:00