Commit graph

  • 4b01a873ef server: Support none defaulting to infinity for completions (#111) swg 2023-12-22 14:05:13 -0500
  • 99ff175562 Check if completion_tokens is none in error handler. Andrei Betlen 2023-12-22 13:41:06 -0500
  • 12b7f2f4e9 [Feat] Multi model support (#931) Dave 2023-12-22 11:51:25 +0100
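    Multi-model support lets a single server instance serve several models, selected per-request via the OpenAI-style `model` field. The config sketch below is an assumption based on the PR description — the `--config_file` flag and the `model`/`model_alias` field names may differ from the released schema, so check the server docs for your version:

    ```json
    {
      "models": [
        {"model": "models/mistral-7b.Q4_K_M.gguf", "model_alias": "mistral"},
        {"model": "models/llama-2-7b.Q4_K_M.gguf", "model_alias": "llama-2"}
      ]
    }
    ```

    A client would then pass `"model": "mistral"` in its completion request to choose between the loaded models.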
  • 4a85442c35 Update llama.cpp Andrei Betlen 2023-12-22 00:12:37 -0500
  • 2f03fb0231 fix text_offset of multi-token characters (#1037) twaka 2023-12-22 14:03:29 +0900
  • 33cc623346 Implement openai api compatible authentication (#1010) docmeth02 2023-12-21 19:44:49 +0100
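    OpenAI-compatible authentication boils down to a Bearer-token check on the `Authorization` header. The sketch below shows that general pattern only — it is not the PR's actual FastAPI dependency, and the helper name is illustrative:

    ```python
    import secrets

    def check_api_key(headers: dict, expected_key: str) -> bool:
        """Validate an OpenAI-style 'Authorization: Bearer <key>' header."""
        auth = headers.get("Authorization", "")
        scheme, _, token = auth.partition(" ")
        # compare_digest gives a constant-time comparison, avoiding timing leaks
        return scheme == "Bearer" and secrets.compare_digest(token, expected_key)
    ```

    A server using this pattern would reject requests that fail the check with HTTP 401, matching how the OpenAI API responds to a bad key.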
  • 788394c096 Update llama.cpp Andrei Betlen 2023-12-21 13:16:46 -0500
  • ffceb772d1 Update llama.cpp Andrei Betlen 2023-12-19 17:05:40 -0500
  • a05b4da80a fix: float32 is not JSON serializable when streaming logits. Andrei Betlen 2023-12-18 18:40:36 -0500
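    The float32 fix above reflects a common pitfall: `json.dumps` rejects NumPy scalar types such as `numpy.float32`, so values must be coerced to native Python floats before streaming. A stand-alone sketch of that pattern, using `Decimal` as a stand-in for any numeric type json rejects (the helper name is illustrative, not the library's API):

    ```python
    import json
    from decimal import Decimal  # like numpy.float32, Decimal is not JSON serializable

    def serialize_logprobs(logprobs: dict) -> str:
        """Coerce numeric values to native floats so json.dumps accepts them."""
        return json.dumps({token: float(lp) for token, lp in logprobs.items()})

    payload = serialize_logprobs({"hello": Decimal("-0.25"), "world": Decimal("-1.5")})
    ```

    Without the `float()` coercion, `json.dumps` raises `TypeError: Object of type Decimal is not JSON serializable` — the same failure mode the commit fixes for streamed logits.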
  • abda047284 Update changelog Andrei Betlen 2023-12-18 18:16:17 -0500
  • 7df6c32544 Fix type annotations Andrei Betlen 2023-12-18 18:14:53 -0500
  • b703aad79e Fix type annotation Andrei Betlen 2023-12-18 18:13:37 -0500
  • d0aedfcff6 Fix type annotation Andrei Betlen 2023-12-18 18:12:49 -0500
  • 2993936b10 Fix ctypes definitions of llama_kv_cache_view_update and llama_kv_cache_view_free. (#1028) Eduard Christian Dumitrescu 2023-12-18 18:11:26 -0500
  • 5e863d8a3b Bump version Andrei Betlen 2023-12-18 16:09:18 -0500
  • cfd698c75c Update low_level_api_llama_cpp.py to match current API (#1023) Jonathan Soma 2023-12-18 15:59:11 -0500
  • 095c650006 Add offload_kqv option to llama and server Andrei Betlen 2023-12-18 15:36:09 -0500
  • 472b344ae3 Remove unused import Andrei Betlen 2023-12-18 15:32:40 -0500
  • 2fc48c54be Update llama.cpp Andrei Betlen 2023-12-18 15:32:15 -0500
  • 6b2e0e05b4 perf: Don't convert logprobs arrays to lists (#1021) kddubey 2023-12-18 11:28:12 -0800
  • 62944df142 Bugfix: Remove f16_kv, add offload_kqv field (#1019) Brandon Roberts 2023-12-18 12:27:11 -0700
  • 37da8e863a Update README.md functionary demo typo (#996) evelynmitchell 2023-12-16 17:00:30 -0700
  • f1c631dc53 Bug fixed with n_ctx=0 (#1015) Daniele Morotti 2023-12-17 00:59:50 +0100
  • 5a8944672f Fix logits_to_logprobs for 2-D and 3-D logits (#1002) kddubey 2023-12-16 15:59:26 -0800
  • 534b1ea9b5 Update llama.cpp Andrei Betlen 2023-12-16 18:57:43 -0500
  • cbce061ffd Bump version Andrei Betlen 2023-12-13 21:52:29 -0500
  • 8b4db732bd Add qwen chat format (#1005) yhfgyyf 2023-12-14 10:43:43 +0800
  • 690c563b60 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main Andrei Betlen 2023-12-13 21:43:19 -0500
  • c0fc0a1e82 Update llama.cpp Andrei Betlen 2023-12-13 21:43:16 -0500
  • 8e44a32075 Add support for running the server with SSL (#994) Radoslav Gerganov 2023-12-12 03:47:11 +0200
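    Running the server over HTTPS follows uvicorn's convention of passing a key file and certificate file. The flag names below are assumptions based on the PR title, so verify them against `python3 -m llama_cpp.server --help` on your installed version:

    ```shell
    python3 -m llama_cpp.server --model models/mistral-7b.Q4_K_M.gguf \
        --ssl_keyfile ./key.pem --ssl_certfile ./cert.pem
    ```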
  • ef22e478db Replace logits_to_logprobs implementation with numpy equivalent to llama.cpp (#991) Tanner Hobson 2023-12-11 20:46:27 -0500
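    What `logits_to_logprobs` computes is a numerically stable log-softmax: `logprob_i = logit_i - logsumexp(logits)`. A minimal pure-Python sketch of that math (the real implementation is vectorized NumPy and, per #1002 above, also handles 2-D and 3-D batches along the last axis):

    ```python
    import math

    def logits_to_logprobs(logits: list) -> list:
        """Stable log-softmax: logprob_i = logit_i - logsumexp(logits)."""
        m = max(logits)  # subtract the max so exp() cannot overflow
        logsumexp = m + math.log(sum(math.exp(x - m) for x in logits))
        return [x - logsumexp for x in logits]

    logprobs = logits_to_logprobs([0.5, 1.5, -2.0])
    ```

    Exponentiating the result recovers a probability distribution that sums to 1, which is a handy sanity check for any log-softmax implementation.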
  • ac35f68e4d Fix UnsupportedOperation: fileno in suppress_stdout_stderr (#961) zocainViken 2023-12-12 02:44:51 +0100
  • b938cccf05 Add Pygmalion chat format (#986) chiensen 2023-12-12 09:44:04 +0800
  • 6bbeea07ae README.md multimodal params fix (#967) zocainViken 2023-12-12 02:41:38 +0100
  • c1d92ce680 fix minor typo (#958) Aniket Maurya 2023-12-12 01:40:38 +0000
  • e9bc4c4baf Fix docker build Andrei Betlen 2023-12-11 10:39:51 -0500
  • c1e73e73a3 Bump version Andrei Betlen 2023-12-11 10:26:42 -0500
  • ec26f364cc Remove f16_kv Andrei Betlen 2023-12-11 10:25:37 -0500
  • f1edc66b21 Update llama.cpp Andrei Betlen 2023-12-11 10:21:35 -0500
  • f3b844ed0a Update llama.cpp Andrei Betlen 2023-11-29 05:40:22 -0500
  • b069d06346 Fix #891 (#952) kddubey 2023-11-29 02:39:52 -0800
  • ad963a0961 Bump version Andrei Betlen 2023-11-28 04:58:20 -0500
  • e3941d9c67 Make building llava optional Andrei Betlen 2023-11-28 04:55:21 -0500
  • 74f1949206 Update llama.cpp Andrei Betlen 2023-11-28 04:54:51 -0500
  • fb32f9d438 docs: Update README Andrei Betlen 2023-11-28 03:15:01 -0500
  • 43e006a291 docs: Remove divider Andrei Betlen 2023-11-28 02:41:50 -0500
  • 2cc6c9ae2f docs: Update README, add FAQ Andrei Betlen 2023-11-28 02:37:34 -0500
  • 7f3704b896 Bump version Andrei Betlen 2023-11-27 19:14:25 -0500
  • f99b2385ee Update llama.cpp Andrei Betlen 2023-11-27 19:03:10 -0500
  • 396dbf0b2b docs: Improve low-level docstrings Andrei Betlen 2023-11-27 19:03:02 -0500
  • 9c68b1804a docs: Add api reference links in README Andrei Betlen 2023-11-27 18:54:07 -0500
  • 174ef3ddf6 docs: Add headings to API reference Andrei Betlen 2023-11-27 18:42:15 -0500
  • 41428244f0 docs: Fix README indentation Andrei Betlen 2023-11-27 18:29:13 -0500
  • 1539146a5e docs: Fix README docs link Andrei Betlen 2023-11-27 18:21:00 -0500
  • a928893d03 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main Andrei Betlen 2023-11-26 15:57:13 -0500
  • 6308f21d5e docs: Update Llama docs Andrei Betlen 2023-11-26 15:56:40 -0500
  • aa5a7a1880 Update README.md (#940) Anton Vice 2023-11-27 04:39:38 +0800
  • c2d63a7148 fix: Typo in the Open Orca chat format #874 (#947) Gardner Bickford 2023-11-27 09:39:18 +1300
  • f03a38e62a Update llama.cpp Andrei Betlen 2023-11-26 15:38:22 -0500
  • 1a7bf2037b docs: Update openapi endpoint names Andrei Betlen 2023-11-24 03:39:29 -0500
  • 4026166e68 docs: Update completion and chat_completion parameter docstrings Andrei Betlen 2023-11-24 03:24:19 -0500
  • 945e20fa2c docs: update link Andrei Betlen 2023-11-24 00:18:32 -0500
  • e6a36b840e docs: edit function calling docs Andrei Betlen 2023-11-24 00:17:54 -0500
  • 8c3aa7858b Merge branch 'main' of github.com:abetlen/llama_cpp_python into main Andrei Betlen 2023-11-24 00:15:36 -0500
  • 19e02f1f87 docs: Add link to function calling notebook Andrei Betlen 2023-11-24 00:15:02 -0500
  • de2e2bc083 misc fix verbose printing in functionary model Andrei Betlen 2023-11-23 20:14:23 -0500
  • 36048d46af Update llama.cpp Andrei Betlen 2023-11-23 16:26:00 -0500
  • d68fc07b1b Add Zephyr format (#937) mrfakename 2023-11-22 22:20:08 -0800
  • 4184835078 Add chat format to support baichuan (#938) caiyesd 2023-11-23 14:19:50 +0800
  • 4474157949 ci: tag built docker images with current version Andrei Betlen 2023-11-23 01:06:47 -0500
  • 21abefa488 docs: Add grammar and types to api reference Andrei Betlen 2023-11-23 00:27:41 -0500
  • 6aab77de04 docs: Fix module import bug Andrei Betlen 2023-11-23 00:27:22 -0500
  • c647f01609 Add from_json_schema to LlamaGrammar Andrei Betlen 2023-11-23 00:27:00 -0500
  • be1f64d569 docs: Add docstrings from llama.cpp Andrei Betlen 2023-11-23 00:26:26 -0500
  • 31cf0ec680 docs: Fix mkdocstrings heading level Andrei Betlen 2023-11-22 23:45:19 -0500
  • e349f314b4 docs: Fix API Reference page Andrei Betlen 2023-11-22 23:45:02 -0500
  • b6bb7ac76a docs: Add Llama class example Andrei Betlen 2023-11-22 23:10:04 -0500
  • c5173b0fb3 docs: Configure mkdocstrings Andrei Betlen 2023-11-22 23:09:42 -0500
  • 3303ebe92b docs: Add dark mode and pymarkdown extensions Andrei Betlen 2023-11-22 22:47:22 -0500
  • abb1976ad7 docs: Add n_ctx note for multimodal models Andrei Betlen 2023-11-22 21:07:00 -0500
  • 36679a58ef Merge branch 'main' of github.com:abetlen/llama_cpp_python into main Andrei Betlen 2023-11-22 19:49:59 -0500
  • bd43fb2bfe docs: Update high-level python api examples in README to include chat formats, function calling, and multi-modal models. Andrei Betlen 2023-11-22 19:49:56 -0500
  • d977b44d82 docs: Add links to server functionality Andrei Betlen 2023-11-22 18:21:02 -0500
  • aa815d580c docs: Link to langchain docs Andrei Betlen 2023-11-22 18:17:49 -0500
  • 357e4dd69f docs: Use nav for better site layout control Andrei Betlen 2023-11-22 18:16:30 -0500
  • 602ea64ddd docs: Fix whitespace Andrei Betlen 2023-11-22 18:09:31 -0500
  • 971864ce92 docs: Watch README for changes during docs development Andrei Betlen 2023-11-22 18:08:17 -0500
  • f336eebb2f docs: fix 404 to macos installation guide. Closes #861 Andrei Betlen 2023-11-22 18:07:30 -0500
  • 1ff2c92720 docs: minor indentation fix Andrei Betlen 2023-11-22 18:04:18 -0500
  • 68238b7883 docs: setting n_gqa is no longer required Andrei Betlen 2023-11-22 18:01:54 -0500
  • 198178225c docs: Remove stale warning Andrei Betlen 2023-11-22 17:59:16 -0500
  • 5a9770a56b Improve documentation for server chat formats (#934) Juraj Bednar 2023-11-22 12:10:03 +0100
  • b8f29f4bf0 Add baichuan-2 chat format (#936) caiyesd 2023-11-22 19:08:06 +0800
  • 9515467439 tests: add mock_kv_cache placeholder functions Andrei Betlen 2023-11-22 06:02:21 -0500
  • 0ea244499e tests: avoid constantly reallocating logits Andrei Betlen 2023-11-22 04:31:05 -0500
  • 0a7e05bc10 tests: don't mock sampling functions Andrei Betlen 2023-11-22 04:12:32 -0500
  • d7388f1ffb Use mock_llama for all tests Andrei Betlen 2023-11-21 18:13:19 -0500
  • dbfaf53fe0 Update llama.cpp Andrei Betlen 2023-11-21 18:12:38 -0500
  • 8b6ca22846 Fix type warnings for json schema grammar converter Andrei Betlen 2023-11-21 13:32:00 -0500
  • 230fc8b535 Bump version Andrei Betlen 2023-11-21 05:04:55 -0500