f2890a4494
* fix(ext_server): Port llama.cpp sampling refactors to ext_server
This was a fairly large changeset. I closely followed the changes here:
df270ef745
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(server.cpp): Refactor server.cpp logging for llama.cpp overhaul
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Bump llama.cpp to the latest master with `granite` support
This does not yet include granite MoE support; that can come in a
follow-up PR
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(patches): Update all patches (except solar-pro) to work with bumped llama.cpp
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(solar): Update solar patch for llama.cpp bump
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat(llama.cpp): Bump llama.cpp for granitemoe support
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(solar): Update the solar-pro patch for latest llama.cpp bump
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat(llama.cpp): Bump to the latest master of llama.cpp
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(patches): Update all patches for latest bump
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat(llama): Always run sync.sh from the right directory
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(llama/patches): Update llama patches
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat(llama)!: Rough sync with llama.cpp submodule
There are a number of changes that will need to be propagated to llama.go
before any of this works!
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(llama/patches): Add a patch and update for missing ggml-impl.h include
This include is where the ggml_cgraph struct is defined. It is included in
many of the .c files to complete the forward declaration in ggml.h. It seems
that with the subset of code included here, the include was somehow lost (or
out of order) when building, so adding this include to llama.cpp fixes the
missing definition.
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
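For context, this is the standard C forward-declaration pattern at play: the
public header only declares that the struct exists, and the internal header
supplies the full definition that any translation unit dereferencing the
struct must see. A minimal illustration (field names are illustrative, not the
actual ggml layout):

/* ggml.h (public header): forward declaration only - the type is incomplete */
struct ggml_cgraph;
void ggml_graph_print(const struct ggml_cgraph * cgraph);

/* ggml-impl.h (internal header): the full definition */
struct ggml_cgraph {
    int n_nodes;
    int n_leafs;
};

/* any .c file that dereferences a ggml_cgraph (e.g. cgraph->n_nodes) must
   include ggml-impl.h; with only the forward declaration the compiler
   rejects the access as an incomplete type */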
* fix(llama/sync): Add missing ggml-cpu-impl.h copy-over in sync.sh
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(llama): Add missing log.cpp
This was added as part of the logging overhaul done in llama.cpp
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(llama): Overhaul use of sampling module for llama.cpp changes
The changes here mirror the big llama.cpp sampling refactor PR
https://github.com/ggerganov/llama.cpp/pull/9294
The sampling functionality is now split into the base interface
(llama_sampler) and the generation implementation (gpt_sampler). Since the
sampling.h/sampling.cpp code uses C++ STL headers, the sampling_ext.[h|cpp]
wrapper is maintained so that Go can access a pure-C interface.
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
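The wrapper follows the usual cgo pattern: Go can only call plain C, so the
C++ sampler is hidden behind an opaque pointer and a set of extern "C"
functions. A minimal sketch of the shape (function names here are
illustrative, not the exact sampling_ext API):

// pure C surface, safe to #include from cgo
#ifdef __cplusplus
extern "C" {
#endif

struct llama_context;   // opaque llama.cpp type, forward-declared only
struct gpt_sampler;     // opaque C++ sampler; its definition never crosses the C boundary

struct gpt_sampler * sampling_cinit(/* sampling params elided */);
int                  sampling_csample(struct gpt_sampler * s, struct llama_context * ctx, int idx);
void                 sampling_cfree(struct gpt_sampler * s);

#ifdef __cplusplus
}
#endif

On the C++ side each function simply forwards to the gpt_sampler API from
common/sampling.h; on the Go side these C functions are what cgo binds
against.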
* fix(llama): Fix the impl of SampleTokenGreedy for new sampling
I don't think this method is currently used, so it could probably just be
removed so that all sampling goes through the GPT interface, but in the
interest of doing no harm, this should keep the method working as expected.
Branch: IBMGraniteArchitectureSupport
* fix(llama): Remove unused SampleTokenGreedy
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
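For reference, after PR #9294 greedy decoding can be expressed directly with
the new llama_sampler chain, which is why a bespoke greedy path is no longer
needed. A sketch based on the upstream API at the time (given an existing
llama_context * ctx; exact signatures may have shifted since):

#include "llama.h"

llama_token sample_greedy(llama_context * ctx) {
    // build a sampler chain containing only the greedy sampler
    llama_sampler * smpl = llama_sampler_chain_init(llama_sampler_chain_default_params());
    llama_sampler_chain_add(smpl, llama_sampler_init_greedy());

    // sample from the logits of the last token decoded on ctx
    llama_token tok = llama_sampler_sample(smpl, ctx, -1);

    llama_sampler_free(smpl);
    return tok;
}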
* fix(sync): Remove bash-specific change to sync.sh
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* chore(gofumpt): Format on llama.go to pass linting
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(llm): Fix missing <thread> include in ext_server
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(llama): Remove TODO about grammar_first
This feature was not used/needed previously, so it should be fine without
plumbing it through now.
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(llama): Better naming for sampling wrapper and args
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(llama): Fix patch 05 to use new wrapper api and re-sync
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* runner: Flush pending responses before returning
If there are any pending responses (such as from potential stop
tokens), then we should send them back before ending the sequence.
Otherwise, we may be missing tokens at the end of a response.
Fixes #6707
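The fix itself lands in the Go runner; the logic is sketched below in C++ for
consistency with the vendored code (hypothetical names, not the runner's
actual types): token pieces held back while checking for a partial
stop-sequence match must be emitted once the sequence ends for any other
reason.

#include <cstdio>
#include <string>
#include <vector>

// Illustrative sequence state: pieces that might begin a stop sequence are
// buffered in `pending` instead of being streamed immediately.
struct Sequence {
    std::vector<std::string> pending;
    void send(const std::string & piece) {
        // stand-in for streaming the piece back to the client
        printf("%s", piece.c_str());
    }
};

// On end of sequence (EOS, length limit, ...), flush anything still pending;
// without this, the tail of the response is silently dropped.
void finish_sequence(Sequence & seq) {
    for (const auto & piece : seq.pending) {
        seq.send(piece);
    }
    seq.pending.clear();
    // ... then emit the final "done" response
}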
* fix(llama/sampling): Use gpt_sampler with a forward declaration
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(llama): Remove unnecessary patch for gguf impl header
This was caused by an earlier mistake in the embeddings patch that was
dereferencing the pointer instead of using the wrapper API.
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(llm): Remove use of deprecated --log-disable flag
Branch: IBMGraniteArchitectureSupport
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
---------
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
/**
 * llama.cpp - commit 3f1ae2e32cde00c39b96be6d01c2997c29bae555 - do not edit this file
 *
 * MIT License
 *
 * Copyright (c) 2023-2024 The ggml authors
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#pragma once

#include "llama-impl.h"

#include <map>

struct llama_vocab;

// grammar element type
enum llama_gretype {
    // end of rule definition
    LLAMA_GRETYPE_END            = 0,

    // start of alternate definition for rule
    LLAMA_GRETYPE_ALT            = 1,

    // non-terminal element: reference to rule
    LLAMA_GRETYPE_RULE_REF       = 2,

    // terminal element: character (code point)
    LLAMA_GRETYPE_CHAR           = 3,

    // inverse char(s) ([^a], [^a-b] [^abc])
    LLAMA_GRETYPE_CHAR_NOT       = 4,

    // modifies a preceding LLAMA_GRETYPE_CHAR or LLAMA_GRETYPE_CHAR_ALT to
    // be an inclusive range ([a-z])
    LLAMA_GRETYPE_CHAR_RNG_UPPER = 5,

    // modifies a preceding LLAMA_GRETYPE_CHAR or
    // LLAMA_GRETYPE_CHAR_RNG_UPPER to add an alternate char to match ([ab], [a-zA])
    LLAMA_GRETYPE_CHAR_ALT       = 6,

    // any character (.)
    LLAMA_GRETYPE_CHAR_ANY       = 7,
};

typedef struct llama_grammar_element {
    enum llama_gretype type;
    uint32_t           value; // Unicode code point or rule ID
} llama_grammar_element;

struct llama_partial_utf8 {
    uint32_t value;    // bit value so far (unshifted)
    int      n_remain; // num bytes remaining; -1 indicates invalid sequence
};

struct llama_grammar_candidate {
    size_t             index;
    const uint32_t   * code_points;
    llama_partial_utf8 partial_utf8;
};

using llama_grammar_rule  = std::vector<      llama_grammar_element>;
using llama_grammar_stack = std::vector<const llama_grammar_element *>;

using llama_grammar_rules      = std::vector<llama_grammar_rule>;
using llama_grammar_stacks     = std::vector<llama_grammar_stack>;
using llama_grammar_candidates = std::vector<llama_grammar_candidate>;

const llama_grammar_rules  & llama_grammar_get_rules (const struct llama_grammar * grammar);
      llama_grammar_stacks & llama_grammar_get_stacks(      struct llama_grammar * grammar);

// takes a set of possible pushdown stacks on a grammar, which are required to
// be positioned at a character range (see `llama_grammar_advance_stack`), and
// produces the N possible stacks if the given char is accepted at those
// positions
void llama_grammar_accept(
        const llama_grammar_rules  & rules,
        const llama_grammar_stacks & stacks,
                          uint32_t   chr,
              llama_grammar_stacks & stacks_new);

std::vector<llama_grammar_candidate> llama_grammar_reject_candidates_for_stack(
        const llama_grammar_rules      & rules,
        const llama_grammar_stack      & stack,
        const llama_grammar_candidates & candidates);

struct llama_grammar_parser {
    std::map<std::string, uint32_t> symbol_ids;

    llama_grammar_rules rules;

    llama_grammar_stack c_rules() const;

    uint32_t get_symbol_id(const char * src, size_t len);
    uint32_t generate_symbol_id(const std::string & base_name);

    void add_rule(uint32_t rule_id, const llama_grammar_rule & rule);

    const char * parse_alternates(
            const char        * src,
            const std::string & rule_name,
            uint32_t            rule_id,
            bool                is_nested);

    const char * parse_sequence(
            const char         * src,
            const std::string  & rule_name,
            llama_grammar_rule & rule,
            bool                 is_nested);

    const char * parse_rule(const char * src);

    bool parse(const char * src);
    void print(FILE * file);
};

struct llama_grammar {
    // note: allow null vocab for testing (not great)
    const llama_vocab * vocab;

    const llama_grammar_rules  rules;  // TODO: shared ptr
          llama_grammar_stacks stacks;

    // buffer for partially generated UTF-8 sequence from accepted tokens
    llama_partial_utf8 partial_utf8;
};

//
// internal API
//

// note: needed for tests (not great)
struct llama_grammar * llama_grammar_init_impl(
        const struct llama_vocab * vocab,
        const llama_grammar_element ** rules,
        size_t n_rules,
        size_t start_rule_index);

struct llama_grammar * llama_grammar_init_impl(const struct llama_vocab * vocab, const char * grammar_str, const char * grammar_root);

void llama_grammar_free_impl(struct llama_grammar * grammar);

struct llama_grammar * llama_grammar_clone_impl(const struct llama_grammar & grammar);

// TODO: move the API below as member functions of llama_grammar
void llama_grammar_apply_impl(
        const struct llama_grammar & grammar,
              llama_token_data_array * cur_p);

void llama_grammar_accept_impl(
        struct llama_grammar & grammar,
        llama_token token);
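As the comments above note, this internal API exists mainly for tests. A
minimal sketch exercising the parser half of the header (assuming it is built
inside the llama.cpp tree so llama-grammar.h resolves):

#include "llama-grammar.h"

#include <cstdio>

int main() {
    // a tiny GBNF grammar: the root rule matches "yes" or "no"
    const char * gbnf = "root ::= \"yes\" | \"no\"";

    llama_grammar_parser parser;
    if (!parser.parse(gbnf)) {
        fprintf(stderr, "failed to parse grammar\n");
        return 1;
    }

    // dump the parsed symbol table and rules for inspection
    parser.print(stdout);
    return 0;
}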