evelynmitchell
37da8e863a
Update README.md functionary demo typo (#996)
...
missing comma
2023-12-16 19:00:30 -05:00
zocainViken
6bbeea07ae
README.md multimodal params fix (#967)
...
multimodal params fix: add logits = True to make llava work
2023-12-11 20:41:38 -05:00
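The fix above refers to the multimodal (llava) example in the README. A minimal, hedged sketch of what that example looks like with the flag added; the model file, clip projector path, and image URL below are placeholders, not values taken from the commit:

```python
# Sketch only: llava-style multimodal chat with llama-cpp-python,
# passing logits_all=True as the fix above describes.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="mmproj.bin")  # placeholder path
llm = Llama(
    model_path="llava-v1.5-7b.Q4_K_M.gguf",  # placeholder model file
    chat_handler=chat_handler,
    n_ctx=2048,        # extra context for the image embeddings
    logits_all=True,   # the parameter this fix adds so llava works
)
response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ]
)
print(response["choices"][0]["message"]["content"])
```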
Aniket Maurya
c1d92ce680
fix minor typo (#958)
...
* fix minor typo
* Fix typo
---------
Co-authored-by: Andrei <abetlen@gmail.com>
2023-12-11 20:40:38 -05:00
Andrei Betlen
fb32f9d438
docs: Update README
2023-11-28 03:15:01 -05:00
Andrei Betlen
43e006a291
docs: Remove divider
2023-11-28 02:41:50 -05:00
Andrei Betlen
2cc6c9ae2f
docs: Update README, add FAQ
2023-11-28 02:37:34 -05:00
Andrei Betlen
9c68b1804a
docs: Add api reference links in README
2023-11-27 18:54:07 -05:00
Andrei Betlen
41428244f0
docs: Fix README indentation
2023-11-27 18:29:13 -05:00
Andrei Betlen
1539146a5e
docs: Fix README docs link
2023-11-27 18:21:00 -05:00
Anton Vice
aa5a7a1880
Update README.md (#940)
...
.ccp >> .cpp
2023-11-26 15:39:38 -05:00
Andrei Betlen
abb1976ad7
docs: Add n_ctx note for multimodal models
2023-11-22 21:07:00 -05:00
Andrei Betlen
36679a58ef
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-11-22 19:49:59 -05:00
Andrei Betlen
bd43fb2bfe
docs: Update high-level Python API examples in README to include chat formats, function calling, and multi-modal models.
2023-11-22 19:49:56 -05:00
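The entry above reworks the high-level Python API examples. A short, hedged sketch of the chat-format part of that API; the model path and chat_format value are illustrative rather than copied from the README:

```python
# Sketch only: high-level chat completion with an explicit chat format.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b-chat.Q4_K_M.gguf",  # placeholder model file
    chat_format="llama-2",                     # pick the format matching the model
)
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Name the planets of the solar system."},
    ],
)
print(result["choices"][0]["message"]["content"])
```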
Andrei Betlen
d977b44d82
docs: Add links to server functionality
2023-11-22 18:21:02 -05:00
Andrei Betlen
aa815d580c
docs: Link to langchain docs
2023-11-22 18:17:49 -05:00
Andrei Betlen
602ea64ddd
docs: Fix whitespace
2023-11-22 18:09:31 -05:00
Andrei Betlen
f336eebb2f
docs: fix 404 link to macOS installation guide. Closes #861
2023-11-22 18:07:30 -05:00
Andrei Betlen
1ff2c92720
docs: minor indentation fix
2023-11-22 18:04:18 -05:00
Andrei Betlen
68238b7883
docs: setting n_gqa is no longer required
2023-11-22 18:01:54 -05:00
Andrei Betlen
198178225c
docs: Remove stale warning
2023-11-22 17:59:16 -05:00
Juraj Bednar
5a9770a56b
Improve documentation for server chat formats (#934)
2023-11-22 06:10:03 -05:00
James Braza
23a221999f
Documenting server usage (#768)
2023-11-21 00:24:22 -05:00
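The server documentation added above covers the OpenAI-compatible web server. A hedged sketch of launching it as a module from Python; the model path is a placeholder, and the equivalent shell command is `python -m llama_cpp.server --model <path>`:

```python
# Sketch only: start the bundled OpenAI-compatible server in a subprocess.
import subprocess
import sys

subprocess.run(
    [sys.executable, "-m", "llama_cpp.server", "--model", "models/7B/llama-model.gguf"],
    check=True,  # raise if the server exits with an error
)
```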
Sujeendran Menon
7b136bb5b1
Fix for shared library not found and compile issues in Windows (#848)
...
* fix windows library dll name issue
* Updated README.md Windows instructions
* Update llama_cpp.py to handle different windows dll file versions
2023-11-01 18:55:57 -04:00
Jason Cox
40b22909dc
Update examples from ggml to gguf and add hw-accel note for Web Server (#688)
...
* Examples from ggml to gguf
* Use gguf file extension
Update examples to use filenames with gguf extension (e.g. llama-model.gguf).
---------
Co-authored-by: Andrei <abetlen@gmail.com>
2023-09-14 14:48:21 -04:00
Andrei Betlen
f4090a0bb2
Add NUMA support; low-level API users must now explicitly call llama_backend_init at the start of their programs.
2023-09-13 23:00:43 -04:00
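For low-level API users, the entry above makes initialization explicit. A hedged sketch of the call order; the exact signature of llama_backend_init has changed across versions, and it is shown here with the NUMA flag from this era:

```python
# Sketch only: low-level API programs initialize the backend first and
# free it at exit. Everything between the two calls is elided.
import llama_cpp

llama_cpp.llama_backend_init(False)  # numa=False; must run before other llama_cpp calls
# ... load a model, create a context, and run inference via the low-level API ...
llama_cpp.llama_backend_free()       # matching teardown when the program exits
```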
Andrei Betlen
8ddf63b9c7
Remove reference to FORCE_CMAKE from docs
2023-09-12 23:56:10 -04:00
Andrei Betlen
bcef9ab2d9
Update title
2023-09-12 19:02:30 -04:00
Andrei Betlen
89ae347585
Remove references to force_cmake
2023-09-12 19:02:20 -04:00
Andrei Betlen
1dd3f473c0
Remove references to FORCE_CMAKE
2023-09-12 19:01:16 -04:00
Andrei Betlen
1910793f56
Merge branch 'main' into v0.2-wip
2023-09-12 16:43:32 -04:00
Juarez Bochi
20ac434d0f
Fix low level api examples
2023-09-07 17:50:47 -04:00
Andrei Betlen
895f84f8fa
Add ROCm / AMD instructions to docs
2023-08-25 17:19:23 -04:00
Andrei Betlen
ac47d55577
Merge branch 'main' into v0.2-wip
2023-08-25 15:45:22 -04:00
Andrei
915bbeacc5
Merge pull request #633 from abetlen/gguf
...
GGUF (Breaking Change to Model Files)
2023-08-25 15:13:12 -04:00
Andrei Betlen
ac37ea562b
Add temporary docs for GGUF model conversion
2023-08-25 15:11:08 -04:00
Andrei Betlen
80389f71da
Update README
2023-08-25 05:02:48 -04:00
Andrei Betlen
cf405f6764
Merge branch 'main' into v0.2-wip
2023-08-24 00:30:51 -04:00
Andrei
2a0844b190
Merge pull request #556 from Huge/patch-1
...
Fix dev setup in README.md so that everyone can run it
2023-08-17 23:21:45 -04:00
Mike Zeng
097fba25e5
Fixed spelling error "lowe-level API" to "low-level API"
2023-08-05 02:00:04 -05:00
Huge
60e85cbe46
Fix dev setup in README.md so that everyone can run it
2023-08-02 12:27:08 +02:00
Ihsan Soydemir
0687a3092b
Fix typo in 70B path
2023-07-25 20:49:44 +02:00
Andrei Betlen
343480364f
Merge branch 'main' into v0.2-wip
2023-07-24 15:26:08 -04:00
bretello
0f09f10e8c
add support for llama2 70b
2023-07-24 19:38:24 +02:00
Andrei Betlen
0538ba1dab
Merge branch 'main' into v0.2-wip
2023-07-20 19:06:26 -04:00
Andrew Duffy
b6b2071180
Update install instructions for Linux OpenBLAS
...
The instructions are different than they used to be.
Source: https://github.com/ggerganov/llama.cpp#openblas
2023-07-18 22:22:33 -04:00
Andrei Betlen
57db1f9570
Update development docs for scikit-build-core. Closes #490
2023-07-18 20:26:25 -04:00
Andrei
5eab1db0d0
Merge branch 'main' into v0.2-wip
2023-07-18 18:54:27 -04:00
Andrei Betlen
6cb77a20c6
Migrate to scikit-build-core. Closes #489
2023-07-18 18:52:29 -04:00
Carlos Tejada
b24b10effd
Added info to set ENV variables in PowerShell
...
- Added an example on how to set the variables `CMAKE_ARGS`
and `FORCE_CMAKE`.
- Added subtitles for the `Windows` and `MacOS` remarks.
2023-07-18 17:14:42 -04:00
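The entry above documents setting `CMAKE_ARGS` and `FORCE_CMAKE` before installing. The same idea, sketched from Python instead of PowerShell, with the backend flag chosen purely as an example:

```python
# Sketch only: export the build-time variables and reinstall llama-cpp-python.
import os
import subprocess
import sys

env = dict(os.environ, CMAKE_ARGS="-DLLAMA_CUBLAS=on", FORCE_CMAKE="1")  # example flag
subprocess.run(
    [sys.executable, "-m", "pip", "install",
     "--force-reinstall", "--no-cache-dir", "llama-cpp-python"],
    env=env,
    check=True,
)
```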
Andrei
15e0e0a937
Merge pull request #390 from SubhranshuSharma/main
...
Added Termux (with root) instructions
2023-07-14 16:53:23 -04:00