| Name | Last commit | Commit message | Age |
| --- | --- | --- | --- |
| benchmarks | 816b25fc8f | Remove local tokenizer requirement for vllm on prem throughput benchmark | 1 year ago |
| code_llama | 6d449a859b | New folder structure (#1) | 2 years ago |
| evaluation | a2a2ffd78a | fix lm_eval.tasks' has no attribute 'initialize_tasks' error | 2 years ago |
| finetuning | a695fd7f81 | fix typos | 1 year ago |
| inference | 93cd3b99d2 | Updating links to running llama3 locally | 1 year ago |
| llama_api_providers | 7c484c34cc | Bump gradio | 1 year ago |
| multilingual | e98f6de80d | typo | 2 years ago |
| quickstart | 23afbd481e | some typo fixes; codellama 70; tokens generated; colab link | 2 years ago |
| responsible_ai | c1be7d802a | Updating responsible AI main readme | 2 years ago |
| use_cases | 93cd3b99d2 | Updating links to running llama3 locally | 1 year ago |
| README.md | 9ac7160498 | Merge branch 'main' into tmoreau89/android | 2 years ago |