This tool uses an off-the-shelf Llama model to analyze a repository's issues, generate insights, and create a report summarizing the repository's state. It serves as a reference implementation for building custom reporting and data-analytics applications with Llama.
The tool performs the following tasks:

- Parses and generates annotations for the repository's issues
- Assigns each issue to a category
- Generates a high-level executive summary and analysis of the parsed and generated data
For a step-by-step look, check out the walkthrough notebook.
Install the dependencies:

```
pip install -r requirements.txt
```
Update the `model` section of `config.yaml` to use Llama via VLLM or Groq. VLLM supports the `guided_json` generation argument, while Groq requires passing the schema in the system prompt.

Run the tool with a repository name and date range:

```
python triage.py --repo_name='meta-llama/llama-recipes' --start_date='2024-08-14' --end_date='2024-08-27'
```
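The difference between the two backends can be sketched as request payloads (a hypothetical sketch: the model names and the exact payload fields your endpoint accepts are assumptions, though vLLM's OpenAI-compatible server does accept `guided_json` as an extra generation argument):

```python
import json

# Illustrative JSON schema for an issue annotation (not the repo's actual schema).
ISSUE_SCHEMA = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "severity": {"type": "string", "enum": ["low", "medium", "high"]},
    },
    "required": ["summary", "severity"],
}

def build_vllm_request(prompt: str) -> dict:
    # vLLM: the schema is passed as a guided-decoding generation argument.
    return {
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",  # assumed model name
        "messages": [{"role": "user", "content": prompt}],
        "extra_body": {"guided_json": ISSUE_SCHEMA},
    }

def build_groq_request(prompt: str) -> dict:
    # Groq: no guided-decoding argument, so the schema is embedded in the
    # system prompt and JSON output is requested instead.
    system = "Respond only with JSON matching this schema:\n" + json.dumps(ISSUE_SCHEMA)
    return {
        "model": "llama-3.1-8b-instant",  # assumed model name
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        "response_format": {"type": "json_object"},
    }
```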
The tool generates `annotations`, `challenges`, and `overview` data, which can be persisted in SQL tables for downstream analyses and reporting.

The tool's configuration is stored in `config.yaml`. The following sections can be edited:
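For example, the generated data could be persisted to SQLite for downstream queries (a sketch with hypothetical table and column names; the repo's actual output layout may differ):

```python
import sqlite3

# Hypothetical rows of the three kinds of generated data.
annotations = [("issue-101", "Crash on model load", "bug")]
challenges = [("installation problems", 7)]
overview = [("2024-08-14/2024-08-27", "Most issues relate to installation.")]

conn = sqlite3.connect(":memory:")  # use a file path to persist to disk
conn.executescript("""
    CREATE TABLE annotations (issue_id TEXT, summary TEXT, category TEXT);
    CREATE TABLE challenges (theme TEXT, issue_count INTEGER);
    CREATE TABLE overview (period TEXT, executive_summary TEXT);
""")
conn.executemany("INSERT INTO annotations VALUES (?, ?, ?)", annotations)
conn.executemany("INSERT INTO challenges VALUES (?, ?)", challenges)
conn.executemany("INSERT INTO overview VALUES (?, ?)", overview)
conn.commit()

# Downstream analysis: count issues per category.
rows = conn.execute(
    "SELECT category, COUNT(*) FROM annotations GROUP BY category"
).fetchall()
```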
- `model`: Specify the model service (`vllm` or `groq`) and set the endpoints and API keys as applicable.
- `parse_issue`: Parsing and generating annotations for the issues.
- `assign_category`: Assigns each issue to a category specified in an enum in the corresponding JSON schema.
- `get_overview`: Generates a high-level executive summary and analysis of all the parsed and generated data.
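As an illustration of the enum-constrained category task, downstream code can verify that the model's JSON answer stays within the schema's enum (the category names and helper below are hypothetical, not taken from this repo):

```python
import json

# Hypothetical schema mirroring the assign_category task's enum constraint.
ASSIGN_CATEGORY_SCHEMA = {
    "type": "object",
    "properties": {
        "category": {
            "type": "string",
            "enum": ["bug", "documentation", "feature_request", "question"],
        }
    },
    "required": ["category"],
}

def validate_category(raw_response: str) -> str:
    """Check that the model's JSON answer uses an allowed enum value."""
    data = json.loads(raw_response)
    allowed = ASSIGN_CATEGORY_SCHEMA["properties"]["category"]["enum"]
    category = data.get("category")
    if category not in allowed:
        raise ValueError(f"category {category!r} not in {allowed}")
    return category
```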