This tool uses an off-the-shelf Llama model to analyze a repository's issues, generate insights, and create a report that summarizes the state of the repository. It serves as a reference implementation for using Llama to develop custom reporting and data analytics applications.
The tool performs the following tasks: it parses the issues in the target repository and generates annotations for each one, assigns every issue to a category, summarizes the key challenges raised by users, and produces a high-level executive overview of the findings.
For a step-by-step look, check out the walkthrough notebook (`walkthrough.ipynb`).
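To make the flow concrete, here is a minimal sketch of the pipeline. The function names mirror the prompt names described in the configuration section below, but the bodies are stubs and this is not the tool's actual API (the real logic lives in `triage.py` and `llm.py`):

```python
from typing import Any

# Illustrative stand-ins for the real helpers; bodies are stubs.
def fetch_issues(repo: str, start: str, end: str) -> list[dict[str, Any]]:
    return []  # the real tool pulls issue threads for the given date window

def parse_issue(issue: dict[str, Any]) -> dict[str, Any]:
    return {"issue": issue, "severity": "unknown"}  # real tool: LLM call with the parse_issue prompt

def assign_category(annotation: dict[str, Any]) -> str:
    return "other"  # real tool: LLM call constrained to an enum in a JSON schema

def get_overview(annotations: list[dict[str, Any]]) -> str:
    return "overview"  # real tool: LLM call with the get_overview prompt

def triage(repo: str, start: str, end: str) -> dict[str, Any]:
    issues = fetch_issues(repo, start, end)
    annotations = [parse_issue(i) for i in issues]
    for a in annotations:
        a["category"] = assign_category(a)
    return {"annotations": annotations, "overview": get_overview(annotations)}

print(triage("meta-llama/llama-recipes", "2024-08-14", "2024-08-27"))
```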
Install the dependencies with `pip install -r requirements.txt`.
Set the parameters in the `model` section of `config.yaml` to use Llama via VLLM or Groq. VLLM supports constraining the output with the `guided_json` generation argument, while Groq requires passing the JSON schema in the system prompt (see the sketch below). Then run the tool, for example:

`python triage.py --repo_name='meta-llama/llama-recipes' --start_date='2024-08-14' --end_date='2024-08-27'`
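To illustrate the difference in how structured output is requested from the two backends, here is a minimal sketch, not the repo's `llm.py`. It assumes OpenAI-compatible chat endpoints; the endpoint URLs, model names, and schema are placeholders:

```python
import json
import requests

# Toy JSON schema; the real schemas are defined alongside the prompts in config.yaml.
SCHEMA = {
    "type": "object",
    "properties": {"category": {"type": "string"}, "severity": {"type": "string"}},
    "required": ["category", "severity"],
}

def call_vllm(prompt: str) -> dict:
    # vLLM's OpenAI-compatible server accepts a guided_json field to constrain decoding.
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",  # placeholder endpoint
        json={
            "model": "meta-llama/Meta-Llama-3.1-70B-Instruct",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
            "guided_json": SCHEMA,
        },
        timeout=120,
    )
    return json.loads(resp.json()["choices"][0]["message"]["content"])

def call_groq(prompt: str, api_key: str) -> dict:
    # Groq has no guided-decoding argument here, so the schema goes into the system prompt.
    system = f"Answer ONLY with JSON matching this schema:\n{json.dumps(SCHEMA)}"
    resp = requests.post(
        "https://api.groq.com/openai/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "llama-3.1-70b-versatile",  # placeholder model name
            "messages": [
                {"role": "system", "content": system},
                {"role": "user", "content": prompt},
            ],
        },
        timeout=120,
    )
    return json.loads(resp.json()["choices"][0]["message"]["content"])
```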
The tool generates annotations, challenges, and overview data, which can be persisted in SQL tables for downstream analyses and reporting.
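As an illustration of persisting those tables to SQL (this step is not part of the tool itself), the generated data could be written to a database with pandas:

```python
import sqlite3
import pandas as pd

# Hypothetical rows; in practice these would be loaded from the tool's output files.
annotations = pd.DataFrame([{"issue": 123, "category": "bug", "severity": "high"}])
challenges = pd.DataFrame([{"theme": "installation", "count": 7}])

with sqlite3.connect("triage.db") as conn:
    # Append so repeated runs over new date windows accumulate history.
    annotations.to_sql("annotations", conn, if_exists="append", index=False)
    challenges.to_sql("challenges", conn, if_exists="append", index=False)
```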
The tool's configuration is stored in `config.yaml`. The following sections can be edited:

- `model`: Select the inference backend (`vllm` or `groq`) and set the endpoints and API keys as applicable.
- `parse_issue`: The prompt for parsing the issues and generating annotations for them.
- `assign_category`: The prompt that assigns each issue to a category specified in an enum in the corresponding JSON schema.
- `get_overview`: The prompt that generates a high-level executive summary and analysis of all the parsed and generated data.
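For reference, the enum mentioned for `assign_category` is a standard JSON-schema construct. A hypothetical schema, with illustrative category names rather than the ones defined in the repo's configuration, might look like:

```python
# Hypothetical JSON schema for assign_category; category names are illustrative only.
ASSIGN_CATEGORY_SCHEMA = {
    "type": "object",
    "properties": {
        "category": {
            "type": "string",
            # The enum constrains the model's answer to one of a fixed set of categories.
            "enum": ["bug", "documentation", "feature_request", "question", "other"],
        },
        "reasoning": {"type": "string"},
    },
    "required": ["category"],
}
```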