This project leverages the Llama 4 Maverick model to retrieve the references of an arXiv paper and ingest all of their content for question answering, without using any RAG pipeline to store this information.
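The reference-retrieval step can be sketched as follows. This is a minimal illustration, not the repository's actual implementation: it assumes references are pulled out of the paper's bibliography text by matching arXiv identifiers with a regular expression, and the `extract_arxiv_ids` helper is hypothetical.

```python
import re

# Hypothetical helper: extract arXiv IDs cited in a paper's reference
# section, so each referenced paper's full text can then be fetched
# and ingested into the prompt.
ARXIV_ID_RE = re.compile(r"arXiv:(\d{4}\.\d{4,5})(v\d+)?", re.IGNORECASE)

def extract_arxiv_ids(reference_text: str) -> list[str]:
    """Return the unique arXiv IDs mentioned in a block of reference text."""
    seen, ids = set(), []
    for match in ARXIV_ID_RE.finditer(reference_text):
        paper_id = match.group(1)  # ID without any version suffix
        if paper_id not in seen:
            seen.add(paper_id)
            ids.append(paper_id)
    return ids

refs = """
[1] A. Author et al. Some paper. arXiv:2305.11135, 2023.
[2] B. Author. Another paper. arXiv:1706.03762v5, 2017.
"""
print(extract_arxiv_ids(refs))  # ['2305.11135', '1706.03762']
```

In practice a parser would also handle references that cite DOIs or plain titles, but arXiv-ID matching covers the common case for arXiv papers.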
Model | Meta Llama 4 Maverick | Meta Llama 4 Scout | OpenAI GPT-4.5 | Claude Sonnet 3.7
---|---|---|---|---
Context Window | 1M tokens | 10M tokens | 128K tokens | 200K tokens
Thanks to this long context window, the analyzer can process the full text of all referenced papers at once, so you can ask questions about the paper without worrying about context length.
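A quick sanity check of the claim above: a rough token-budget estimate shows that several full papers fit comfortably in a 1M-token window. The ~4 characters/token heuristic and the `fits_in_context` helper are assumptions for illustration, not part of this repository.

```python
# Rough token-budget check before sending all paper text in one prompt.
CONTEXT_WINDOW = 1_000_000  # Llama 4 Maverick, per the table above

def estimated_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token of English text.
    return len(text) // 4

def fits_in_context(papers: list[str], reserve: int = 8_000) -> bool:
    """Check whether all paper texts plus a reply reserve fit in one prompt."""
    total = sum(estimated_tokens(p) for p in papers)
    return total + reserve <= CONTEXT_WINDOW

# Two papers totalling ~750k characters, i.e. ~187k estimated tokens.
papers = ["word " * 50_000, "word " * 100_000]
print(fits_in_context(papers))  # True: well under the 1M-token window
```

A typical arXiv paper is on the order of 50k–100k characters, so even dozens of references stay within budget; the same check against GPT-4.5's 128K window would fail for this example.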
```
pip install -r requirements.txt
python research_analyzer.py
```
Open the Gradio interface on localhost in your browser.
Provide a paper URL such as https://arxiv.org/abs/2305.11135
Press "Ingest", wait for the paper to be processed, then ask questions about it.