🔥 Large Language Models (LLMs) have taken ~~the NLP community~~ ~~the AI community~~ the whole world by storm. Here is a curated list of papers about large language models, especially those relating to ChatGPT. It also contains frameworks for LLM training, tools for deploying LLMs, courses and tutorials about LLMs, and all publicly available LLM checkpoints and APIs.
> [!NOTE]
> If you're interested in the field of LLMs, you may find the above list of milestone papers helpful for exploring its history and the state of the art. However, each direction of LLM research offers a unique set of insights and contributions, which are essential to understanding the field as a whole. For a detailed list of papers in various subfields, please refer to the following link:
other papers
Natural-Instruction (ACL 2022), FLAN (ICLR 2022) and T0 (ICLR 2022).
Awesome-Chinese-LLM - A curated list of open-source Chinese LLMs, focused on smaller models that can be privately deployed at relatively low training cost, covering base models, domain-specific fine-tuning and applications, datasets, tutorials, and more.
LLM4Opt - Applying large language models (LLMs) to diverse optimization tasks (Opt) is an emerging research area. This is a collection of references and papers on LLM4Opt.
awesome-language-model-analysis - This paper list focuses on the theoretical or empirical analysis of language models, e.g., the learning dynamics, expressive capacity, interpretability, generalization, and other interesting topics.
AlpacaEval - An Automatic Evaluator for Instruction-following Language Models using Nous benchmark suite.
other leaderboards
ACLUE - an evaluation benchmark focused on ancient Chinese language comprehension.
BeHonest - A pioneering benchmark specifically designed to assess honesty in LLMs comprehensively.
Berkeley Function-Calling Leaderboard - evaluates LLM's ability to call external functions/tools.
Chinese Large Model Leaderboard - an expert-driven benchmark for Chinese LLMs.
CompassRank - dedicated to exploring the most advanced language and visual models, offering a comprehensive, objective, and neutral evaluation reference for industry and research.
CompMix - a benchmark evaluating QA methods that operate over a mixture of heterogeneous input sources (KB, text, tables, infoboxes).
DreamBench++ - a benchmark for evaluating the performance of large language models (LLMs) in various tasks related to both textual and visual imagination.
FELM - a meta-benchmark that evaluates how well factuality evaluators assess the outputs of large language models (LLMs).
InfiBench - a benchmark designed to evaluate large language models (LLMs) specifically in their ability to answer real-world coding-related questions.
LawBench - a benchmark designed to evaluate large language models in the legal domain.
LLMEval - focuses on understanding how LLMs perform in various scenarios and on analyzing results from an interpretability perspective.
M3CoT - a benchmark that evaluates large language models on a variety of multimodal reasoning tasks, including language, natural and social sciences, physical and social commonsense, temporal reasoning, algebra, and geometry.
MathEval - a comprehensive benchmarking platform designed to evaluate large models' mathematical abilities across 20 fields and nearly 30,000 math problems.
MixEval - a ground-truth-based dynamic benchmark derived from off-the-shelf benchmark mixtures, which evaluates LLMs with a highly capable model ranking (i.e., 0.96 correlation with Chatbot Arena) while running locally and quickly (6% the time and cost of running MMLU).
MMedBench - a benchmark that evaluates large language models' ability to answer medical questions across multiple languages.
MMToM-QA - a multimodal question-answering benchmark designed to evaluate AI models' cognitive ability to understand human beliefs and goals.
OlympicArena - a benchmark for evaluating AI models across multiple academic disciplines like math, physics, chemistry, biology, and more.
PubMedQA - a biomedical question-answering benchmark designed for answering research-related questions using PubMed abstracts.
SciBench - a benchmark designed to evaluate large language models (LLMs) on solving complex, college-level scientific problems from domains like chemistry, physics, and mathematics.
SuperBench - a benchmark platform designed for evaluating large language models (LLMs) on a range of tasks, particularly focusing on their performance in different aspects such as natural language understanding, reasoning, and generalization.
SuperLim - a Swedish language understanding benchmark that evaluates natural language processing (NLP) models on various tasks such as argumentation analysis, semantic similarity, and textual entailment.
TAT-DQA - a large-scale Document Visual Question Answering (VQA) dataset designed for complex document understanding, particularly in financial reports.
TAT-QA - a large-scale question-answering benchmark focused on real-world financial data, integrating both tabular and textual information.
VisualWebArena - a benchmark designed to assess the performance of multimodal web agents on realistic visually grounded tasks.
We-Math - a benchmark that evaluates large multimodal models (LMMs) on their ability to perform human-like mathematical reasoning.
WHOOPS! - a benchmark dataset testing AI's ability to reason about visual commonsense through images that defy normal expectations.
DeepSeek
Meta
Mistral AI
01-ai
Baichuan
Nvidia
BLOOM
EleutherAI
Stability AI
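Most of the open checkpoints from the providers listed above are distributed through the Hugging Face Hub and can be loaded with the `transformers` library. Below is a minimal sketch under stated assumptions: `transformers` and `torch` are installed, and the small, openly licensed EleutherAI Pythia-160M checkpoint is used purely as an illustration (swap in any other open model; some repositories require accepting a license or authenticating first).

```python
# Minimal sketch: load an open checkpoint with Hugging Face transformers.
# Assumptions: `pip install transformers torch`; "EleutherAI/pythia-160m" is only
# an illustrative, small, openly licensed model id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/pythia-160m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```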
Reference: LLMDataHub
- IBM data-prep-kit - Open-Source Toolkit for Efficient Unstructured Data Processing with Pre-built Modules and Local to Cluster Scalability.
- Datatrove - Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks.
other evaluation frameworks
other frameworks
Reference: llm-inference-solutions
- SGLang - SGLang is a fast serving framework for large language models and vision language models.
- vLLM - A high-throughput and memory-efficient inference and serving engine for LLMs (see the usage sketch after this list).
- llama.cpp - LLM inference in C/C++.
- ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
- TGI - a toolkit for deploying and serving Large Language Models (LLMs).
- TensorRT-LLM - Nvidia framework for LLM inference.
other deployment tools
- FasterTransformer - NVIDIA framework for LLM inference (transitioned to TensorRT-LLM).
- MInference - Speeds up long-context LLM inference with approximate and dynamic sparse attention, reducing pre-filling latency by up to 10x on an A100 while maintaining accuracy.
- exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
- FastChat - A distributed multi-model LLM serving system with web UI and OpenAI-compatible RESTful APIs.
- mistral.rs - Blazingly fast LLM inference.
- SkyPilot - Run LLMs and batch jobs on any cloud. Get maximum cost savings, highest GPU availability, and managed execution -- all with a simple interface.
- Haystack - an open-source NLP framework that allows you to use LLMs and transformer-based models from Hugging Face, OpenAI and Cohere to interact with your own data.
- OpenLLM - Fine-tune, serve, deploy, and monitor any open-source LLMs in production. Used in production at BentoML for LLMs-based applications.
- DeepSpeed-Mii - MII provides low-latency, high-throughput inference similar to vLLM, powered by DeepSpeed.
- Text-Embeddings-Inference - Inference for text embeddings in Rust, under the HFOIL license.
- Infinity - Inference for text embeddings in Python.
- LMDeploy - A high-throughput and low-latency inference and serving framework for LLMs and VLMs.
- Liger-Kernel - Efficient Triton Kernels for LLM Training.
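To make the serving entries above more concrete, here is a minimal offline-inference sketch using vLLM, as referenced in the list. It is only a sketch under stated assumptions: `vllm` is installed on a machine with a supported GPU, and the `facebook/opt-125m` model id and the prompt are placeholders.

```python
# Minimal offline batch inference with vLLM.
# Assumptions: `pip install vllm`, a supported GPU, and network access to download
# the illustrative facebook/opt-125m checkpoint on first use.
from vllm import LLM, SamplingParams

prompts = ["Explain the difference between online serving and offline batch inference."]
sampling_params = SamplingParams(temperature=0.7, max_tokens=128)

llm = LLM(model="facebook/opt-125m")              # load an example checkpoint
outputs = llm.generate(prompts, sampling_params)  # run batched generation

for output in outputs:
    print(output.outputs[0].text)
```

Several of the servers above (for example vLLM, FastChat, and TGI) also expose OpenAI-compatible RESTful endpoints, so existing OpenAI client code can often be pointed at a self-hosted model by changing only the base URL.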
Reference: awesome-llm-apps
- dspy - DSPy: The framework for programming—not prompting—foundation models.
- LangChain — A popular Python/JavaScript library for chaining sequences of language model prompts.
- LlamaIndex — A Python library for augmenting LLM apps with data (a brief retrieval sketch appears after this list).
more applications
OpenAI Evals — An open-source library for evaluating task performance of language models and prompts.
Arthur Shield — A paid product for detecting toxicity, hallucination, prompt injection, etc.
LMQL — A programming language for LLM interaction with support for typed prompting, control flow, constraints, and tools.
ModelFusion - A TypeScript library for building apps with LLMs and other ML models (speech-to-text, text-to-speech, image generation).
OneKE — A bilingual Chinese-English knowledge extraction model built with knowledge graph and natural language processing technologies.
llm-ui - A React library for building LLM UIs.
Wordware - A web-hosted IDE where non-technical domain experts work with AI engineers to build task-specific AI agents. It approaches prompting as a new programming language rather than as low/no-code blocks.
Wallaroo.AI - Deploy, manage, and optimize any model at scale across any environment, from cloud to edge. Lets you go from Python notebook to inference in minutes.
Dify - An open-source LLM app development platform with an intuitive interface that streamlines AI workflows, model management, and production deployment.
LazyLLM - An open-source tool for building multi-agent LLM applications in an easy and lazy way; supports model deployment and fine-tuning.
MemFree - An open-source hybrid AI search engine that instantly gets accurate answers from the internet, bookmarks, notes, and docs; supports one-click deployment.
AutoRAG - An open-source AutoML tool for RAG that automatically optimizes answer quality, from generating evaluation datasets to deploying the optimized RAG pipeline.
Epsilla - An all-in-one LLM agent platform that works with your private data and knowledge, delivering production-ready AI agents from day one.
Arize-Phoenix - An open-source tool for ML observability that runs in your notebook environment. Monitor and fine-tune LLM, CV, and tabular models.
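As a concrete example of the data-augmentation libraries above, here is a minimal retrieval-augmented generation sketch with LlamaIndex. It is only a sketch under stated assumptions: a recent `llama-index` release (with the `llama_index.core` package layout), an `OPENAI_API_KEY` in the environment for the default embedding and LLM backends, and a local `data/` folder containing your documents.

```python
# Minimal retrieval-augmented generation (RAG) sketch with LlamaIndex.
# Assumptions: a recent `llama-index` release, OPENAI_API_KEY set for the default
# embedding/LLM backends, and a local "data/" directory with your own files.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()   # read local files
index = VectorStoreIndex.from_documents(documents)      # embed and index them
query_engine = index.as_query_engine()                  # retrieval + answer synthesis

response = query_engine.query("Summarize the key points of these documents.")
print(response)
```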
This is an active repository and your contributions are always welcome!
I will keep some pull requests open if I'm not sure whether they are awesome additions; you can vote for them by adding 👍 to them.
If you have any questions about this opinionated list, do not hesitate to contact me at chengxin1998@stu.pku.edu.cn.
[^1]: This is not legal advice. Please contact the original authors of the models for more information.