
Awesome-LLM

🔥 Large Language Models (LLMs) have taken the NLP community and the whole world by storm. Here is a comprehensive list of papers about large language models, especially those relating to ChatGPT. It also contains code, courses, and related websites, as shown below:

Milestone Papers

| Year    | Keywords             | Institute  | Paper                                                                                        | Publication |
| ------- | -------------------- | ---------- | -------------------------------------------------------------------------------------------- | ----------- |
| 2017-06 | Transformers         | Google     | Attention Is All You Need                                                                     | NeurIPS     |
| 2018-06 | GPT 1.0              | OpenAI     | Improving Language Understanding by Generative Pre-Training                                  |             |
| 2018-10 | BERT                 | Google     | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding             | NAACL       |
| 2019-02 | GPT 2.0              | OpenAI     | Language Models are Unsupervised Multitask Learners                                          |             |
| 2019-09 | Megatron-LM          | NVIDIA     | Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism        |             |
| 2019-10 | T5                   | Google     | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer            | JMLR        |
| 2020-01 | Scaling Law          | OpenAI     | Scaling Laws for Neural Language Models                                                      |             |
| 2020-05 | GPT 3.0              | OpenAI     | Language models are few-shot learners                                                        | NeurIPS     |
| 2020-12 | LM-BFF               | Princeton  | Making Pre-trained Language Models Better Few-shot Learners                                  | ACL         |
| 2021-08 | Codex                | OpenAI     | Evaluating Large Language Models Trained on Code                                             |             |
| 2021-08 | Foundation Models    | Stanford   | On the Opportunities and Risks of Foundation Models                                          |             |
| 2021-09 | FLAN                 | Google     | Finetuned Language Models are Zero-Shot Learners                                             | ICLR        |
| 2021-12 | WebGPT               | OpenAI     | WebGPT: Improving the Factual Accuracy of Language Models through Web Browsing               |             |
| 2021-12 | Retro                | DeepMind   | Improving language models by retrieving from trillions of tokens                             | ICML        |
| 2022-01 | COT                  | Google     | Chain-of-Thought Prompting Elicits Reasoning in Large Language Models                        | NeurIPS     |
| 2022-01 | LaMDA                | Google     | LaMDA: Language Models for Dialog Applications                                               |             |
| 2022-03 | InstructGPT          | OpenAI     | Training language models to follow instructions with human feedback                          |             |
| 2022-04 | PaLM                 | Google     | PaLM: Scaling Language Modeling with Pathways                                                |             |
| 2022-04 | Chinchilla           | DeepMind   | An empirical analysis of compute-optimal large language model training                       | NeurIPS     |
| 2022-05 | OPT                  | Meta       | OPT: Open Pre-trained Transformer Language Models                                            |             |
| 2022-06 | Emergent Abilities   | Google     | Emergent Abilities of Large Language Models                                                  | TMLR        |
| 2022-06 | BIG-bench            | Google     | Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models |             |
| 2022-06 | METALM               | Microsoft  | Language Models are General-Purpose Interfaces                                               |             |
| 2022-09 | Sparrow              | DeepMind   | Improving alignment of dialogue agents via targeted human judgements                         |             |
| 2022-10 | Flan-T5              | Google     | Scaling Instruction-Finetuned Language Models                                                |             |
| 2022-10 | GLM-130B             | Tsinghua   | GLM-130B: An Open Bilingual Pre-trained Model                                                | ICLR        |
| 2022-11 | HELM                 | Stanford   | Holistic Evaluation of Language Models                                                       |             |
| 2022-11 | BLOOM                | BigScience | BLOOM: A 176B-Parameter Open-Access Multilingual Language Model                              |             |
| 2022-11 | Galactica            | Meta       | Galactica: A Large Language Model for Science                                                |             |
| 2023-01 | Flan 2022 Collection | Google     | The Flan Collection: Designing Data and Methods for Effective Instruction Tuning             |             |

ChatGPT Evaluation

  • Is ChatGPT a General-Purpose Natural Language Processing Task Solver? Link

  • Is ChatGPT A Good Translator? A Preliminary Study Link

Tools for Training LLM

Alpa is a system for training and serving large-scale neural networks. Scaling neural networks to hundreds of billions of parameters has enabled dramatic breakthroughs such as GPT-3, but training and serving these large-scale neural networks require complicated distributed system techniques. Alpa aims to automate large-scale distributed training and serving with just a few lines of code.
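
As a rough illustration of what "a few lines of code" looks like, here is a minimal sketch modeled on Alpa's documented JAX usage; the training state, loss, and batch layout are hypothetical placeholders rather than part of Alpa's API:

```python
# Minimal sketch: decorate an ordinary JAX training step with @alpa.parallelize
# and let Alpa plan the distributed execution. `state` (a Flax-style train state),
# the loss function, and the batch layout are illustrative placeholders.
import alpa
import jax
import jax.numpy as jnp


@alpa.parallelize
def train_step(state, batch):
    def loss_fn(params):
        preds = state.apply_fn(params, batch["inputs"])
        return jnp.mean((preds - batch["targets"]) ** 2)

    grads = jax.grad(loss_fn)(state.params)
    return state.apply_gradients(grads=grads)


# Called like a normal JAX function; Alpa decides how to shard the computation
# across the available devices:
# state = train_step(state, batch)
```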

DeepSpeed is an easy-to-use deep learning optimization software suite that enables unprecedented scale and speed for DL training and inference. Visit us at deepspeed.ai or our GitHub repo.
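
For a sense of the workflow, the sketch below wraps a stand-in PyTorch model with `deepspeed.initialize`; the tiny model, random data, and the `ds_config.json` file (which would hold the ZeRO stage, batch sizes, and precision settings) are assumptions for illustration, not a tuned recipe:

```python
# Sketch of a typical DeepSpeed training loop. The model and data are stand-ins;
# ds_config.json is assumed to define train_batch_size, ZeRO stage, fp16, etc.
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)  # placeholder for a real LLM

model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config="ds_config.json",
)

for step in range(10):
    inputs = torch.randn(8, 1024, device=model_engine.device)
    loss = model_engine(inputs).pow(2).mean()
    model_engine.backward(loss)  # DeepSpeed handles loss scaling / ZeRO partitioning
    model_engine.step()          # optimizer step (and LR schedule, if configured)
```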

Megatron-LM can be found here. Megatron (1, 2, and 3) is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This repository is for ongoing research on training large transformer language models at scale. It provides efficient, model-parallel (tensor, sequence, and pipeline), multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision.

Colossal-AI provides a collection of parallel components. It aims to let you write distributed deep learning models the same way you write models on your laptop, and it offers user-friendly tools to kickstart distributed training and inference in a few lines. You can visit it here.

Mesh TensorFlow (mtf) is a language for distributed deep learning, capable of specifying a broad class of distributed tensor computations. The purpose of Mesh TensorFlow is to formalize and implement distribution strategies for your computation graph over your hardware/processors. For example: "Split the batch over rows of processors and split the units in the hidden layer across columns of processors." Mesh TensorFlow is implemented as a layer over TensorFlow. You can visit it here.
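
The quoted example maps directly onto Mesh TensorFlow's named dimensions and layout rules. The fragment below sketches only that mapping (assuming the "name:size" string syntax from the Mesh TensorFlow README); it is not a complete training script:

```python
# Fragment: express "split the batch over rows of processors and the hidden
# units over columns" with Mesh TensorFlow named dimensions and layout rules.
import mesh_tensorflow as mtf

graph = mtf.Graph()
mesh = mtf.Mesh(graph, "my_mesh")

# Tensor dimensions are named rather than positional.
batch_dim = mtf.Dimension("batch", 512)
hidden_dim = mtf.Dimension("hidden", 4096)

# A 4x2 processor grid: "batch" is split over the rows, "hidden" over the columns.
mesh_shape = mtf.convert_to_shape("rows:4;cols:2")
layout_rules = mtf.convert_to_layout_rules("batch:rows;hidden:cols")
```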

This JAX tutorial discusses parallelism via jax.Array.
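
As a tiny self-contained illustration (assuming JAX 0.4+, where `jax.Array` and `jax.sharding` are available), placing an array on a device mesh is enough for subsequent `jax.numpy` operations to run in parallel and keep their outputs sharded:

```python
# Tiny jax.Array example: shard an array across the local devices and let
# computations follow the sharding automatically (JAX 0.4+ assumed).
import jax
import jax.numpy as jnp
from jax.sharding import PositionalSharding

devices = jax.devices()
# Assumes the device count divides the leading dimension (8 here).
sharding = PositionalSharding(devices).reshape(len(devices), 1)

x = jnp.arange(8 * 1024.0).reshape(8, 1024)
x = jax.device_put(x, sharding)  # rows of x are spread across the devices

y = jnp.tanh(x) @ x.T            # runs in parallel; the result is also sharded
print(y.sharding)
```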

Tutorials about LLM

  • [ICML 2022] Welcome to the "Big Model" Era: Techniques and Systems to Train and Serve Bigger Models Link

  • [NeurIPS 2022] Foundational Robustness of Foundation Models Link

  • [Andrej Karpathy] Let's build GPT: from scratch, in code, spelled out. Video|Code

  • [DAIR.AI] Prompt Engineering Guide Link

Courses about LLM

  • [Stanford] CS224N-Lecture 11: Prompting, Instruction Finetuning, and RLHF Slides

  • [Stanford] CS324-Large Language Models Homepage

  • [Stanford] CS25-Transformers United V2 Homepage

  • [李沐] InstructGPT paper reading Bilibili Youtube

  • [李沐] HELM: holistic language model evaluation Bilibili

  • [李沐] GPT, GPT-2, GPT-3 paper reading Bilibili Youtube

  • [Aston Zhang] Chain of Thought paper Bilibili Youtube

Useful Resources

  • [2023-02-16][知乎][旷视科技] A conversation with 张祥雨 of Megvii Research: ChatGPT's research value may be even greater Link
  • [2023-02-15][知乎][张家俊] Conjectures on eight technical questions about ChatGPT Link
  • [2023-02-14][Stephen Wolfram]What Is ChatGPT Doing … and Why Does It Work? Link
  • [2023-02-13][知乎][熊德意] Twenty observations on ChatGPT Link
  • [2023-02-12][Jingfeng Yang] Why did all of the public reproduction of GPT-3 fail? Link
  • [2023-02-11][知乎][刘聪NLP] ChatGPT: what I have seen, heard, and felt Link
  • [2023-02-07][Forbes] The Next Generation Of Large Language Models Link
  • [2023-01-26][NVIDIA] What Are Large Language Models Used For? Link
  • [2023-01-18][知乎][张俊林] The road to AGI: technical essentials of large language models (LLMs) Link
  • [2023-01-06][Shayne Longpre] Major LLMs + Data Availability Link
  • [2022-12-11][Yao Fu] How does GPT Obtain its Ability? Tracing Emergent Abilities of Language Models to their Sources Link
  • [2022-12-07][Hung-yi Lee] How ChatGPT is (probably) made: the process of socializing GPT Link
  • [2021-10-26][Huggingface] Large Language Models: A New Moore's Law Link

Publicly Available LLM APIs

Publicly Available LLM Checkpoints

BigScience/BLOOM

| Size  | Parameters | Link        |
| ----- | ---------- | ----------- |
| 560 M | 560 M      | Huggingface |
| 1.1 B | 1.1 B      | Huggingface |
| 1.7 B | 1.7 B      | Huggingface |
| 3 B   | 3 B        | Huggingface |
| 7.1 B | 7.1 B      | Huggingface |
| 176 B | 176 B      | Huggingface |

BigScience/T0

| Size | Parameters | Link        |
| ---- | ---------- | ----------- |
| 3 B  | 3 B        | Huggingface |
| 11 B | 11 B       | Huggingface |

BlinkDL/RWKV

| Size  | Parameters | Link        |
| ----- | ---------- | ----------- |
| 169 M | 169 M      | Huggingface |
| 430 M | 430 M      | Huggingface |
| 1.5 B | 1.5 B      | Huggingface |
| 3 B   | 3 B        | Huggingface |
| 7 B   | 7 B        | Huggingface |

Google/Flan-T5

| Size  | Parameters | Link                  |
| ----- | ---------- | --------------------- |
| small | 80 M       | Huggingface, Original |
| base  | 250 M      | Huggingface, Original |
| large | 780 M      | Huggingface, Original |
| xl    | 3 B        | Huggingface, Original |
| xxl   | 11 B       | Huggingface, Original |

Meta/OPT

| Size  | Parameters | Link        |
| ----- | ---------- | ----------- |
| 125 M | 125 M      | Huggingface |
| 350 M | 350 M      | Huggingface |
| 1.3 B | 1.3 B      | Huggingface |
| 2.7 B | 2.7 B      | Huggingface |
| 6.7 B | 6.7 B      | Huggingface |
| 13 B  | 13 B       | Huggingface |
| 30 B  | 30 B       | Huggingface |
| 66 B  | 66 B       | Huggingface |

Meta/Galactica

| Size     | Parameters | Link        |
| -------- | ---------- | ----------- |
| mini     | 125 M      | Huggingface |
| base     | 1.3 B      | Huggingface |
| standard | 6.7 B      | Huggingface |
| large    | 30 B       | Huggingface |
| huge     | 120 B      | Huggingface |

EleutherAI/GPT-NeoX

| Size | Parameters | Link                  |
| ---- | ---------- | --------------------- |
| 20 B | 20 B       | Huggingface, Original |

Tsinghua/GLM

| Size              | Parameters | Link     |
| ----------------- | ---------- | -------- |
| GLM-Base          | 110 M      | Original |
| GLM-Large         | 335 M      | Original |
| GLM-Large-Chinese | 335 M      | Original |
| GLM-Doc           | 335 M      | Original |
| GLM-410M          | 410 M      | Original |
| GLM-515M          | 515 M      | Original |
| GLM-RoBERTa       | 335 M      | Original |
| GLM-2B            | 2 B        | Original |
| GLM-10B           | 10 B       | Original |
| GLM-10B-Chinese   | 10 B       | Original |
| GLM-130B          | 130 B      | Original |

Contributing

This is an active repository and your contributions are always welcome!

I will keep some pull requests open if I'm not sure whether they are awesome for LLM; you can vote for them by adding 👍 to them.


If you have any questions about this opinionated list, do not hesitate to contact me at chengxin1998@stu.pku.edu.cn.