🔥 Large Language Models (LLMs) have taken not just the NLP community but the whole world by storm. Here is a curated list of papers about large language models, especially those relating to ChatGPT. It also contains frameworks for LLM training, tools to deploy LLMs, courses and tutorials about LLMs, and all publicly available LLM checkpoints and APIs:
If you're interested in the field of LLMs, you may find the above list of milestone papers helpful for exploring its history and state of the art. However, each direction of LLM research offers its own insights and contributions, which are essential to understanding the field as a whole. For detailed lists of papers in the various subfields, please refer to the following links (note that subfields may overlap):
:exclamation: We would greatly appreciate and welcome your contribution to the following list.
There are three important steps for building a ChatGPT-like LLM: pre-training a base model on raw text, instruction tuning, and alignment (e.g., via RLHF). The three tables below group models accordingly.
We want to make an apples-to-apples comparison here:
Model | Size | Training Corpus | Architecture | Access | Date | Origin |
---|---|---|---|---|---|---|
Switch Transformer | 1.6T | multilingual | Decoder (MoE) | - | 2021-01 | Paper |
GLaM | 1.2T | English | Decoder (MoE) | - | 2021-12 | Paper |
PaLM | 540B | multilingual, code | Decoder | - | 2022-04 | Paper |
MT-NLG | 530B | English | Decoder | - | 2022-01 | Paper |
J1-Jumbo | 178B | English | Decoder | api | 2021-08 | Paper |
OPT | 175B | primarily English | Decoder | api, ckpt | 2022-05 | Paper |
BLOOM | 176B | multilingual, code | Decoder | api, ckpt | 2022-11 | Paper |
GPT 3.0 | 175B | primarily English | Decoder | api | 2020-05 | Paper |
LaMDA | 137B | dialogue | Decoder | - | 2022-01 | Paper |
GLM | 130B | English, Chinese | Decoder | ckpt | 2022-10 | Paper |
YaLM | 100B | English, Russian | Decoder | ckpt | 2022-06 | Blog |
LLaMA | 65B | mixed | Decoder | ckpt | 2023-02 | Paper |
GPT-NeoX | 20B | English | Decoder | ckpt | 2022-04 | Paper |
UL2 | 20B | English | agnostic | ckpt | 2022-05 | Paper |
鹏程.盘古α | 13B | Chinese | Decoder | ckpt | 2021-04 | Paper |
T5 | 11B | English | Encoder-Decoder | ckpt | 2019-10 | Paper |
CPM-Bee | 10B | English, Chinese | Decoder | api | 2022-10 | Paper |
RWKV-4 | 7B | English | RWKV | ckpt | 2022-09 | Github |
GPT-J | 6B | English | Decoder | ckpt | 2021-06 | Github |
GPT-Neo | 2.7B | English | Decoder | ckpt | 2021-03 | Github |
GPT-Neo | 1.3B | English | Decoder | ckpt | 2021-03 | Github |
Model | Size | Training Corpus | Architecture | Access | Date | Origin |
---|---|---|---|---|---|---|
Flan-PaLM | 540B | English | Decoder | - | 2022-10 | Paper |
BLOOMZ | 176B | multilingual, code | Decoder | ckpt | 2022-11 | Paper |
InstructGPT | 175B | English | Decoder | api | 2022-03 | Paper |
Galactica | 120B | English, code, LaTeX, DNA, etc. | Decoder | ckpt | 2022-11 | Paper |
Flan-UL2 | 20B | - | Decoder | ckpt | 2023-03 | Blog |
Gopher | - | - | - | - | - | - |
Chinchilla | - | - | - | - | - | - |
Flan-T5 | 11B | English | Encoder-Decoder | ckpt | 2022-10 | Paper |
T0 | 11B | English | Encoder-Decoder | ckpt | 2021-10 | Paper |
Model | Size | Training Corpus | Architecture | Access | Date | Origin |
---|---|---|---|---|---|---|
ChatGPT | - | - | Decoder | demo, api | 2022-11 | Blog |
Sparrow | 70B | - | - | - | 2022-09 | Paper |
Claude | - | - | - | - | - | - |
Serving OPT-175B, BLOOM-176B and CodeGen-16B using Alpa
Alpa is a system for training and serving large-scale neural networks. Scaling neural networks to hundreds of billions of parameters has enabled dramatic breakthroughs such as GPT-3, but training and serving these large-scale neural networks require complicated distributed system techniques. Alpa aims to automate large-scale distributed training and serving with just a few lines of code.
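Below is a minimal sketch of what "a few lines of code" looks like with Alpa's `@alpa.parallelize` decorator, assuming the `alpa` package on top of JAX; the model, data, and update rule are illustrative toys, not Alpa's official example.

```python
# Minimal sketch of Alpa-style automatic parallelization (assumes the `alpa`
# package on top of JAX; model/data/update rule are illustrative toys).
import alpa
import jax
import jax.numpy as jnp

# alpa.init(cluster="ray")  # required for multi-node clusters

@alpa.parallelize  # Alpa picks inter-/intra-operator parallelism automatically
def train_step(params, batch):
    def loss_fn(p):
        preds = batch["x"] @ p["w"]                 # toy linear model
        return jnp.mean((preds - batch["y"]) ** 2)  # mean-squared error
    grads = jax.grad(loss_fn)(params)
    # plain SGD update; a real setup would use an optimizer library such as optax
    return jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)

params = {"w": jnp.zeros((512, 1))}
batch = {"x": jnp.ones((32, 512)), "y": jnp.zeros((32, 1))}
params = train_step(params, batch)
```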
DeepSpeed is an easy-to-use deep learning optimization software suite that enables unprecedented scale and speed for DL Training and Inference. Visit us at deepspeed.ai or our Github repo.
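A minimal sketch of how a PyTorch model is typically wrapped with DeepSpeed, assuming the `deepspeed` and `torch` packages and a CUDA GPU; the config values are illustrative, not recommended settings.

```python
# Minimal sketch of wrapping a PyTorch model with DeepSpeed (illustrative config).
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)
ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "fp16": {"enabled": True},                   # mixed-precision training
    "zero_optimization": {"stage": 2},           # ZeRO stage-2 partitioning
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# Returns the wrapped engine, optimizer, dataloader, and LR scheduler.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

# Training step: the engine handles mixed precision, ZeRO, and gradient sync.
x = torch.randn(8, 1024).to(engine.device)
loss = engine(x).pow(2).mean()
engine.backward(loss)
engine.step()
```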
Megatron-LM can be found here. Megatron (1, 2, and 3) is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This repository is for ongoing research on training large transformer language models at scale. We developed efficient, model-parallel (tensor, sequence, and pipeline) and multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision.
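Megatron-LM's own entry points are launch scripts rather than a library API, so the sketch below only illustrates the core idea behind its tensor model parallelism (a column-split linear layer) in plain NumPy; it is not Megatron-LM code.

```python
# Conceptual illustration of tensor model parallelism (not Megatron-LM's API):
# a linear layer's weight matrix is split column-wise across "devices", each
# shard computes its slice, and the outputs are gathered along the last axis.
import numpy as np

x = np.random.randn(4, 8)            # batch of 4 tokens, hidden size 8
w = np.random.randn(8, 16)           # full weight: hidden 8 -> 16

w_shards = np.split(w, 2, axis=1)    # two column shards, one per device
partial = [x @ shard for shard in w_shards]   # each device computes its slice
y = np.concatenate(partial, axis=1)  # all-gather along the output dimension

assert np.allclose(y, x @ w)         # matches the unsharded computation
```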
Colossal-AI provides a collection of parallel components for you. We aim to let you write your distributed deep learning models just as you would write a model on your laptop. We provide user-friendly tools to kickstart distributed training and inference in a few lines.
BMTrain is an efficient large model training toolkit that can be used to train large models with tens of billions of parameters. It can train models in a distributed manner while keeping the code as simple as stand-alone training.
Mesh TensorFlow (mtf) is a language for distributed deep learning, capable of specifying a broad class of distributed tensor computations. The purpose of Mesh TensorFlow is to formalize and implement distribution strategies for your computation graph over your hardware/processors. For example: "Split the batch over rows of processors and split the units in the hidden layer across columns of processors." Mesh TensorFlow is implemented as a layer over TensorFlow.
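A short sketch of how that example strategy is declared in Mesh TensorFlow, assuming the `mesh_tensorflow` package; only the dimension and layout declarations are shown, and building/lowering the actual graph is omitted.

```python
# Sketch of declaring a Mesh TensorFlow distribution strategy (assumes the
# mesh_tensorflow package); graph construction and lowering are omitted.
import mesh_tensorflow as mtf

# Logical tensor dimensions of a toy hidden layer.
batch = mtf.Dimension("batch", 64)
hidden = mtf.Dimension("hidden", 2048)

# A 2x2 mesh of processors, laid out as rows x cols.
mesh_shape = mtf.convert_to_shape("rows:2;cols:2")

# "Split the batch over rows of processors and the hidden units over columns."
layout_rules = mtf.convert_to_layout_rules("batch:rows;hidden:cols")
```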
This tutorial discusses parallelism via jax.Array.
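Below is a minimal sketch of that idea with jax.Array sharding (JAX 0.4+), assuming a single host; the mesh axes, array sizes, and axis names are illustrative.

```python
# Minimal sketch of data/model parallelism via jax.Array sharding (JAX 0.4+).
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

devices = np.array(jax.devices()).reshape(1, -1)   # illustrative (data, model) grid
mesh = Mesh(devices, axis_names=("data", "model"))

x = jnp.ones((8, 1024))                            # activations: batch x hidden
w = jnp.ones((1024, 4096))                         # weights: hidden x ffn

# Shard the batch over the "data" axis and the weight columns over "model".
x = jax.device_put(x, NamedSharding(mesh, P("data", None)))
w = jax.device_put(w, NamedSharding(mesh, P(None, "model")))

y = jnp.dot(x, w)     # XLA inserts the required communication automatically
print(y.sharding)     # the result is sharded over both mesh axes
```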
🦜️🔗 LangChain
Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge. This library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include ❓ Question Answering over specific documents, 💬 Chatbots and 🤖 Agents.
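A minimal sketch of the "LLM + prompt template" pattern, assuming the `langchain` and `openai` packages and an `OPENAI_API_KEY` in the environment; the class names follow the 2023-era LangChain API and may differ in newer releases.

```python
# Minimal sketch of chaining an LLM with a prompt template in LangChain
# (2023-era API; assumes OPENAI_API_KEY is set in the environment).
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer concisely, citing the relevant paper if possible:\n{question}",
)
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
print(chain.run(question="What alignment method does InstructGPT use?"))
```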
Use ChatGPT on WeChat via wechaty
This is an active repository and your contributions are always welcome!
I will keep some pull requests open if I'm not sure whether they are a good fit for this list; you can vote for them by adding 👍 to them.
If you have any questions about this opinionated list, do not hesitate to contact me at chengxin1998@stu.pku.edu.cn.