🔥 Large Language Models (LLMs) have taken the whole world by storm. Here is a comprehensive list of papers about large language models, especially those relating to ChatGPT. It also contains code, courses, and related websites, as shown below:
Is ChatGPT a General-Purpose Natural Language Processing Task Solver? Link
Is ChatGPT A Good Translator? A Preliminary Study Link
Alpa is a system for training and serving large-scale neural networks. Scaling neural networks to hundreds of billions of parameters has enabled dramatic breakthroughs such as GPT-3, but training and serving these large-scale neural networks require complicated distributed system techniques. Alpa aims to automate large-scale distributed training and serving with just a few lines of code.
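As a taste of the API, here is a minimal sketch in the spirit of Alpa's documented JAX usage; the toy linear model, loss, and SGD update are illustrative assumptions, not part of Alpa itself:

```python
# A minimal sketch in the spirit of Alpa's documented JAX usage; the toy
# model, loss, and SGD update below are illustrative assumptions.
import alpa
import jax
import jax.numpy as jnp

@alpa.parallelize  # Alpa chooses inter- and intra-operator parallelism automatically
def train_step(params, batch):
    def loss_fn(p):
        pred = batch["x"] @ p["w"]  # toy linear model
        return jnp.mean((pred - batch["y"]) ** 2)
    grads = jax.grad(loss_fn)(params)
    # Plain SGD update; Alpa shards params, grads, and compute across devices.
    return jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
```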
DeepSpeed is an easy-to-use deep learning optimization software suite that enables unprecedented scale and speed for DL training and inference. Visit deepspeed.ai or the GitHub repo.
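A minimal sketch of the standard DeepSpeed training loop follows; the stand-in model, config values, and in-memory data are illustrative assumptions:

```python
# Minimal DeepSpeed training-loop sketch; the stand-in model, config
# values, and in-memory data are illustrative assumptions.
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)  # stand-in for a real network
ds_config = {
    "train_batch_size": 8,
    "zero_optimization": {"stage": 2},  # ZeRO stage-2 optimizer partitioning
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize wraps the model in an engine that handles distributed
# setup, optimizer sharding, and (optionally) mixed precision.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

data = [(torch.randn(8, 1024), torch.randn(8, 1024)) for _ in range(10)]
for x, y in data:
    x, y = x.to(model_engine.device), y.to(model_engine.device)
    loss = torch.nn.functional.mse_loss(model_engine(x), y)
    model_engine.backward(loss)  # engine-managed backward pass
    model_engine.step()          # optimizer step + ZeRO bookkeeping
```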
Megatron-LM can be found here. Megatron (1, 2, and 3) is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This repository is for ongoing research on training large transformer language models at scale. We developed efficient, model-parallel (tensor, sequence, and pipeline), and multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision.
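Megatron itself is driven by its pre-training launch scripts rather than a library API, but the core idea behind its tensor model parallelism can be sketched in a few lines of PyTorch: split a weight matrix column-wise across ranks and gather the partial results. The class below is a simplified conceptual illustration, not Megatron's actual code:

```python
# Conceptual sketch of tensor (column) model parallelism; NOT Megatron's
# actual implementation. Assumes torch.distributed is already initialized.
import torch
import torch.nn as nn
import torch.distributed as dist

class ColumnParallelLinear(nn.Module):
    """Each rank stores and computes only a slice of the output features."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        world_size = dist.get_world_size()
        assert out_features % world_size == 0
        # This rank's shard: out_features / world_size output columns.
        self.weight = nn.Parameter(
            torch.randn(out_features // world_size, in_features) * 0.02
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y_local = x @ self.weight.t()  # partial output on this rank
        # Gather every rank's slice and concatenate to form the full output.
        shards = [torch.empty_like(y_local) for _ in range(dist.get_world_size())]
        dist.all_gather(shards, y_local)
        return torch.cat(shards, dim=-1)
```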
Colossal-AI provides a collection of parallel components. It aims to let you write distributed deep learning models just as you would write a model on your laptop, and offers user-friendly tools to kick-start distributed training and inference in a few lines. You can visit it here.
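For flavor, here is a sketch following the engine-style API from Colossal-AI's earlier documentation; entry points have changed across versions (newer releases use a Booster API), so treat the calls and the stand-in model below as assumptions to check against the current docs:

```python
# Sketch of Colossal-AI's engine-style training loop (older API; newer
# releases use a Booster API). The model, data, and config are stand-ins.
import colossalai
import torch

colossalai.launch_from_torch(config={})  # reads rank/world size from torchrun env

model = torch.nn.Linear(1024, 1024)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.MSELoss()
train_data = [(torch.randn(8, 1024), torch.randn(8, 1024)) for _ in range(10)]

# colossalai.initialize wraps everything into an engine that applies the
# parallelism and optimization settings from the config.
engine, train_loader, _, _ = colossalai.initialize(
    model, optimizer, criterion, train_data
)

engine.train()
for x, y in train_loader:
    engine.zero_grad()
    loss = engine.criterion(engine(x), y)
    engine.backward(loss)
    engine.step()
```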
Mesh TensorFlow (mtf) is a language for distributed deep learning, capable of specifying a broad class of distributed tensor computations. The purpose of Mesh TensorFlow is to formalize and implement distribution strategies for your computation graph over your hardware/processors. For example: "Split the batch over rows of processors and split the units in the hidden layer across columns of processors." Mesh TensorFlow is implemented as a layer over TensorFlow. You can visit it here.
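The quoted strategy maps directly onto mtf's named dimensions and layout rules. Below is a condensed sketch based on the Mesh TensorFlow README; the sizes, device list, and zero-filled input are illustrative placeholders:

```python
# Condensed sketch based on the Mesh TensorFlow README; sizes, devices,
# and the zero-filled input are illustrative placeholders.
import mesh_tensorflow as mtf
import tensorflow.compat.v1 as tf

graph = mtf.Graph()
mesh = mtf.Mesh(graph, "my_mesh")

# Named logical dimensions instead of anonymous tensor axes.
batch_dim = mtf.Dimension("batch", 100)
io_dim = mtf.Dimension("io", 784)
hidden_dim = mtf.Dimension("hidden", 1024)

x = mtf.import_tf_tensor(mesh, tf.zeros([100, 784]), shape=[batch_dim, io_dim])
w = mtf.get_variable(mesh, "w", [io_dim, hidden_dim])
hidden = mtf.relu(mtf.einsum([x, w], output_shape=[batch_dim, hidden_dim]))

# "Split the batch over rows of processors and the hidden units over columns":
mesh_shape = [("rows", 2), ("cols", 2)]
layout_rules = [("batch", "rows"), ("hidden", "cols")]
devices = ["gpu:0", "gpu:1", "gpu:2", "gpu:3"]
mesh_impl = mtf.placement_mesh_impl.PlacementMeshImpl(
    mesh_shape, layout_rules, devices)

# Lower the mtf graph to ordinary TensorFlow ops on the chosen devices.
lowering = mtf.Lowering(graph, {mesh: mesh_impl})
tf_hidden = lowering.export_to_tf_tensor(hidden)
```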
[ICML 2022] Welcome to the "Big Model" Era: Techniques and Systems to Train and Serve Bigger Models Link
[NeurIPS 2022] Foundational Robustness of Foundation Models Link
[Andrej Karpathy] Let's build GPT: from scratch, in code, spelled out. Video|Code
[DAIR.AI] Prompt Engineering Guide Link
[Stanford] CS224N-Lecture 11: Prompting, Instruction Finetuning, and RLHF Slides
[Stanford] CS324-Large Language Models Homepage
[Stanford] CS25-Transformers United V2 Homepage
[Mu Li (李沐)] HELM: Holistic Evaluation of Language Models Bilibili
Google
Meta
BigScience
This is an active repository and your contributions are always welcome!
I will keep some pull requests open if I'm not sure whether they belong in this list; you can vote for them by adding 👍 to them.
If you have any questions about this opinionated list, do not hesitate to contact me at chengxin1998@stu.pku.edu.cn.