@@ -4,25 +4,25 @@
### 2021
-- **Cross-task generalization via natural language crowdsourcing instructions.** (2021-04) Swaroop Mishra et al. [paper](https://arxiv.org/abs/2104.08773)
-- **Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections** (2021-04) Ruiqi Zhong et al. [paper](https://aclanthology.org/2021.findings-emnlp.244/)
-- **Crossfit: A few-shot learning challenge for cross-task general- ization in NLP** (2021-04) QinYuan Ye et al. [paper](https://arxiv.org/abs/2104.08835)
+- (2021-04) **Cross-task generalization via natural language crowdsourcing instructions** [paper](https://arxiv.org/abs/2104.08773)
+- (2021-04) **Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections** [paper](https://aclanthology.org/2021.findings-emnlp.244/)
+- (2021-04) **CrossFit: A few-shot learning challenge for cross-task generalization in NLP** [paper](https://arxiv.org/abs/2104.08835)
-- **Finetuned language models are zero-shot learners** (2021-09) Jason Wei et al. [paper](https://openreview.net/forum?id=gEZrGCozdqR)
+- (2021-09) **Finetuned language models are zero-shot learners** [paper](https://openreview.net/forum?id=gEZrGCozdqR)
> FLAN
-- **Multitask prompted training enables zero-shot task generalization** (2021-10) Victor Sanh et al. [paper](https://openreview.net/forum?id=9Vrb9D0WI4)
+- (2021-10) **Multitask prompted training enables zero-shot task generalization** [paper](https://openreview.net/forum?id=9Vrb9D0WI4)
-- **MetaICL: Learning to learn in context** (2021-10) Sewon Min et al. [paper](https://arxiv.org/abs/2110.15943#:~:text=We%20introduce%20MetaICL%20%28Meta-training%20for%20In-Context%20Learning%29%2C%20a,learning%20on%20a%20large%20set%20of%20training%20tasks.)
+- (2021-10) **MetaICL: Learning to learn in context** [paper](https://arxiv.org/abs/2110.15943)
### 2022
-- **Training language models to follow instructions with human feedback.** (2022-03) Long Ouyang et al. [paper](https://arxiv.org/abs/2203.02155)
+- (2022-03) **Training language models to follow instructions with human feedback** [paper](https://arxiv.org/abs/2203.02155)
-- **Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks** (2022-04) Yizhong Wang et al. [paper](https://arxiv.org/abs/2204.07705)
+- (2022-04) **Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks** [paper](https://arxiv.org/abs/2204.07705)
-- **Scaling Instruction-Finetuned Language Models** (20220-10) Hyung Won Chung et al. [paper](https://arxiv.org/pdf/2210.11416.pdf)
+- (2022-10) **Scaling Instruction-Finetuned Language Models** [paper](https://arxiv.org/pdf/2210.11416.pdf)
> Flan-T5 / Flan-PaLM