
add subfields papers

mac committed 2 years ago · commit d3a35d938a

+ 10 - 0
README.md

@@ -20,6 +20,16 @@
 
 ## Milestone Papers
 
+If you're interested in the field of LLMs, you may find the following list of milestone papers helpful for exploring the field's history and state of the art. Each research direction within LLMs offers its own insights and contributions, which are essential to understanding the field as a whole. For detailed paper lists in the various subfields, see the links below (note that the subfields may overlap):
+
+- [Chain-of-Thought](paper_list/chain_of_thougt.md)
+- [In-Context-Learning](paper_list/in_context_learning.md)
+- [RLHF](paper_list/RLHF.md)
+- [Prompt-Tuning](paper_list/prompt_tuning.md)
+- [MOE](paper_list/moe.md)
+- [Code-Pretraining](paper_list/code_pretraining.md)
+- [Protein-Pretraining](paper_list/protein_pretraining.md)
+
 |  Date  |       keywords       |    Institute    | Paper                                                                                                                                                                               | Publication |
 | :-----: | :------------------: | :--------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------: |
 | 2017-06 |     Transformers     |      Google      | [Attention Is All You Need](https://arxiv.org/pdf/1706.03762.pdf)                                                                                                                      |   NeurIPS   |

+ 0 - 0
paper_list/RLHF.md


+ 14 - 0
paper_list/chain_of_thougt.md

@@ -0,0 +1,14 @@
+# Chain-of-Thought
+
+> Chain of thought—a series of intermediate reasoning steps—significantly improves the ability of large language models to perform complex reasoning.
+
+## Papers
+
+- **Chain of Thought Prompting Elicits Reasoning in Large Language Models.** (2022-01), Jason Wei et al. [[pdf]](https://arxiv.org/abs/2201.11903)
+
+  > The first paper to propose the idea of chain-of-thought prompting.
+
+## Useful Resources
+
+- [Chain-of-Thoughts Papers](https://github.com/Timothyxxx/Chain-of-ThoughtsPapers) - A collection of papers on the trend started by "Chain of Thought Prompting Elicits Reasoning in Large Language Models".
+
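The chain-of-thought idea described above can be sketched as a few-shot prompt in which each exemplar includes intermediate reasoning steps before the answer. The helper name `build_cot_prompt` and the single exemplar below are illustrative assumptions, not part of any paper or this repository; real CoT prompts typically use several hand-written exemplars:

```python
# Minimal sketch of assembling a few-shot chain-of-thought prompt.
# The exemplar is hypothetical (modeled on the style of arithmetic
# word-problem exemplars); real prompts use several such triples.

COT_EXEMPLARS = [
    {
        "question": "Roger has 5 tennis balls. He buys 2 cans of 3 balls "
                    "each. How many tennis balls does he have now?",
        "rationale": "Roger started with 5 balls. 2 cans of 3 balls each "
                     "is 6 balls. 5 + 6 = 11.",
        "answer": "11",
    },
]

def build_cot_prompt(question: str) -> str:
    """Prepend worked exemplars (question + step-by-step rationale) so the
    model is elicited to reason step by step before giving its answer."""
    parts = []
    for ex in COT_EXEMPLARS:
        parts.append(f"Q: {ex['question']}\nA: {ex['rationale']} "
                     f"The answer is {ex['answer']}.")
    # The model is expected to continue after "A:" with its own reasoning.
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

print(build_cot_prompt(
    "If a train travels 60 km in 1.5 hours, what is its average speed?"))
```

The key design choice is that the exemplar answers contain the reasoning chain itself, not just the final answer, which is what distinguishes CoT prompting from standard few-shot prompting.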

+ 0 - 0
paper_list/code_pretraining.md


+ 0 - 0
paper_list/in_context_learning.md


+ 0 - 0
paper_list/moe.md


+ 0 - 0
paper_list/prompt_tuning.md


+ 0 - 0
paper_list/protein_pretraining.md