
Update README.md

Elvis Saravia 2 years ago
parent
commit
86a6b57fb7
1 changed file with 3 additions and 3 deletions

+3 -3
README.md

@@ -42,9 +42,9 @@ At DAIR.AI we ❤️ reading ML papers so we've created this repo to highlight t
 | **Paper**  | **Links** |
 | ------------- | ------------- |
 | 1) **A Survey on Evaluation of LLMs** - a comprehensive overview of evaluation methods for LLMs focusing on what to evaluate, where to evaluate, and how to evaluate. | [Paper](https://arxiv.org/abs/2307.03109), [Tweet](https://twitter.com/omarsar0/status/1677137934946803712?s=20) |
-| 2) **How Language Models Use Long Contexts ** - finds that LM performance is often highest when relevant information occurs at the beginning or end of the input context; performance degrades when relevant information is provided in the middle of a long context.  | [Paper](https://arxiv.org/abs/2307.03172), [Tweet](https://twitter.com/nelsonfliu/status/1677373731948339202?s=20) |
-| 3) **LLMs as Effective Text Rankers ** - proposes a prompting technique that enables open-source LLMs to perform state-of-the-art text ranking on standard benchmarks. | [Paper](https://arxiv.org/abs/2306.17563), [Tweet](https://twitter.com/arankomatsuzaki/status/1675673784454447107?s=20) |
-| 4) **Multimodal Generation with Frozen LLMs ** - introduces an approach that effectively maps images to the token space of LLMs; enables models like PaLM and GPT-4 to tackle visual tasks without parameter updates; enables multimodal tasks and uses in-context learning to tackle various visual tasks. | [Paper](https://arxiv.org/abs/2306.17842), [Tweet](https://twitter.com/roadjiang/status/1676375112914989056?s=20) |
+| 2) **How Language Models Use Long Contexts** - finds that LM performance is often highest when relevant information occurs at the beginning or end of the input context; performance degrades when relevant information is provided in the middle of a long context.  | [Paper](https://arxiv.org/abs/2307.03172), [Tweet](https://twitter.com/nelsonfliu/status/1677373731948339202?s=20) |
+| 3) **LLMs as Effective Text Rankers** - proposes a prompting technique that enables open-source LLMs to perform state-of-the-art text ranking on standard benchmarks. | [Paper](https://arxiv.org/abs/2306.17563), [Tweet](https://twitter.com/arankomatsuzaki/status/1675673784454447107?s=20) |
+| 4) **Multimodal Generation with Frozen LLMs** - introduces an approach that effectively maps images to the token space of LLMs; enables models like PaLM and GPT-4 to tackle visual tasks without parameter updates; enables multimodal tasks and uses in-context learning to tackle various visual tasks. | [Paper](https://arxiv.org/abs/2306.17842), [Tweet](https://twitter.com/roadjiang/status/1676375112914989056?s=20) |
 | 5) **CodeGen2.5** - releases a new code LLM trained on 1.5T tokens; the 7B model is on par with >15B code-generation models and it’s optimized for fast sampling. | [Paper](https://arxiv.org/abs/2305.02309), [Tweet](https://twitter.com/erik_nijkamp/status/1677055271104045056?s=20) |
 | 6) **Elastic Decision Transformer** - introduces an advancement over Decision Transformers and variants by facilitating trajectory stitching during action inference at test time, achieved by adjusting to shorter history that allows transitions to diverse and better future states. | [Paper](https://arxiv.org/abs/2307.02484),  [Tweet](https://twitter.com/xiaolonw/status/1677003542249484289?s=20)  |
 | 7) **Robots That Ask for Help** -  presents a framework to measure and align the uncertainty of LLM-based planners that ask for help when needed.  | [Paper](https://arxiv.org/abs/2307.01928),  [Tweet](https://twitter.com/allenzren/status/1677000811803443213?s=20)  |