|
@@ -18,7 +18,7 @@ We ❤️ reading ML papers so we've created this repo to highlight the top ML p
|
|
|
| 2) **Composer: Creative and Controllable Image Synthesis with Composable Conditions** - Composer - a 5B parameter creative and controllable diffusion model trained on billions (text, image) pairs. | [Paper](https://arxiv.org/abs/2302.09778), [Project](https://damo-vilab.github.io/composer-page/) , [Github](https://github.com/damo-vilab/composer) , [Tweet](https://twitter.com/dair_ai/status/1629845537913548802?s=20) |
|
|
|
| 3) **The Wisdom of Hindsight Makes Language Models Better Instruction Followers** - Hindsight Instruction Relabeling - an alternative algorithm to train LLMs from feedback; the feedback is converted to instruction by relabeling the original one and training the model, in a supervised way, for better alignment. | [Paper](https://arxiv.org/abs/2302.05206), [Github](https://github.com/tianjunz/HIR) [Tweet](https://twitter.com/dair_ai/status/1629845539964481537?s=20) |
|
|
|
| 4) **Active Prompting with Chain-of-Thought for Large Language Models** - Active-Prompt - a prompting technique to adapt LLMs to different task-specific example prompts (annotated with human-designed chain-of-thought reasoning); this process involves finding where the LLM is most uncertain and annotating those. | [Paper](https://arxiv.org/abs/2302.12246), [Code](https://github.com/shizhediao/active-prompt) [Tweet](https://twitter.com/dair_ai/status/1629845541847724033?s=20) |
|
|
|
-| 5. **Modular Deep Learning** - Modular Deep Learning - a survey offering a unified view of the building blocks of modular neural networks; it also includes a discussion about modularity in the context of scaling LMs, causal inference, and other key topics in ML. | [Paper] (https://arxiv.org/abs/2302.11529) , [Project](https://www.ruder.io/modular-deep-learning/),[Tweet](https://twitter.com/dair_ai/status/1629845544037228551?s=20)
|
|
|
+| 5) **Modular Deep Learning** - a survey offering a unified view of the building blocks of modular neural networks; it also includes a discussion about modularity in the context of scaling LMs, causal inference, and other key topics in ML. | [Paper](https://arxiv.org/abs/2302.11529), [Project](https://www.ruder.io/modular-deep-learning/), [Tweet](https://twitter.com/dair_ai/status/1629845544037228551?s=20) |
|
|
|
| 6) **Recitation-Augmented Language Models** - Recitation-Augmented LMs - an approach that recites passages from the LLM’s own memory to produce final answers; shows high performance on knowledge-intensive tasks. | [Paper](https://arxiv.org/abs/2210.01296) , [Tweet](https://twitter.com/dair_ai/status/1629845546276995075?s=20) |
|
|
|
| 7) **Learning Performance-Improving Code Edits** - LLMs to Optimize Code - an approach that uses LLMs to suggest functionally correct, performance-improving code edits. | [Paper](https://arxiv.org/abs/2302.07867), [Tweet](https://twitter.com/dair_ai/status/1629845548210561029?s=20) |
|
|
|
| 8) **More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models** - Prompt Injection Threats - a comprehensive analysis of novel prompt injection threats to application-integrated LLMs. | [Paper](https://arxiv.org/abs/2302.12173), [Tweet](https://twitter.com/dair_ai/status/1629845550152523777?s=20) |
|