
Update README.md

angysaravia 2 years ago
commit 1b515404de
1 changed file with 1 addition and 1 deletion

+ 1 - 1
README.md

@@ -12,7 +12,7 @@ Highlighting top ML papers every week.
 | 2. **VALL-E Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers** -- Microsoft introduces VALL-E, a text-to-audio model that achieves state-of-the-art zero-shot performance; the text-to-speech synthesis task is treated as a conditional language modeling task.  | [Project](https://valle-demo.github.io/), [Tweet](https://twitter.com/dair_ai/status/1612153097962328067?s=20&t=ChwZWzSmoRlZKnD54fsV6w) |
 | 3. **Rethinking with Retrieval: Faithful Large Language Model Inference** -- A new paper shows the potential of enhancing LLMs by retrieving relevant external knowledge based on decomposed reasoning steps obtained through chain-of-thought prompting.  | [Paper](https://arxiv.org/abs/2301.00303), [Tweet](https://twitter.com/dair_ai/status/1612153100114055171?s=20&t=ChwZWzSmoRlZKnD54fsV6w) |
 | 4. **SPARSEGPT: Massive Language Models Can Be Accurately Pruned In One-Shot** -- Presents a technique for compressing large language models without sacrificing performance; models can be "pruned to at least 50% sparsity in one-shot, without any retraining."  | [Paper](https://arxiv.org/pdf/2301.00774.pdf), [Tweet](https://twitter.com/dair_ai/status/1612153102513360901?s=20&t=ChwZWzSmoRlZKnD54fsV6w)  |
-| 5. **ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders** -- ConvNeXt V2 is a performant model based on a fully convolutional masked autoencoder framework and other architectural improvements. CNNs are striking back!  | [Paper](https://arxiv.org/abs/2301.00808), [Tweet](https://twitter.com/dair_ai/status/1612153104329281538?s=20&t=ChwZWzSmoRlZKnD54fsV6w)  |
+| 5. **ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders** -- ConvNeXt V2 is a performant model based on a fully convolutional masked autoencoder framework and other architectural improvements. CNNs are striking back!  | [Paper](https://arxiv.org/abs/2301.00808), [Code](https://github.com/facebookresearch/convnext-v2), [Tweet](https://twitter.com/dair_ai/status/1612153104329281538?s=20&t=ChwZWzSmoRlZKnD54fsV6w)  |
 | 6. **Large Language Models as Corporate Lobbyists** -- As LLMs gain more capabilities, we are starting to see a wider range of applications; this paper uses large language models to conduct corporate lobbying activities.  | [Paper](https://arxiv.org/abs/2301.01181), [Code](https://github.com/JohnNay/llm-lobbyist), [Tweet](https://twitter.com/dair_ai/status/1612153106355130372?s=20&t=ChwZWzSmoRlZKnD54fsV6w)  |
 | 7. **Superposition, Memorization, and Double Descent** -- This work aims to better understand how deep learning models overfit or memorize examples; it observes interesting phenomena and is an important step toward a mechanistic theory of memorization.  | [Paper](https://transformer-circuits.pub/2023/toy-double-descent/index.html), [Tweet](https://twitter.com/dair_ai/status/1612153108460892160?s=20&t=ChwZWzSmoRlZKnD54fsV6w)  |
 | 8. **StitchNet: Composing Neural Networks from Pre-Trained Fragments** -- An interesting idea for creating new, coherent neural networks by reusing pretrained fragments of existing NNs; not straightforward, but there is potential for efficiently reusing learned knowledge in pre-trained networks for complex tasks.  | [Paper](https://arxiv.org/abs/2301.01947), [Tweet](https://twitter.com/dair_ai/status/1612153110452903936?s=20&t=ChwZWzSmoRlZKnD54fsV6w)  |