@@ -2,7 +2,7 @@
### Top ML Papers of the Week (Jan 1-8)

| **Paper / Project** | **Link** |
-| ------------- | :---: |
+| ------------- | ------------- |
| 1. **Muse: Text-To-Image Generation via Masked Generative Transformers** -- Google AI introduces Muse, a new text-to-image generation model based on masked generative transformers; it is significantly more efficient than diffusion models such as Imagen and DALL-E 2 (see the decoding sketch after the table). | [Paper](https://arxiv.org/abs/2301.00704) [Project](https://muse-model.github.io/) |
| 2. **VALL-E: Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers** -- Microsoft introduces VALL-E, a text-to-speech model that achieves state-of-the-art zero-shot performance; the synthesis task is treated as a conditional language modeling task over discrete audio codec tokens (see the sketch after the table). | [Project](https://valle-demo.github.io/) |
| 3. **Rethinking with Retrieval: Faithful Large Language Model Inference** -- A new paper shows the potential of enhancing LLMs by retrieving relevant external knowledge for each of the decomposed reasoning steps obtained through chain-of-thought prompting (see the sketch after the table). | [Paper](https://arxiv.org/abs/2301.00303) |
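For the Muse entry (item 1), the key idea is that an image is represented as a grid of discrete VQ tokens and a text-conditioned transformer fills in masked positions over a few parallel decoding steps, which is where the efficiency gain over diffusion comes from. The sketch below is a minimal, hedged illustration of that iterative masked-token decoding loop, not the paper's code: the model, codebook size, grid size, and unmasking schedule are all stand-ins.

```python
# Minimal sketch of masked-generative-transformer decoding (Muse-style parallel decoding).
# Everything here is a stand-in for illustration; none of it is the paper's actual code.
import numpy as np

VOCAB_SIZE = 8192      # assumed size of the VQ token codebook
NUM_TOKENS = 16 * 16   # assumed 16x16 grid of latent image tokens
MASK_ID = VOCAB_SIZE   # reserved id for "masked" positions


def predict_logits(tokens: np.ndarray, text_embedding: np.ndarray) -> np.ndarray:
    """Stand-in for the text-conditioned transformer: returns per-position logits.

    The real model attends over image tokens with conditioning on the text encoder
    output; here we return fixed random logits so the loop runs end to end.
    """
    rng = np.random.default_rng(0)
    return rng.standard_normal((NUM_TOKENS, VOCAB_SIZE))


def decode(text_embedding: np.ndarray, steps: int = 8) -> np.ndarray:
    tokens = np.full(NUM_TOKENS, MASK_ID)              # start with every position masked
    for step in range(steps):
        logits = predict_logits(tokens, text_embedding)
        probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
        probs /= probs.sum(axis=-1, keepdims=True)
        preds = probs.argmax(axis=-1)                  # predicted token per position
        conf = probs.max(axis=-1)                      # confidence per position
        conf[tokens != MASK_ID] = np.inf               # already-decoded tokens stay fixed

        # Cosine schedule: fewer positions remain masked as decoding progresses.
        keep_masked = int(NUM_TOKENS * np.cos(np.pi / 2 * (step + 1) / steps))
        order = np.argsort(conf)                       # least confident first
        tokens = np.where(tokens == MASK_ID, preds, tokens)
        tokens[order[:keep_masked]] = MASK_ID          # re-mask the least confident
    return tokens  # discrete image tokens; a VQ decoder would map these to pixels


print(decode(text_embedding=np.zeros(512))[:10])
```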
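For the VALL-E entry (item 2), "conditional language modeling" means speech is represented as discrete neural-codec tokens, and an autoregressive LM generates those tokens conditioned on the text plus a short acoustic prompt from the target speaker. The sketch below only illustrates that framing; the phonemizer, codec, and language model are hypothetical stand-ins, not the actual Microsoft models or APIs.

```python
# Sketch of "TTS as conditional language modeling" over neural-codec tokens.
# All components are toy stand-ins so the control flow is runnable.
from typing import List
import random

CODEC_VOCAB = 1024          # assumed codebook size of the neural audio codec
EOS = CODEC_VOCAB           # assumed end-of-utterance token


def phonemize(text: str) -> List[int]:
    """Stand-in grapheme-to-phoneme front end: map characters to integer ids."""
    return [ord(c) % 100 for c in text.lower() if c.isalpha()]


def codec_encode(waveform: List[float]) -> List[int]:
    """Stand-in for a neural codec encoder producing discrete acoustic tokens."""
    return [int(abs(x) * CODEC_VOCAB) % CODEC_VOCAB for x in waveform]


def lm_next_token(prefix: List[int]) -> int:
    """Stand-in for the autoregressive transformer over phoneme + acoustic tokens."""
    random.seed(len(prefix))
    return random.randrange(CODEC_VOCAB + 1)  # may emit EOS


def synthesize(text: str, speaker_prompt: List[float], max_len: int = 200) -> List[int]:
    # Condition on phonemes of the target text plus codec tokens of a short
    # enrollment recording; zero-shot voice adaptation comes from this in-context prompt.
    prefix = phonemize(text) + codec_encode(speaker_prompt)
    acoustic_tokens: List[int] = []
    while len(acoustic_tokens) < max_len:
        token = lm_next_token(prefix + acoustic_tokens)
        if token == EOS:
            break
        acoustic_tokens.append(token)
    return acoustic_tokens  # a codec decoder would turn these back into a waveform


print(len(synthesize("hello world", speaker_prompt=[0.01, -0.02, 0.03])))
```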
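For the Rethinking with Retrieval entry (item 3), the overall recipe is: elicit a chain-of-thought answer, decompose it into reasoning steps, retrieve external evidence for each step, and have the model revise its answer against that evidence. The sketch below assumes hypothetical `call_llm` and `retrieve` placeholders, and a naive sentence split stands in for the paper's step decomposition.

```python
# Minimal sketch of retrieval-augmented "rethinking" over chain-of-thought steps.
# `call_llm` and `retrieve` are hypothetical placeholders, not a real API.
from typing import List


def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call; replace with the model of your choice."""
    return "Step 1: compare populations. Final answer: Australia."


def retrieve(query: str, k: int = 3) -> List[str]:
    """Stand-in for a retriever over an external corpus (e.g. sparse or dense search)."""
    return [f"passage about: {query}"] * k


def rethink_with_retrieval(question: str) -> str:
    # 1. Chain-of-thought prompting yields an answer with explicit reasoning steps.
    cot = call_llm(f"{question}\nLet's think step by step.")

    # 2. Decompose the chain of thought into individual steps (naive sentence split).
    steps = [s.strip() for s in cot.split(".") if s.strip()]

    # 3. Retrieve external knowledge relevant to each decomposed step.
    evidence: List[str] = []
    for step in steps:
        evidence.extend(retrieve(step))

    # 4. Ask the model to re-answer grounded in the retrieved evidence, so the final
    #    answer stays faithful to external knowledge rather than parametric memory alone.
    context = "\n".join(evidence)
    return call_llm(
        f"Question: {question}\n"
        f"Draft reasoning: {cot}\n"
        f"Retrieved evidence:\n{context}\n"
        "Revise the reasoning where it conflicts with the evidence and give the final answer."
    )


print(rethink_with_retrieval("Which country has the larger population, Canada or Australia?"))
```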