|
@@ -521,7 +521,7 @@ At DAIR.AI we ❤️ reading ML papers so we've created this repo to highlight t
|
|
|
| ------------- | ------------- |
|
|
|
| 1) **Symbolic Discovery of Optimization Algorithms** - a simple and effective optimization algorithm that’s more memory-efficient than Adam. | [Paper](https://arxiv.org/abs/2302.06675), [Tweet](https://twitter.com/dair_ai/status/1627671313874575362?s=20)|
|
|
|
| 2) **Transformer models: an introduction and catalog** | [Paper](https://arxiv.org/abs/2302.07730), [Tweet](https://twitter.com/dair_ai/status/1627671315678126082?s=20) |
|
|
|
-| 3) **3D-aware Conditional Image Synthesis** - a 3D-aware conditional generative model extended with neural radiance fields for controllable photorealistic image synthesis.| [Paper](xxx), [Project](https://www.cs.cmu.edu/~pix2pix3D/) [Tweet](https://twitter.com/dair_ai/status/1627671317355831296?s=20) |
|
|
|
+| 3) **3D-aware Conditional Image Synthesis** - a 3D-aware conditional generative model extended with neural radiance fields for controllable photorealistic image synthesis.| [Project](https://www.cs.cmu.edu/~pix2pix3D/), [Tweet](https://twitter.com/dair_ai/status/1627671317355831296?s=20) |
|
|
|
| 4) **The Capacity for Moral Self-Correction in Large Language Models** - finds strong evidence that language models trained with RLHF have the capacity for moral self-correction. The capability emerges at 22B model parameters and typically improves with scale. | [Paper](https://arxiv.org/abs/2302.07459), [Tweet](https://twitter.com/dair_ai/status/1627671319100768260?s=20) |
|
|
|
| 5) **Vision meets RL** - uses reinforcement learning to align computer vision models with task rewards; observes large performance boost across multiple CV tasks such as object detection and colorization. | [Paper](https://arxiv.org/abs/2302.08242) |
|
|
|
| 6) **Language Quantized AutoEncoders: Towards Unsupervised Text-Image Alignment** - an unsupervised method for text-image alignment that leverages pretrained language models; it enables few-shot image classification with LLMs. | [Paper](https://arxiv.org/abs/2302.00902), [Code](https://github.com/lhao499/lqae), [Tweet](https://twitter.com/haoliuhl/status/1625273748629901312?s=20) |
|