
Update README.md

angysaravia 2 years ago
parent
commit bc9363116b
1 changed file with 1 addition and 1 deletion
  1. README.md  +1 -1

+1 -1  README.md

@@ -10,7 +10,7 @@ We ❤️ reading ML papers so we've created this repo to highlight the top ML p
 | **Paper**  | **Links** |
 | ------------- | ------------- |
 | 1) **Symbolic Discovery of Optimization Algorithms** - Lion (EvoLved Sign Momentum) - a simple and effective optimization algorithm that’s more memory-efficient than Adam (a sketch of the update rule follows this table).   | [Paper](https://arxiv.org/abs/2302.06675), [Tweet](https://twitter.com/dair_ai/status/1627671313874575362?s=20)|
-| 2) *Transformer models: an introduction and catalog** - Transformer models: an introduction and catalog.  | [Paper](https://arxiv.org/abs/2302.07730), [Tweet](https://twitter.com/dair_ai/status/1627671315678126082?s=20) |
+| 2) **Transformer models: an introduction and catalog** - Transformer models: an introduction and catalog.  | [Paper](https://arxiv.org/abs/2302.07730), [Tweet](https://twitter.com/dair_ai/status/1627671315678126082?s=20) |
 | 3) **3D-aware Conditional Image Synthesis** - pix2pix3D - a 3D-aware conditional generative model extended with neural radiance fields for controllable photorealistic image synthesis. | [Paper](xxx), [Project](https://www.cs.cmu.edu/~pix2pix3D/), [Tweet](https://twitter.com/dair_ai/status/1627671317355831296?s=20) |
 | 4) **The Capacity for Moral Self-Correction in Large Language Models** - Moral Self-Correction in Large Language Models - finds strong evidence that language models trained with RLHF have the capacity for moral self-correction. The capability emerges at 22B model parameters and typically improves with scale.  | [Paper](https://arxiv.org/abs/2302.07459), [Tweet](https://twitter.com/dair_ai/status/1627671319100768260?s=20)  |
 | 6) **Language Quantized AutoEncoders (LQAE)** - Language Quantized AutoEncoders (LQAE) - an unsupervised method for text-image alignment that leverages pretrained language models; it enables few-shot image classification with LLMs.   | [Paper](https://arxiv.org/abs/2302.00902), [Project](https://arxiv.org/abs/2302.00902), [Code](https://github.com/lhao499/lqae), [Tweet](https://twitter.com/haoliuhl/status/1625273748629901312?s=20)  |
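
Entry 1 describes the Lion (EvoLved Sign Momentum) optimizer. As a quick illustration, here is a minimal NumPy sketch of the update rule reported in the paper; the function name, defaults, and array-based interface are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def lion_update(param, grad, momentum, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    """One Lion step (illustrative sketch): sign of an interpolated momentum,
    decoupled weight decay, and a single EMA of gradients as the only state."""
    direction = np.sign(beta1 * momentum + (1.0 - beta1) * grad)   # sign of interpolated momentum
    new_param = param - lr * (direction + weight_decay * param)    # decoupled weight decay, AdamW-style
    new_momentum = beta2 * momentum + (1.0 - beta2) * grad         # single momentum buffer
    return new_param, new_momentum
```

Because the update is just the sign of an interpolated momentum plus decoupled weight decay, Lion keeps only one momentum buffer per parameter, which is where its memory savings over Adam come from.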