@@ -15,7 +15,7 @@ We ❤️ reading ML papers so we've created this repo to highlight the top ML p
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **GPT-4 Technical Report** - GPT-4 - a large multimodal model with broader general knowledge and problem-solving abilities. | [Paper](https://arxiv.org/abs/2303.08774v2), [Tweet](https://twitter.com/dair_ai/status/1637456913993433089?s=20)|
-| 2) **LERF: Language Embedded Radiance Fields** - LERF (Language Embedded Radiance Fields) - a method for grounding language embeddings from models like CLIP into NeRF; this enables open-ended language queries in 3D. | [Paper](https://arxiv.org/abs/2303.09553), [Data](https://drive.google.com/drive/folders/1vh0mSl7v29yaGsxleadcj-LCZOE_WEWB?usp=sharing), [Tweet](https://twitter.com/dair_ai/status/1637456915658686465?s=20) |
+| 2) **LERF: Language Embedded Radiance Fields** - LERF (Language Embedded Radiance Fields) - a method for grounding language embeddings from models like CLIP into NeRF; this enables open-ended language queries in 3D. | [Paper](https://arxiv.org/abs/2303.09553), [Tweet](https://twitter.com/dair_ai/status/1637456915658686465?s=20) |
| 3) **An Overview on Language Models: Recent Developments and Outlook** - An Overview of Language Models - an overview of language models covering recent developments and future directions. It also covers topics like linguistic units, structures, training methods, evaluation, and applications. | [Paper](https://arxiv.org/abs/2303.05759), [Tweet](https://twitter.com/omarsar0/status/1635273656858460162?s=20) |
| 4) **Eliciting Latent Predictions from Transformers with the Tuned Lens** - Tuned Lens - a method for transformer interpretability that can trace a language model's predictions as they develop layer by layer. | [Paper](https://arxiv.org/abs/2303.08112), [Tweet](https://twitter.com/dair_ai/status/1637456919819440130?s=20) |
| 5) **Meet in the Middle: A New Pre-training Paradigm** - MIM (Meet in the Middle) - a new pre-training paradigm using techniques that jointly improve training data efficiency and capabilities of LMs in the infilling task; performance improvement is shown in code generation tasks. | [Paper](https://arxiv.org/abs/2303.07295), [Tweet](https://twitter.com/dair_ai/status/1637456922004561920?s=20) |