Sanyam Bhutani 3 months ago
Parent
Commit
0aab5b593a

The file diff is suppressed because it is too large
+ 1 - 1
3p-integrations/togetherai/README.md


+ 2 - 2
end-to-end-use-cases/Multi-Modal-RAG/README.md

@@ -13,7 +13,7 @@ This is a complete workshop on how to label images using the new Llama 3.2-Visio
 Before we start:
 
 1. Please grab your HF CLI Token from [here](https://huggingface.co/settings/tokens)
-2. Git clone [this dataset](https://huggingface.co/datasets/Sanyam/MM-Demo) inside the Multi-Modal-RAG folder: `git clone https://huggingface.co/datasets/Sanyam/MM-Demo`
+2. Git clone [this dataset](https://huggingface.co/datasets/Sanyam/MM-Demo) inside the Multi-Modal-RAG folder: `git clone https://huggingface.co/datasets/Sanyam/MM-Demo` (Remember to thank the original author by upvoting [Kaggle Dataset](https://www.kaggle.com/datasets/agrigorev/clothing-dataset-full))
 3. Make sure you grab a together.ai token [here](https://www.together.ai)
 
 ## Detailed Outline for running:
@@ -32,7 +32,7 @@ Here's the detailed outline:
 
 In this step we start with an unlabeled dataset and use the image captioning capability of the model to write a description of the image and categorize it.
 
-[Notebook for Step 1](./notebooks/Part_1_Data_Preperation.ipynb) and [Script for Step 1](./scripts/label_script.py)
+[Notebook for Step 1](./notebooks/Part_1_Data_Preparation.ipynb) and [Script for Step 1](./scripts/label_script.py)
 
 To run the script (remember to set n):
 ```

+ 2 - 2
end-to-end-use-cases/Multi-Modal-RAG/notebooks/Part_1_Data_Preperation.ipynb

@@ -5,9 +5,9 @@
    "id": "01af3b74-b3b9-4c1f-b41d-2911e7f19ffe",
    "metadata": {},
    "source": [
-    "## Data Preperation Notebook\n",
+    "## Data Preparation Notebook\n",
     "\n",
-    "To make the experience consistent, we will use [this link]() for getting access to our dataset. To credit, thanks to the author [here]() for making it available. \n",
+    "To make the experience consistent, we will use [this link](https://huggingface.co/datasets/Sanyam/MM-Demo) for getting access to our dataset. To credit, thanks to the author [here](https://www.kaggle.com/datasets/agrigorev/clothing-dataset-full) for making it available. \n",
     "\n",
     "As thanks to original author-Please upvote the dataset version on Kaggle if you enjoy this course."
    ]
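The Step 1 flow in the diff above (use Llama 3.2-Vision via together.ai to caption an unlabeled image and categorize it) could be sketched roughly as follows. The model id, prompt, and endpoint here are assumptions for illustration, not the repo's actual `label_script.py`:

```python
# Hypothetical sketch of the Step 1 labeling request; model id, prompt,
# and endpoint are assumptions, not taken from the repository's script.
import json
import urllib.request

API_URL = "https://api.together.xyz/v1/chat/completions"
MODEL = "meta-llama/Llama-3.2-11B-Vision-Instruct-Turbo"  # assumed model id

def build_caption_request(image_url: str) -> dict:
    """Build a chat payload asking the vision model to describe and categorize one image."""
    return {
        "model": MODEL,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this clothing item and assign it a category."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    }

def caption_image(api_key: str, image_url: str) -> str:
    """POST the payload to the (assumed) endpoint and return the model's caption."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_caption_request(image_url)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

In a batch run over the dataset, `build_caption_request` would be called once per image and the returned captions written out as labels, which is what the notebook and script then post-process.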