
fix src links

Sanyam Bhutani, 3 months ago
Commit 7e9cab0c43
2 files changed, 12 insertions and 11 deletions
  1. src/docs/multi_gpu.md (+6 −6)
  2. src/docs/single_gpu.md (+6 −5)

+ 6 - 6
src/docs/multi_gpu.md

@@ -86,9 +86,9 @@ sbatch recipes/quickstart/finetuning/multi_node.slurm
 
 ## How to run with different datasets?
 
-Currently 4 datasets are supported that can be found in [Datasets config file](../src/llama_recipes/configs/datasets.py).
+Currently 4 datasets are supported that can be found in [Datasets config file](../llama_recipes/configs/datasets.py).
 
-* `grammar_dataset` : use this [notebook](../src/llama_recipes/datasets/grammar_dataset/grammar_dataset_process.ipynb) to pull and process the Jfleg and C4 200M datasets for grammar checking.
+* `grammar_dataset` : use this [notebook](../llama_recipes/datasets/grammar_dataset/grammar_dataset_process.ipynb) to pull and process the Jfleg and C4 200M datasets for grammar checking.
 
 * `alpaca_dataset` : to get this open source data please download the `aplaca.json` to `dataset` folder.
 
@@ -117,7 +117,7 @@ torchrun --nnodes 1 --nproc_per_node 4  recipes/quickstart/finetuning/finetuning
 
 ## Where to configure settings?
 
-* [Training config file](../src/llama_recipes/configs/training.py) is the main config file that helps to specify the settings for our run and can be found in [configs folder](../src/llama_recipes/configs/)
+* [Training config file](../llama_recipes/configs/training.py) is the main config file that helps to specify the settings for our run and can be found in [configs folder](../src/llama_recipes/configs/)
 
 It lets us specify the training settings for everything from `model_name` to `dataset_name`, `batch_size` and so on. Below is the list of supported settings:
 
@@ -166,11 +166,11 @@ It lets us specify the training settings for everything from `model_name` to `da
     profiler_dir: str = "PATH/to/save/profiler/results" # will be used if using profiler
 ```
 
-* [Datasets config file](../src/llama_recipes/configs/datasets.py) provides the available options for datasets.
+* [Datasets config file](../llama_recipes/configs/datasets.py) provides the available options for datasets.
 
-* [peft config file](../src/llama_recipes/configs/peft.py) provides the supported PEFT methods and respective settings that can be modified.
+* [peft config file](../llama_recipes/configs/peft.py) provides the supported PEFT methods and respective settings that can be modified.
 
-* [FSDP config file](../src/llama_recipes/configs/fsdp.py) provides FSDP settings such as:
+* [FSDP config file](../llama_recipes/configs/fsdp.py) provides FSDP settings such as:
 
    * `mixed_precision` boolean flag to specify using mixed precision, defaults to true.
 

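The motivation for these link edits: the docs live in `src/docs/`, so a relative link prefixed `../src/` climbs out of `src/docs/` and then re-enters a `src/` segment, resolving to a non-existent `src/src/...` path; dropping that segment resolves to the real `src/llama_recipes/...` location. A minimal sketch of how a Markdown renderer would resolve these links (the helper function is illustrative, not part of the repo):

```python
import posixpath

def resolve(doc_path: str, link: str) -> str:
    # A relative Markdown link resolves against the directory
    # that contains the document.
    return posixpath.normpath(posixpath.join(posixpath.dirname(doc_path), link))

doc = "src/docs/multi_gpu.md"

# Old link: escapes src/docs/ then re-enters "src/", landing at src/src/...
print(resolve(doc, "../src/llama_recipes/configs/datasets.py"))
# src/src/llama_recipes/configs/datasets.py  (does not exist)

# Fixed link: resolves to the actual module location under src/.
print(resolve(doc, "../llama_recipes/configs/datasets.py"))
# src/llama_recipes/configs/datasets.py
```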
+ 6 - 5
src/docs/single_gpu.md

@@ -35,9 +35,9 @@ The args used in the command above are:
 
 ## How to run with different datasets?
 
-Currently 4 datasets are supported that can be found in [Datasets config file](../src/llama_recipes/configs/datasets.py).
+Currently 4 datasets are supported that can be found in [Datasets config file](../llama_recipes/configs/datasets.py).
 
-* `grammar_dataset` : use this [notebook](../src/llama_recipes/datasets/grammar_dataset/grammar_dataset_process.ipynb) to pull and process the Jfleg and C4 200M datasets for grammar checking.
+* `grammar_dataset` : use this [notebook](../llama_recipes/datasets/grammar_dataset/grammar_dataset_process.ipynb) to pull and process the Jfleg and C4 200M datasets for grammar checking.
 
 * `alpaca_dataset` : to get this open source data please download the `aplaca.json` to `ft_dataset` folder.
 
@@ -67,7 +67,7 @@ python -m llama_recipes.finetuning  --use_peft --peft_method lora --quantization
 
 ## Where to configure settings?
 
-* [Training config file](../src/llama_recipes/configs/training.py) is the main config file that help to specify the settings for our run can be found in
+* [Training config file](../llama_recipes/configs/training.py) is the main config file that help to specify the settings for our run can be found in
 
 It lets us specify the training settings, everything from `model_name` to `dataset_name`, `batch_size` etc. can be set here. Below is the list of supported settings:
 
@@ -117,9 +117,10 @@ It let us specify the training settings, everything from `model_name` to `datase
 
 ```
 
-* [Datasets config file](../src/llama_recipes/configs/datasets.py) provides the available options for datasets.
+* [Datasets config file](../llama_recipes/configs/datasets.py)
+    ../src/llama_recipes/configs/datasets.py) provides the available options for datasets.
 
-* [peft config file](../src/llama_recipes/configs/peft.py) provides the supported PEFT methods and respective settings that can be modified.
+* [peft config file](../llama_recipes/configs/peft.py) provides the supported PEFT methods and respective settings that can be modified.
 
 ## FLOPS Counting and Pytorch Profiling
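Note that some added lines in this commit still carry the old `../src/` prefix (see the `[configs folder]` link in the first file and the split `[Datasets config file]` link in the second). A quick grep would surface any remaining stale links; this sketch builds a scratch doc with one stale link just to demonstrate the pattern (paths under `/tmp` are illustrative):

```shell
# Create a scratch docs tree containing one stale ../src/ link.
mkdir -p /tmp/linkcheck/src/docs
printf '[cfg](../src/llama_recipes/configs/training.py)\n' \
  > /tmp/linkcheck/src/docs/multi_gpu.md

# List every remaining ../src/ link; in the real repo, run from the
# repository root against src/docs/.
grep -rn '\.\./src/' /tmp/linkcheck/src/docs/
```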