Set the numpy seed in finetuning.py so it is fixed throughout finetuning (including in custom_dataset.py) and picked up by functions such as Dataset.train_test_split. This avoids different ranks computing different train/test splits, which can cause NCCL collective operation timeout errors.
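
A minimal sketch of the idea, assuming a Hugging Face `datasets.Dataset`; the toy data, helper name, and seed value below are illustrative, not the repo's actual code:

```python
import numpy as np
import torch
from datasets import Dataset

def set_seeds(seed: int) -> None:
    # Seed torch and numpy before any data loading. When train_test_split
    # is called without an explicit seed/generator, datasets derives one
    # from numpy's global RNG state, so seeding numpy here makes every
    # rank compute the same split.
    torch.manual_seed(seed)
    np.random.seed(seed)

set_seeds(42)
data = Dataset.from_dict({"text": [f"sample {i}" for i in range(100)]})
splits = data.train_test_split(test_size=0.1)  # identical split on all ranks
train_ds, eval_ds = splits["train"], splits["test"]
print(len(train_ds), len(eval_ds))  # 90 10
```

Because all ranks now shuffle with the same RNG state, no rank ends up with a differently sized shard, so distributed collectives stay in sync instead of timing out.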