瀏覽代碼

recipes/quickstart folder updated

Pia Papanna 10 月之前
父節點
當前提交
4344a420f2
共有 35 個文件被更改,包括 22 次插入29 次删除
  1. 6 6
      UPDATES.md
  2. 1 1
      docs/FAQ.md
  3. 1 1
      docs/multi_gpu.md
  4. 0 0
      recipes/3p_integrations/README.md
  5. 1 7
      recipes/inference/model_servers/hf_text_generation_inference/README.md
  6. 0 0
      recipes/3p_integrations/hf_text_generation_inference/merge_lora_weights.py
  7. 4 5
      recipes/inference/model_servers/llama-on-prem.md
  8. 0 0
      recipes/3p_integrations/vllm/inference.py
  9. 2 2
      recipes/README.md
  10. 0 0
      recipes/quickstart/RAG/hello_llama_cloud.ipynb
  11. 0 0
      recipes/quickstart/finetuning/LLM_finetuning_overview.md
  12. 0 0
      recipes/quickstart/finetuning/README.md
  13. 1 1
      recipes/finetuning/datasets/README.md
  14. 0 0
      recipes/quickstart/finetuning/datasets/custom_dataset.py
  15. 0 0
      recipes/quickstart/finetuning/finetuning.py
  16. 0 0
      recipes/quickstart/finetuning/multi_node.slurm
  17. 0 0
      recipes/quickstart/finetuning/multigpu_finetuning.md
  18. 0 0
      recipes/quickstart/finetuning/quickstart_peft_finetuning.ipynb
  19. 0 0
      recipes/quickstart/finetuning/singlegpu_finetuning.md
  20. 0 0
      recipes/quickstart/inference/code_llama/README.md
  21. 0 0
      recipes/quickstart/inference/code_llama/code_completion_example.py
  22. 0 0
      recipes/quickstart/inference/code_llama/code_completion_prompt.txt
  23. 0 0
      recipes/quickstart/inference/code_llama/code_infilling_example.py
  24. 0 0
      recipes/quickstart/inference/code_llama/code_infilling_prompt.txt
  25. 0 0
      recipes/quickstart/inference/code_llama/code_instruct_example.py
  26. 3 3
      recipes/inference/local_inference/README.md
  27. 0 0
      recipes/quickstart/inference/local_inference/chat_completion/chat_completion.py
  28. 0 0
      recipes/quickstart/inference/local_inference/chat_completion/chats.json
  29. 0 0
      recipes/quickstart/inference/local_inference/inference.py
  30. 0 0
      recipes/quickstart/inference/local_inference/samsum_prompt.txt
  31. 0 0
      recipes/quickstart/inference/mobile_inference/android_inference/README.md
  32. 0 0
      recipes/quickstart/inference/mobile_inference/android_inference/mlc-package-config.json
  33. 0 0
      recipes/quickstart/inference/mobile_inference/android_inference/requirements.txt
  34. 2 2
      src/tests/datasets/test_custom_dataset.py
  35. 1 1
      tools/benchmarks/inference/on_prem/README.md

+ 6 - 6
UPDATES.md

@@ -1,19 +1,19 @@
 ## System Prompt Update
 
 ### Observed Issue
-We received feedback from the community on our prompt template and we are providing an update to reduce the false refusal rates seen. False refusals occur when the model incorrectly refuses to answer a question that it should, for example due to overly broad instructions to be cautious in how it provides responses. 
+We received feedback from the community on our prompt template and we are providing an update to reduce the false refusal rates seen. False refusals occur when the model incorrectly refuses to answer a question that it should, for example due to overly broad instructions to be cautious in how it provides responses.
 
 ### Updated approach
-Based on evaluation and analysis, we recommend the removal of the system prompt as the default setting.  Pull request [#626](https://github.com/facebookresearch/llama/pull/626) removes the system prompt as the default option, but still provides an example to help enable experimentation for those using it. 
+Based on evaluation and analysis, we recommend the removal of the system prompt as the default setting.  Pull request [#626](https://github.com/facebookresearch/llama/pull/626) removes the system prompt as the default option, but still provides an example to help enable experimentation for those using it.
 
 ## Token Sanitization Update
 
 ### Observed Issue
-The PyTorch scripts currently provided for tokenization and model inference allow for direct prompt injection via string concatenation. Prompt injections allow for the addition of special system and instruction prompt strings from user-provided prompts. 
+The PyTorch scripts currently provided for tokenization and model inference allow for direct prompt injection via string concatenation. Prompt injections allow for the addition of special system and instruction prompt strings from user-provided prompts.
 
-As noted in the documentation, these strings are required to use the fine-tuned chat models. However, prompt injections have also been used for manipulating or abusing models by bypassing their safeguards, allowing for the creation of content or behaviors otherwise outside the bounds of acceptable use. 
+As noted in the documentation, these strings are required to use the fine-tuned chat models. However, prompt injections have also been used for manipulating or abusing models by bypassing their safeguards, allowing for the creation of content or behaviors otherwise outside the bounds of acceptable use.
 
 ### Updated approach
-We recommend sanitizing [these strings](https://github.com/meta-llama/llama?tab=readme-ov-file#fine-tuned-chat-models) from any user provided prompts. Sanitization of user prompts mitigates malicious or accidental abuse of these strings. The provided scripts have been updated to do this. 
+We recommend sanitizing [these strings](https://github.com/meta-llama/llama?tab=readme-ov-file#fine-tuned-chat-models) from any user provided prompts. Sanitization of user prompts mitigates malicious or accidental abuse of these strings. The provided scripts have been updated to do this.
 
-Note: even with this update safety classifiers should still be applied to catch unsafe behaviors or content produced by the model. An [example](./recipes/inference/local_inference/inference.py) of how to deploy such a classifier can be found in the llama-recipes repository.
+Note: even with this update safety classifiers should still be applied to catch unsafe behaviors or content produced by the model. An [example](./recipes/quickstart/inference/local_inference/inference.py) of how to deploy such a classifier can be found in the llama-recipes repository.

+ 1 - 1
docs/FAQ.md

@@ -16,7 +16,7 @@ Here we discuss frequently asked questions that may occur and we found useful al
 
 4. Can I add custom datasets?
 
-    Yes, you can find more information on how to do that [here](../recipes/finetuning/datasets/README.md).
+    Yes, you can find more information on how to do that [here](../recipes/quickstart/finetuning/datasets/README.md).
 
 5. What are the hardware SKU requirements for deploying these models?
 

+ 1 - 1
docs/multi_gpu.md

@@ -9,7 +9,7 @@ To run fine-tuning on multi-GPUs, we will  make use of two packages:
 Given the combination of PEFT and FSDP, we would be able to fine tune a Meta Llama 3 8B model on multiple GPUs in one node or multi-node.
 
 ## Requirements
-To run the examples, make sure to install the llama-recipes package and clone the github repository in order to use the provided [`finetuning.py`](../recipes/finetuning/finetuning.py) script with torchrun (See [README.md](../README.md) for details).
+To run the examples, make sure to install the llama-recipes package and clone the github repository in order to use the provided [`finetuning.py`](../recipes/quickstart/finetuning/finetuning.py) script with torchrun (See [README.md](../README.md) for details).
 
 **Please note that the llama_recipes package will install PyTorch 2.0.1 version, in case you want to run FSDP + PEFT, please make sure to install PyTorch nightlies.**
 

recipes/inference/model_servers/README.md → recipes/3p_integrations/README.md


+ 1 - 7
recipes/inference/model_servers/hf_text_generation_inference/README.md

@@ -2,7 +2,7 @@
 
 This document shows how to serve a fine tuned Llama mode with HuggingFace's text-generation-inference server. This option is currently only available for models that were trained using the LoRA method or without using the `--use_peft` argument.
 
-## Step 0: Merging the weights (Only required if LoRA method was used) 
+## Step 0: Merging the weights (Only required if LoRA method was used)
 
 In case the model was fine tuned with LoRA method we need to merge the weights of the base model with the adapter weight. For this we can use the script `merge_lora_weights.py` which is located in the same folder as this README file.
 
@@ -40,9 +40,3 @@ curl 127.0.0.1:8080/generate_stream \
 ```
 
 Further information can be found in the documentation of the [hf text-generation-inference](https://github.com/huggingface/text-generation-inference) solution.
-
-
-
-
-
-

recipes/inference/model_servers/hf_text_generation_inference/merge_lora_weights.py → recipes/3p_integrations/hf_text_generation_inference/merge_lora_weights.py


文件差異過大導致無法顯示
+ 4 - 5
recipes/inference/model_servers/llama-on-prem.md


recipes/inference/model_servers/vllm/inference.py → recipes/3p_integrations/vllm/inference.py


+ 2 - 2
recipes/README.md

@@ -4,8 +4,8 @@ This folder contains examples organized by topic:
 |---|---|
 [quickstart](./quickstart)|The "Hello World" of using Llama 3, start here if you are new to using Llama 3
 [multilingual](./multilingual)|Scripts to add a new language to Llama
-[finetuning](./finetuning)|Scripts to finetune Llama 3 on single-GPU and multi-GPU setups
-[inference](./inference)|Scripts to deploy Llama 3 for inference [locally](./inference/local_inference/), on mobile [Android](./inference/mobile_inference/android_inference/) and using [model servers](./inference/mobile_inference/)
+[finetuning](./quickstart/finetuning)|Scripts to finetune Llama 3 on single-GPU and multi-GPU setups
+[inference](./quickstart/inference)|Scripts to deploy Llama 3 for inference [locally](./quickstart/inference/local_inference/), on mobile [Android](./quickstart/inference/mobile_inference/android_inference/) and using [model servers](./quickstart/inference/mobile_inference/)
 [use_cases](./use_cases)|Scripts showing common applications of Llama 3
 [responsible_ai](./responsible_ai)|Scripts to use PurpleLlama for safeguarding model outputs
 [llama_api_providers](./llama_api_providers)|Scripts to run inference on Llama via hosted endpoints

recipes/use_cases/RAG/HelloLlamaCloud.ipynb → recipes/quickstart/RAG/hello_llama_cloud.ipynb


recipes/finetuning/LLM_finetuning_overview.md → recipes/quickstart/finetuning/LLM_finetuning_overview.md


recipes/finetuning/README.md → recipes/quickstart/finetuning/README.md


文件差異過大導致無法顯示
+ 1 - 1
recipes/finetuning/datasets/README.md


recipes/finetuning/datasets/custom_dataset.py → recipes/quickstart/finetuning/datasets/custom_dataset.py


recipes/finetuning/finetuning.py → recipes/quickstart/finetuning/finetuning.py


recipes/finetuning/multi_node.slurm → recipes/quickstart/finetuning/multi_node.slurm


recipes/finetuning/multigpu_finetuning.md → recipes/quickstart/finetuning/multigpu_finetuning.md


recipes/finetuning/quickstart_peft_finetuning.ipynb → recipes/quickstart/finetuning/quickstart_peft_finetuning.ipynb


recipes/finetuning/singlegpu_finetuning.md → recipes/quickstart/finetuning/singlegpu_finetuning.md


recipes/code_llama/README.md → recipes/quickstart/inference/code_llama/README.md


recipes/code_llama/code_completion_example.py → recipes/quickstart/inference/code_llama/code_completion_example.py


recipes/code_llama/code_completion_prompt.txt → recipes/quickstart/inference/code_llama/code_completion_prompt.txt


recipes/code_llama/code_infilling_example.py → recipes/quickstart/inference/code_llama/code_infilling_example.py


recipes/code_llama/code_infilling_prompt.txt → recipes/quickstart/inference/code_llama/code_infilling_prompt.txt


recipes/code_llama/code_instruct_example.py → recipes/quickstart/inference/code_llama/code_instruct_example.py


+ 3 - 3
recipes/inference/local_inference/README.md

@@ -69,7 +69,7 @@ In case you have fine-tuned your model with pure FSDP and saved the checkpoints
 This is helpful if you have fine-tuned you model using FSDP only as follows:
 
 ```bash
-torchrun --nnodes 1 --nproc_per_node 8  recipes/finetuning/finetuning.py --enable_fsdp --model_name /path_of_model_folder/7B --dist_checkpoint_root_folder model_checkpoints --dist_checkpoint_folder fine-tuned --pure_bf16
+torchrun --nnodes 1 --nproc_per_node 8  recipes/quickstart/finetuning/finetuning.py --enable_fsdp --model_name /path_of_model_folder/7B --dist_checkpoint_root_folder model_checkpoints --dist_checkpoint_folder fine-tuned --pure_bf16
 ```
 Then convert your FSDP checkpoint to HuggingFace checkpoints using:
 ```bash
@@ -82,6 +82,6 @@ By default, training parameter are saved in `train_params.yaml` in the path wher
 Then run inference using:
 
 ```bash
-python inference.py --model_name <training_config.output_dir> --prompt_file <test_prompt_file> 
+python inference.py --model_name <training_config.output_dir> --prompt_file <test_prompt_file>
 
-```
+```

recipes/inference/local_inference/chat_completion/chat_completion.py → recipes/quickstart/inference/local_inference/chat_completion/chat_completion.py


recipes/inference/local_inference/chat_completion/chats.json → recipes/quickstart/inference/local_inference/chat_completion/chats.json


recipes/inference/local_inference/inference.py → recipes/quickstart/inference/local_inference/inference.py


recipes/inference/local_inference/samsum_prompt.txt → recipes/quickstart/inference/local_inference/samsum_prompt.txt


recipes/inference/mobile_inference/android_inference/README.md → recipes/quickstart/inference/mobile_inference/android_inference/README.md


recipes/inference/mobile_inference/android_inference/mlc-package-config.json → recipes/quickstart/inference/mobile_inference/android_inference/mlc-package-config.json


recipes/inference/mobile_inference/android_inference/requirements.txt → recipes/quickstart/inference/mobile_inference/android_inference/requirements.txt


+ 2 - 2
src/tests/datasets/test_custom_dataset.py

@@ -51,7 +51,7 @@ def test_custom_dataset(step_lr, optimizer, get_model, tokenizer, train, mocker,
     kwargs = {
         "dataset": "custom_dataset",
         "model_name": llama_version,
-        "custom_dataset.file": "recipes/finetuning/datasets/custom_dataset.py",
+        "custom_dataset.file": "recipes/quickstart/finetuning/datasets/custom_dataset.py",
         "custom_dataset.train_split": "validation",
         "batch_size_training": 2,
         "val_batch_size": 4,
@@ -108,7 +108,7 @@ def test_unknown_dataset_error(step_lr, optimizer, tokenizer, get_model, train,
 
     kwargs = {
         "dataset": "custom_dataset",
-        "custom_dataset.file": "recipes/finetuning/datasets/custom_dataset.py:get_unknown_dataset",
+        "custom_dataset.file": "recipes/quickstart/finetuning/datasets/custom_dataset.py:get_unknown_dataset",
         "batch_size_training": 1,
         "use_peft": False,
         }

+ 1 - 1
tools/benchmarks/inference/on_prem/README.md

@@ -7,7 +7,7 @@ We support benchmark on these serving framework:
 
 # vLLM - Getting Started
 
-To get started, we first need to deploy containers on-prem as a API host. Follow the guidance [here](../../../inference/model_servers/llama-on-prem.md#setting-up-vllm-with-llama-3) to deploy vLLM on-prem.
+To get started, we first need to deploy containers on-prem as a API host. Follow the guidance [here](../../../3p_integration/llama-on-prem.md#setting-up-vllm-with-llama-3) to deploy vLLM on-prem.
 
 Note that in common scenario which overall throughput is important, we suggest you prioritize deploying as many model replicas as possible to reach higher overall throughput and request-per-second (RPS), comparing to deploy one model container among multiple GPUs for model parallelism. Additionally, as deploying multiple model replicas, there is a need for a higher level wrapper to handle the load balancing which here has been simulated in the benchmark scripts.
 For example, we have an instance from Azure that has 8xA100 80G GPUs, and we want to deploy the Meta Llama 3 70B instruct model, which is around 140GB with FP16. So for deployment we can do: