| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Suraj | acf0a74297 | Updates for commit review | 1 year ago |
| varunfb | 811e09d022 | Merge pull request #4 from meta-llama/LG3notebook | 1 year ago |
| Thomas Robinson | f6ad82a976 | Correct model card link in llama_guard_customization_via_prompting_and_fine_tuning.ipynb | 1 year ago |
| Thomas Robinson | c13f636919 | Updated and renamed llama_guard_customization_via_prompting_and_fine_tuning.ipynb based on feedback in PR | 1 year ago |
| Suraj | a46d0422cf | Add note about LG3 finetuning notebook | 1 year ago |
| Matthias Reso | 00e0b0be6c | Apply suggestions from code review | 1 year ago |
| Matthias Reso | 190d543b53 | Add fp8 references | 1 year ago |
| Matthias Reso | c167945448 | remove 405B ft doc | 1 year ago |
| Matthias Reso | b0b4e16aec | Update docs/multi_gpu.md | 1 year ago |
| Suraj | a81524c27c | spellcheck appeasement | 1 year ago |
| Suraj | 7296833d43 | Add codeshield to requirements | 1 year ago |
| Suraj | 7cac948093 | Update special tokens table and URL | 1 year ago |
| Suraj | 88167d59ca | Merge branch 'main' of https://github.com/meta-llama/llama-recipes-alpha into main | 1 year ago |
| Suraj | a9e8f810e7 | Merge branch 'main' of https://github.com/meta-llama/llama-recipes-alpha into hf_model_id | 1 year ago |
| Matthias Reso | e2f77dbc21 | fix quant config | 1 year ago |
| Matthias Reso | 6ef9a78458 | Fix issues with quantization_config == None | 1 year ago |
| Matthias Reso | b319a9fb8c | Fix lint issue | 1 year ago |
| Matthias Reso | a3fd369127 | Ref from infernce recipes to vllm for 405B | 1 year ago |
| Matthias Reso | a8f2267324 | Added multi node doc to multigpu_finetuning.md | 1 year ago |
| Matthias Reso | afb3b75892 | Add 405B + QLoRA + FSDP to multi_gpu.md doc | 1 year ago |
| Matthias Reso | 939c88fb04 | Add 405B + QLoRA ref to LLM finetung | 1 year ago |
| Matthias Reso | d2fd9c163a | Added doc for multi-node vllm inference | 1 year ago |
| Thomas Robinson | 1a183c0a5e | Introduce Llama guard customization notebook and associated dataset loader example | 1 year ago |
| Cyrus Nikolaidis | 301e51a340 | Merge branch 'main' of github.com:meta-llama/llama-recipes-alpha | 1 year ago |
| Cyrus Nikolaidis | 883def17f0 | Prompt Guard Inference for long strings | 1 year ago |
| Suraj Subramanian | 0d00616b34 | Move MediaGen notebook to octoai folder (#601) | 1 year ago |
| Suraj Subramanian | 5a9858f0f0 | Update README.md to remove mediagen reference | 1 year ago |
| Suraj Subramanian | 5a878654ec | Move MediaGen notebook to octoai folder | 1 year ago |
| Suraj | 4be3eb0d17 | Updates HF model_ids and readmes for 3.1 | 1 year ago |
| Matthias Reso | c9ae014459 | Enable pipeline parallelism through use of AsyncLLMEngine in vllm inferecen + enable use of lora adapter | 1 year ago |