| Matthias Reso | 43cb6a2db4 | Remove check for nightlies for low_cpu_fsdp and bump torch version to 2.2 instead | 1 year ago |
| Matthias Reso | cad284c66f | Replace new model url | 1 year ago |
| Matthias Reso | 8b0a233c1a | Use new chat format in custom dataset | 1 year ago |
| Matthias Reso | 83fae41195 | Add test for chat completion formatting | 1 year ago |
| Matthias Reso | 6d9d48d619 | Use apply_chat_template instead of custom functions | 1 year ago |
| Matthias Reso | 5efea160a2 | Adapt test_finetuning to new model | 1 year ago |
| Matthias Reso | 739483f262 | Adjust test_grammar_datasets to stable sort | 1 year ago |
| Matthias Reso | b96e435cda | Adjust test_samsum_dataset to second model | 1 year ago |
| Matthias Reso | fac41298b0 | Adapt test_custom_dataset to new model | 1 year ago |
| Matthias Reso | 960014a3bb | Fix test_custom_dataset by introducing a stable sort algorithm | 1 year ago |
| Matthias Reso | b5583b31d5 | Adapt test_grammar_dataset to new model | 1 year ago |
| Matthias Reso | 17a6d16289 | Test batching for both llama versions | 1 year ago |
| Matthias Reso | a414ca6a57 | Update chat format for llama3 | 1 year ago |
| Matthias Reso | 113ea18bf1 | Replace LlamaTokenizer with AutoTokenizer | 1 year ago |
| Hamid Shojanazeri | aaa9e2c863 | Adding a feature that will stop the training/eval process after reaching some max_steps (#428) | 1 year ago |
| Kai Wu | e6f69f84ad | add max_steps_reached to reduce redundancy | 1 year ago |
| Kai Wu | 362cda0fa6 | fixing test_gradient_accumulation and test_save_to_json | 1 year ago |
| Kai Wu | fa0a389f74 | add max_step feature for training and eval | 1 year ago |
| Hamid Shojanazeri | 37c8f72211 | Update location and name of llm.py example notebook (#417) | 1 year ago |
| Thomas Robinson | 79266217ef | Update location and name of llm.py example notebook | 1 year ago |
| Hamid Shojanazeri | f7aa02af9f | only save training params on rank 0 (#415) | 1 year ago |
| jpgard | 6954b16b3b | only save training params on rank 0 | 1 year ago |
| Hamid Shojanazeri | 64e189914f | update due to peft new release (#407) | 1 year ago |
| Hamid Shojanazeri | 11f51db28c | adding the kbit prep in the code | 1 year ago |
| Hamid Shojanazeri | f058ff6ccd | update due to peft new release | 1 year ago |
| Hamid Shojanazeri | 6a7478a6aa | Reorg inference throughput folder structure (#404) | 1 year ago |
| Chester Hu | 367e4869ac | Reorg inference throughput folder structure | 1 year ago |
| Hamid Shojanazeri | d6eb83f6c5 | Add llm class so that externally-hosted models can be called (#398) | 1 year ago |
| Thomas Robinson | 0346d0d5b8 | Add documentation and examples | 1 year ago |
| Hamid Shojanazeri | 43a1e5cdb0 | Fix dead links after directory structure refactor (#397) | 1 year ago |