| Author | Commit | Message | Date |
|---|---|---|---|
| Matthias Reso | 8b0a233c1a | Use new chat format in custom dataset | 1 year ago |
| Matthias Reso | 83fae41195 | Add test for chat completion formatting | 1 year ago |
| Matthias Reso | 6d9d48d619 | Use apply_chat_template instead of custom functions | 1 year ago |
| Matthias Reso | 5efea160a2 | Adapt test_finetuning to new model | 1 year ago |
| Matthias Reso | 739483f262 | Adjust test_grammar_datasets to stable sort | 1 year ago |
| Matthias Reso | b96e435cda | Adjust test_samsum_dataset to second model | 1 year ago |
| Matthias Reso | fac41298b0 | Adapt test_custom_dataset to new model | 1 year ago |
| Matthias Reso | 960014a3bb | Fix test_custom_dataset by introducing a stable sort algorithm | 1 year ago |
| Matthias Reso | b5583b31d5 | Adapt test_grammar_dataset to new model | 1 year ago |
| Matthias Reso | 17a6d16289 | Test batching for both llama versions | 1 year ago |
| Matthias Reso | a414ca6a57 | Update chat format for llama3 | 1 year ago |
| Matthias Reso | 113ea18bf1 | Replace LlamaTokenizer with AutoTokenizer | 1 year ago |
| Hamid Shojanazeri | aaa9e2c863 | Adding a feature that will stop the training/eval process after reaching some max_steps (#428) | 1 year ago |
| Kai Wu | e6f69f84ad | add max_steps_reached to reduce redundancy | 1 year ago |
| Kai Wu | 362cda0fa6 | fixing test_gradient_accumulation and test_save_to_json | 1 year ago |
| Kai Wu | fa0a389f74 | add max_step feature for training and eval | 1 year ago |
| Hamid Shojanazeri | 37c8f72211 | Update location and name of llm.py example notebook (#417) | 1 year ago |
| Thomas Robinson | 79266217ef | Update location and name of llm.py example notebook | 1 year ago |
| Hamid Shojanazeri | f7aa02af9f | only save training params on rank 0 (#415) | 1 year ago |
| jpgard | 6954b16b3b | only save training params on rank 0 | 1 year ago |
| Hamid Shojanazeri | 64e189914f | update due to peft new release (#407) | 1 year ago |
| Hamid Shojanazeri | 11f51db28c | adding the kbit prep in the code | 1 year ago |
| Hamid Shojanazeri | f058ff6ccd | update due to peft new release | 1 year ago |
| Hamid Shojanazeri | 6a7478a6aa | Reorg inference throughput folder structure (#404) | 1 year ago |
| Chester Hu | 367e4869ac | Reorg inference throughput folder structure | 1 year ago |
| Hamid Shojanazeri | d6eb83f6c5 | Add llm class so that externally-hosted models can be called (#398) | 1 year ago |
| Thomas Robinson | 0346d0d5b8 | Add documentation and examples | 1 year ago |
| Hamid Shojanazeri | 43a1e5cdb0 | Fix dead links after directory structure refactor (#397) | 1 year ago |
| Suraj Subramanian | e2a35420c0 | Remove octoai link that is 401-ing | 1 year ago |
| Suraj Subramanian | 12602f32e2 | Merge branch 'main' into subramen-patch-deadlinks | 1 year ago |