| File | Commit | Message | Last updated |
|---|---|---|---|
| __init__.py | 207d2f80e9 | Make code-llama and hf-tgi inference runnable as module | 2 years ago |
| chat_utils.py | 6d9d48d619 | Use apply_chat_template instead of custom functions | 1 year ago |
| checkpoint_converter_fsdp_hf.py | ce9501f22c | remove relative imports | 2 years ago |
| llm.py | a404c9249c | Notebook to demonstrate using llama and llama-guard together using OctoAI | 1 year ago |
| model_utils.py | d51d2cce9c | adding sdpa for flash attn | 1 year ago |
| prompt_format_utils.py | bcdb5b31fe | Fixing quantization config. Removing prints | 1 year ago |
| safety_utils.py | f63ba19827 | Fixing tokenizer used for llama 3. Changing quantization configs on safety_utils. | 1 year ago |