## Run Llama with H2O for long context inference

The following example runs inference with Llama-2-7b on the XSUM summarization task. The `--enable_h2o_generation` flag enables the H2O algorithm, which keeps only the heavy-hitter and local KV pairs. Use `--num_heavy_hitter_tokens` to set the number of heavy-hitter KV pairs and `--num_window_length` to set the total KV cache size; the number of local KV pairs equals `num_window_length - num_heavy_hitter_tokens`. Additionally, `--enable_position_rolling` enables position rolling, which assigns each token the position it occupies in the KV cache rather than its position in the original sequence. Enabling position rolling is important when the sequence length exceeds the pretrained context window, e.g., 4K in Llama-2. (Minimal sketches of both mechanisms follow the command below.)

```bash
# 5-shot inference on 1000 samples from the XSUM dataset; the other
# --input-path option is data/summarization/cnn_dailymail.jsonl
python generation.py \
    --input-path data/summarization/xsum.jsonl \
    --output-path summarization_output/xsum_h2o.jsonl \
    --model-name meta-llama/Llama-2-7b-hf \
    --enable_h2o_generation
```
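
To make the eviction policy concrete, here is a minimal sketch of the H2O selection step in PyTorch. It is an illustrative assumption, not the repository's actual implementation (which lives in `utils_cache.py` and `utils_llama.py`); the function `h2o_keep_mask` and its toy inputs are hypothetical.

```python
# Hypothetical sketch of H2O KV eviction: keep the KV pairs with the
# highest accumulated attention ("heavy hitters") plus the most recent
# ones (the local window). Not the repo's actual code.
import torch

def h2o_keep_mask(attn_scores: torch.Tensor,
                  num_window_length: int,
                  num_heavy_hitter_tokens: int) -> torch.Tensor:
    """attn_scores: (num_queries, seq_len) attention weights for one head.
    Returns a (seq_len,) boolean mask marking the KV pairs to keep."""
    assert 0 < num_heavy_hitter_tokens < num_window_length
    seq_len = attn_scores.shape[-1]
    if seq_len <= num_window_length:
        return torch.ones(seq_len, dtype=torch.bool)

    num_local = num_window_length - num_heavy_hitter_tokens
    keep = torch.zeros(seq_len, dtype=torch.bool)
    keep[-num_local:] = True                 # local (most recent) KV pairs

    acc = attn_scores.sum(dim=0)             # accumulated attention per key
    acc[-num_local:] = float("-inf")         # local window is already kept
    heavy = acc.topk(num_heavy_hitter_tokens).indices
    keep[heavy] = True                       # heavy-hitter KV pairs
    return keep

# Toy example: 8 cached KV pairs, budget of 4 (2 heavy hitters + 2 local).
scores = torch.rand(3, 8).softmax(dim=-1)
print(h2o_keep_mask(scores, num_window_length=4, num_heavy_hitter_tokens=2))
```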

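Position rolling can be sketched the same way. The helper below is hypothetical, not this repo's code: the point is that positions are assigned by slot in the (evicted) KV cache rather than by each token's original index, so they never exceed the cache size and stay inside the pretrained context window.

```python
# Hedged sketch of position rolling (hypothetical helper, not the repo's
# code): position ids are derived from the current cache occupancy, which
# eviction keeps bounded by num_window_length, instead of the token's
# original index in the (possibly much longer) input sequence.
import torch

def rolled_position_ids(num_cached: int, num_new: int) -> torch.Tensor:
    """Positions for `num_new` incoming tokens appended to a KV cache that
    currently holds `num_cached` entries (bounded by num_window_length)."""
    return torch.arange(num_cached, num_cached + num_new)

# A token whose original index is 50_000 would overflow a 4K context window;
# with rolling it is positioned right after the cached KV pairs instead.
print(rolled_position_ids(num_cached=4092, num_new=4))
# tensor([4092, 4093, 4094, 4095])
```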