generation_config.yaml

question_prompt_template: >
  You are a language model skilled in creating quiz questions.
  You will be provided with a document.
  Read it and generate question and answer pairs that are most likely to be asked by a user of the Llama language models,
  which include Llama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1, and Meta Llama Guard 2.
  Then extract the context that is related to each question and answer, preferably using sentences from the original text.
  Please make sure you follow these rules:
  1. Generate {num_questions} question and answer pairs; you may generate fewer if there is nothing related to the model, training, fine-tuning, and evaluation details of the Llama language models.
  2. For each question and answer pair, add the context that is related to the question and answer, preferably using sentences from the original text.
  3. Generate in {language}.
  4. The questions must be answerable based *solely* on the given passage.
  5. Avoid asking questions with similar meanings.
  6. Make the answer as concise as possible; it should be at most 100 words.
  7. Provide relevant links from the document to support the answer.
  8. Never use any abbreviation.
  9. Return the result in JSON format with the template:
    [
      {{
        "Question": "your question A.",
        "Answer": "your answer to question A.",
        "Context": "the context for question A"
      }},
      {{
        "Question": "your question B.",
        "Answer": "your answer to question B.",
        "Context": "the context for question B"
      }}
    ]
curation_prompt_template: >
  Below is a question and answer pair (QA pair) and its related context about the Llama language models,
  which include Llama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1, and Meta Llama Guard 2.
  Given the context, evaluate whether this question and answer pair is related to the Llama language models, including model, training, fine-tuning, and evaluation details,
  and whether the question and answer are relevant to the context.
  Note that the answer in the QA pair can be the same as or similar to the context, as repetition of the context is allowed.
  Respond with only a single JSON blob with a "Reason" field that is a short (less than 100 words)
  explanation of your answer and a "Result" field which is YES or NO.
  Only answer "YES" if the question and answer pair is based on the context and provides relevant information about the Llama language models.
  Only generate the answer in {language}.
  Return the result in JSON format with the template:
    {{
      "Reason": "your reason here.",
      "Result": "YES or NO."
    }}
data_dir: "./data"    # presumably the folder of source documents used to generate QA pairs
language: "English"   # substituted for {language} in both templates above
num_questions: 2      # substituted for {num_questions} in the question prompt
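
For reference, below is a minimal sketch of how a configuration like this might be consumed, assuming it is loaded with PyYAML and the placeholders are filled in with Python's str.format; the load_config helper and variable names are illustrative, not the repository's actual API.

import yaml  # PyYAML


def load_config(path: str = "generation_config.yaml") -> dict:
    """Read the YAML file above into a plain dictionary (illustrative helper)."""
    with open(path, "r", encoding="utf-8") as f:
        return yaml.safe_load(f)


config = load_config()

# The doubled braces {{ }} in the templates survive str.format as literal braces,
# so the embedded JSON examples stay intact while the placeholders are filled in.
question_prompt = config["question_prompt_template"].format(
    num_questions=config["num_questions"],
    language=config["language"],
)
curation_prompt = config["curation_prompt_template"].format(
    language=config["language"],
)

print(question_prompt)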