@@ -235,6 +235,7 @@ The above tables could be better summarized by this wonderful visualization from
 - [BayLing](https://github.com/ictnlp/BayLing) - an English/Chinese LLM equipped with advanced language alignment, showing superior capability in English/Chinese generation, instruction following and multi-turn interaction.
 - [UltraLM](https://github.com/thunlp/UltraChat) - large-scale, informative, and diverse multi-round chat models.
 - [Guanaco](https://github.com/artidoro/qlora) - a QLoRA-tuned LLaMA.
+ - [ChiMed-GPT](https://github.com/synlp/ChiMed-GPT) - a Chinese medical large language model.
 - [BLOOM](https://huggingface.co/bigscience/bloom) - BigScience Large Open-science Open-access Multilingual Language Model ([BLOOM-LoRA](https://github.com/linhduongtuan/BLOOM-LORA))
 - [BLOOMZ&mT0](https://huggingface.co/bigscience/bloomz) - a family of models capable of following human instructions in dozens of languages zero-shot.
 - [Phoenix](https://github.com/FreedomIntelligence/LLMZoo)