StudentLLM/Alpagasus-2-13b-QLoRA-merged
Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4K · Published: Sep 2, 2023 · License: other · Architecture: Transformer

StudentLLM/Alpagasus-2-13b-QLoRA-merged is a 13-billion-parameter auto-regressive language model developed by Yunsang Yoo and Hyunwoo Ko. It is an unofficial QLoRA fine-tune of Meta's Llama-2-13b-hf that follows the AlpaGasus approach of filtering instruction data so the model can be trained on fewer, higher-quality examples. It is instruction-tuned on a dataset filtered with GPT-3.5-turbo and is designed for general English-language tasks, achieving an average score of 59.34 on the OpenLLM Leaderboard benchmarks.
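Since the model is instruction-tuned, prompts should follow an instruction template. A minimal sketch, assuming the standard Alpaca-style format that AlpaGasus fine-tunes typically use (verify against the model card before relying on it):

```python
# Sketch: formatting a user instruction for the model.
# The template below is an assumption -- AlpaGasus fine-tunes generally follow
# the Alpaca instruction format, but the exact wording may differ.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a plain instruction in the assumed Alpaca-style template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("Summarize the AlpaGasus training approach in one sentence.")
print(prompt)
```

The model then generates text after the `### Response:` marker; generation is usually stopped at the next `### Instruction:` token sequence, if any.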


Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model. Each config sets the following sampler parameters:

- temperature – scales the sharpness of the token probability distribution; lower values are more deterministic
- top_p – nucleus sampling: sample only from the smallest token set whose cumulative probability exceeds p
- top_k – restrict sampling to the k most likely tokens
- frequency_penalty – penalizes tokens in proportion to how often they have already appeared
- presence_penalty – applies a flat penalty to any token that has appeared at least once
- repetition_penalty – multiplicative penalty on previously generated tokens
- min_p – discard tokens whose probability is below min_p times the most likely token's probability
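The parameters above map directly onto the fields of a completion request. A minimal sketch, assuming Featherless exposes an OpenAI-compatible completions endpoint; the values here are illustrative placeholders, not the actual "top 3" configs:

```python
# Sketch: a completion request carrying the sampler settings listed above.
# Endpoint URL and parameter values are assumptions for illustration only.

payload = {
    "model": "StudentLLM/Alpagasus-2-13b-QLoRA-merged",
    "prompt": "### Instruction:\nName three uses of a paperclip.\n\n### Response:\n",
    "max_tokens": 256,
    # Sampler settings -- hypothetical values, not a recommended config.
    "temperature": 0.7,         # sharpness of the token distribution
    "top_p": 0.9,               # nucleus sampling cutoff
    "top_k": 40,                # keep only the 40 most likely tokens
    "frequency_penalty": 0.0,   # per-occurrence repetition penalty
    "presence_penalty": 0.0,    # flat penalty for any repeated token
    "repetition_penalty": 1.1,  # multiplicative repeat penalty
    "min_p": 0.05,              # drop tokens below 5% of the top token's probability
}

# Sending it requires an API key; sketched here as comments:
# import requests
# r = requests.post("https://api.featherless.ai/v1/completions",  # URL is an assumption
#                   headers={"Authorization": "Bearer <API_KEY>"},
#                   json=payload, timeout=60)
# print(r.json()["choices"][0]["text"])

print(sorted(payload))
```

Note that `repetition_penalty` and `min_p` are extensions beyond the core OpenAI parameter set; whether a given endpoint honors them depends on the serving stack.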