umd-zhou-lab/recycled-alpaca-7b-v1.0
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · License: llama2 · Architecture: Transformer · Open weights

umd-zhou-lab/recycled-alpaca-7b-v1.0 is a 7-billion-parameter auto-regressive language model developed by the UMD Tianyi Zhou Lab. It is fine-tuned from Llama-2-7b on the 'recycled alpaca data V1' dataset and shows significant performance improvements over its base model on benchmarks such as AlpacaEval and MMLU. The model is primarily intended for research on large language models and chatbots, in particular for studying the impact of data recycling on instruction tuning.
