umd-zhou-lab/recycled-alpaca-7b-v2.0
Text Generation · Model Size: 7B · Quant: FP8 · Ctx Length: 4K · Concurrency Cost: 1 · Published: Oct 22, 2023 · License: llama2 · Architecture: Transformer · Open Weights

umd-zhou-lab/recycled-alpaca-7b-v2.0 is a 7-billion-parameter auto-regressive language model developed by the UMD Tianyi Zhou Lab. Fine-tuned from Llama-2-7b on the lab's "recycled Alpaca data V2", it improves instruction-following performance and shows gains on benchmarks such as AlpacaEval, ARC, HellaSwag, MMLU, and TruthfulQA, making it suitable for research on large language models and for chatbot development.
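For reference, below is a minimal sketch of loading the model with the Hugging Face transformers library and generating a response. It assumes the weights are available under this identifier and that the model follows the standard Alpaca-style instruction prompt template; both are assumptions, so check the model card for the exact prompt format.

```python
# Minimal sketch: load the model and generate a reply to an instruction.
# Assumes (1) weights are hosted under this identifier and
# (2) the standard Alpaca prompt template is the expected input format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "umd-zhou-lab/recycled-alpaca-7b-v2.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 7B model in half precision fits on a single modern GPU
    device_map="auto",
)

# Alpaca-style instruction template (assumed).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what instruction tuning is in one sentence.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```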
