umd-zhou-lab/recycled-wizardlm-7b-v2.0
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4K · Published: Oct 22, 2023 · License: llama2 · Architecture: Transformer

umd-zhou-lab/recycled-wizardlm-7b-v2.0 is a 7-billion-parameter auto-regressive language model developed by the UMD Tianyi Zhou Lab and fine-tuned from Llama-2-7b. The model is trained with Reflection-Tuning, an approach that recycles the WizardLM instruction data to improve instruction-tuning quality. It achieves higher scores on AlpacaEval and Open LLM Leaderboard tasks than its base model, making it suitable for research on large language models and chatbots.
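Since the checkpoint is a Llama-2-based causal LM, it can be loaded with the Hugging Face `transformers` library. The sketch below assumes the model is hosted on the Hugging Face Hub under the id above and that it follows a Vicuna-style chat template (a common choice for WizardLM-derived models; the exact template for this checkpoint is an assumption, not confirmed by this page):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "umd-zhou-lab/recycled-wizardlm-7b-v2.0"


def build_prompt(instruction: str) -> str:
    # Vicuna-style single-turn template; assumed, not verified for this
    # specific checkpoint -- adjust if the model card specifies otherwise.
    return (
        "A chat between a curious user and an artificial intelligence "
        "assistant. The assistant gives helpful, detailed, and polite "
        f"answers to the user's questions. USER: {instruction} ASSISTANT:"
    )


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    # Loading a 7B model requires substantial GPU memory; device_map="auto"
    # spreads weights across available devices (requires `accelerate`).
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

A call such as `generate("Explain instruction tuning in one sentence.")` would then return the model's completion as a string.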
