umd-zhou-lab/claude2-alpaca-7B
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Sep 18, 2023 · License: llama2 · Architecture: Transformer · Open Weights

umd-zhou-lab/claude2-alpaca-7B is a 7-billion-parameter auto-regressive language model from the UMD Tianyi Zhou Lab. It is fine-tuned from Llama-2-7b on instruction-tuning data distilled from Claude 2 and is intended for research on large language models and chatbots. The model achieves higher average benchmark scores than Llama-2-7b-chat, notably on ARC and HellaSwag, while retaining the 4096-token context length.
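
The sketch below shows one way to load and prompt the model with the Hugging Face transformers library. The model identifier comes from this page; the Alpaca-style prompt template and the generation settings are assumptions for illustration, not part of the official documentation.

```python
# Minimal sketch: loading claude2-alpaca-7B with Hugging Face transformers.
# The Alpaca-style prompt template below is an assumption; check the model card
# for the exact format used during fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "umd-zhou-lab/claude2-alpaca-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a single ~16 GB GPU
    device_map="auto",
)

# Assumed Alpaca-style instruction prompt.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what instruction tuning is in two sentences.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```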
