allenai/open-instruct-self-instruct-7b
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Jun 7, 2023 · Architecture: Transformer

The allenai/open-instruct-self-instruct-7b is a 7-billion-parameter LLaMA-based model developed by AllenAI (the Allen Institute for AI), fine-tuned on the Self-Instruct dataset. The model is distributed as a weight diff: the fine-tuned weights must be recovered by applying the diff to the original LLaMA weights. It is designed for general instruction-following tasks and has been evaluated on benchmarks including MMLU, GSM, BBH, and Codex-Eval.
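Because only the diff is published, recovery amounts to adding the diff to the base-model weights, parameter by parameter. Below is a minimal sketch of that idea using plain Python lists as stand-ins for real weight tensors; the function name `recover_weights` is hypothetical, and in practice you would use the recovery tooling provided with the model release rather than this illustration.

```python
def recover_weights(base_state_dict, diff_state_dict):
    """Recover fine-tuned weights by adding the published diff to the
    original base-model weights, key by key.
    Hypothetical helper for illustration only; real checkpoints are
    tensors, not lists."""
    return {
        name: [b + d for b, d in zip(base, diff_state_dict[name])]
        for name, base in base_state_dict.items()
    }

# Tiny illustration with stand-in values instead of real checkpoints.
base = {"layer.weight": [1.0, 2.0]}
diff = {"layer.weight": [0.5, -0.5]}
print(recover_weights(base, diff)["layer.weight"])  # [1.5, 1.5]
```

The same elementwise addition is what an actual recovery script performs over every tensor in the checkpoint, after verifying that the base and diff state dicts share identical keys and shapes.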
