Magpie-Align/Llama-3.1-8B-Magpie-Align-SFT-v0.1
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Jul 23, 2024 · License: llama3.1 · Architecture: Transformer

Magpie-Align/Llama-3.1-8B-Magpie-Align-SFT-v0.1 is an 8-billion-parameter instruction-tuned language model developed by Magpie-Align, based on Meta's Llama-3.1-8B architecture with a 32,768-token context length. The model is fine-tuned with the Magpie self-synthesis method, which generates high-quality instruction data by prompting aligned LLMs. Through supervised fine-tuning (SFT) alone, it achieves performance comparable to the official Llama-3.1-8B-Instruct model, performing strongly on alignment benchmarks such as AlpacaEval and Arena Hard.
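Since the model follows the Llama 3.1 architecture, prompting it typically uses the documented Llama 3.1 chat format. The sketch below is a minimal, hedged illustration of that format built by hand (the helper name `format_llama31_prompt` is our own; in practice you would normally let `tokenizer.apply_chat_template` from Hugging Face transformers produce this string for you):

```python
# Minimal sketch of the Llama 3.1 chat prompt format, assuming the
# standard special tokens documented for Llama 3 / 3.1 models.
# In real use, prefer tokenizer.apply_chat_template from transformers.

def format_llama31_prompt(system: str, user: str) -> str:
    """Build a single-turn Llama 3.1 chat prompt string by hand."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The assistant header is left open so generation continues from here.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


if __name__ == "__main__":
    prompt = format_llama31_prompt(
        system="You are a helpful assistant.",
        user="Summarize the Magpie alignment method in one sentence.",
    )
    print(prompt)
```

The resulting string would then be tokenized and sent to the model (e.g. via transformers or an inference endpoint); the open assistant header at the end is what cues the model to generate its reply.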
