Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-300K-v0.1
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 8k · License: llama3 · Architecture: Transformer

Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-300K-v0.1 is an 8 billion parameter Llama-3-based language model developed by Magpie-Align, fine-tuned on 300K high-quality instruction instances generated by the Magpie self-synthesis method. This model achieves performance comparable to the official Llama-3-8B-Instruct model through supervised fine-tuning (SFT) alone, demonstrating strong capabilities on alignment benchmarks such as AlpacaEval and ArenaHard. It is particularly effective for tasks requiring robust instruction following and general conversational abilities.
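As a minimal usage sketch, the checkpoint can be loaded with the Hugging Face `transformers` library. The `build_prompt` helper below is a hypothetical illustration that applies the standard Llama-3 instruct chat format; this SFT checkpoint may expect a different template, and the generation settings are assumptions rather than recommendations from the model card.

```python
# Sketch: prompting the model via Hugging Face transformers (assumptions noted below).

def build_prompt(user_message: str) -> str:
    """Wrap a single user turn in the standard Llama-3 instruct chat format.

    Hypothetical helper: the fine-tuned checkpoint may use a different template.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

if __name__ == "__main__":
    # Heavy: downloads the 8B weights on first run.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-300K-v0.1"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    inputs = tokenizer(
        build_prompt("Explain instruction tuning in one sentence."),
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens.
    print(tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:],
        skip_special_tokens=True,
    ))
```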
