Magpie-Align/Llama-3-8B-Magpie-Air-SFT-300K-v0.1
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Jan 24, 2025 · License: llama3 · Architecture: Transformer · Warm: 0.0K
Magpie-Align/Llama-3-8B-Magpie-Air-SFT-300K-v0.1 is an 8 billion parameter Llama 3 base model fine-tuned by Magpie-Align using the Magpie-Air-300K-Filtered dataset, which was synthesized from Llama-3-Instruct. This model achieves performance comparable to the official Llama-3-8B-Instruct model with supervised fine-tuning (SFT) alone, demonstrating strong capabilities on alignment benchmarks like AlpacaEval and Arena Hard. It is optimized for instruction following and general conversational tasks, leveraging a self-synthesis method for high-quality alignment data.
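Since the model is a Llama 3 fine-tune aimed at instruction following, prompts are typically formatted with the Llama-3-Instruct chat template. Below is a minimal sketch of that template assembled by hand; it assumes this fine-tune keeps the official Llama-3-Instruct special tokens (in practice you would use `tokenizer.apply_chat_template` from Hugging Face `transformers` to be safe).

```python
# Sketch: building a single-turn Llama-3-style chat prompt by hand.
# Assumption: this SFT model uses the standard Llama-3-Instruct template
# (<|begin_of_text|>, <|start_header_id|>, <|eot_id|> tokens).

def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama-3 chat format,
    ending at the point where the assistant's reply is generated."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

The prompt deliberately stops after the assistant header so the model continues from there; generation is usually stopped on the `<|eot_id|>` token.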
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model.
temperature: –
top_p: –
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
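These sampler settings map directly onto request parameters of an OpenAI-compatible chat completions API, which Featherless exposes. A minimal sketch of such a request payload follows; the numeric values are illustrative placeholders (the popular settings above did not load), and exact parameter support should be checked against the provider's API documentation.

```python
# Sketch: an OpenAI-compatible chat completions payload showing where the
# sampler settings listed above would go. Values are placeholders, not the
# actual "popular" settings, which were unavailable.
import json

payload = {
    "model": "Magpie-Align/Llama-3-8B-Magpie-Air-SFT-300K-v0.1",
    "messages": [
        {"role": "user", "content": "Explain instruction tuning in one paragraph."}
    ],
    "temperature": 0.7,        # placeholder
    "top_p": 0.9,              # placeholder
    "frequency_penalty": 0.0,  # placeholder
    "presence_penalty": 0.0,   # placeholder
    "max_tokens": 512,
}
print(json.dumps(payload, indent=2))
```

The same dictionary can be passed as the JSON body of a POST to the provider's `/v1/chat/completions` endpoint (or via the `openai` Python client with the base URL pointed at the provider).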