Magpie-Align/Llama-3-8B-Magpie-Align-v0.1
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 8K · Published: Jun 29, 2024 · License: llama3 · Architecture: Transformer

Magpie-Align/Llama-3-8B-Magpie-Align-v0.1 is an 8-billion-parameter language model from Magpie-Align, built on Meta's Llama-3-8B. It is an aligned variant of the base model, fine-tuned in a two-stage pipeline: Supervised Fine-Tuning (SFT) on the Magpie-Pro-MT-300K-v0.1 dataset, followed by Direct Preference Optimization (DPO) on the princeton-nlp/llama3-ultrafeedback dataset. It performs strongly on alignment benchmarks such as Alpaca Eval 2, Arena Hard, and WildBench, often surpassing the official Llama-3-8B-Instruct model.
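
For reference, here is a minimal usage sketch with the Hugging Face transformers library. The chat-template call follows the standard Llama-3 instruct format; the prompt and generation settings are illustrative assumptions, not recommendations from the model authors.

```python
# Minimal inference sketch for Magpie-Align/Llama-3-8B-Magpie-Align-v0.1.
# Requires: pip install transformers torch accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Magpie-Align/Llama-3-8B-Magpie-Align-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 8B model within ~16 GB of VRAM
    device_map="auto",
)

# The tokenizer ships with the Llama-3 chat template, so apply_chat_template
# formats the conversation with the expected special tokens.
messages = [{"role": "user", "content": "Explain DPO in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,  # illustrative sampling values, not a tuned config
    top_p=0.9,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```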


Popular Sampler Settings

Featherless tracks the three parameter combinations most used with this model; each configuration sets the following sampler parameters (a request sketch showing where each parameter plugs in follows the list).

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
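
As a sketch, these parameters map onto a chat-completion request like the one below. The base URL, API key placeholder, and every value are assumptions for illustration, not one of the top-3 user configs. Parameters outside the OpenAI schema (top_k, repetition_penalty, min_p) are passed through extra_body.

```python
# Hypothetical request against an OpenAI-compatible endpoint; all values are
# placeholders, not a recommended configuration.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="Magpie-Align/Llama-3-8B-Magpie-Align-v0.1",
    messages=[{"role": "user", "content": "Write a haiku about alignment."}],
    temperature=0.8,          # illustrative values throughout
    top_p=0.95,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={
        # Samplers not in the OpenAI schema are forwarded as extra JSON fields.
        "top_k": 40,
        "repetition_penalty": 1.05,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```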