princeton-nlp/Llama-3-Base-8B-SFT-ORPO
- Task: Text generation
- Concurrency cost: 1
- Model size: 8B
- Quantization: FP8
- Context length: 8K
- Published: Oct 5, 2024
- Architecture: Transformer
- Status: Warm
princeton-nlp/Llama-3-Base-8B-SFT-ORPO is an 8-billion-parameter language model built on the Llama 3 architecture and released by princeton-nlp. Starting from a supervised fine-tuned (SFT) checkpoint, it is further trained with ORPO (Odds Ratio Preference Optimization), one of the preference optimization baselines evaluated in the SimPO paper. Because ORPO derives its preference signal from the policy model alone, it needs no frozen reference model during training, which is what makes it a reference-free approach to alignment.
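For orientation, the ORPO objective (from Hong et al., 2024, "ORPO: Monolithic Preference Optimization without Reference Model") augments the standard SFT loss with an odds-ratio term over chosen/rejected response pairs. The formulas below follow that paper's notation and are included as a sketch, not taken from this page:

$$\mathcal{L}_{\mathrm{ORPO}} = \mathbb{E}_{(x,\, y_w,\, y_l)}\big[\,\mathcal{L}_{\mathrm{SFT}} + \lambda \cdot \mathcal{L}_{\mathrm{OR}}\,\big],$$

$$\mathcal{L}_{\mathrm{OR}} = -\log \sigma\!\left(\log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)}\right), \qquad \mathrm{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}.$$

A minimal local-inference sketch with Hugging Face transformers follows. It assumes a machine with enough GPU memory for an 8B model in bf16 (the FP8 quantization listed above describes the hosted deployment, not the published weights); the prompt and sampler values are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "princeton-nlp/Llama-3-Base-8B-SFT-ORPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 8B model within a single large GPU
    device_map="auto",
)

prompt = "Explain preference optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,  # illustrative values, not the user statistics below
    top_p=0.9,
)
# Strip the prompt tokens and decode only the completion.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```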
Popular Sampler Settings
The three most popular sampler parameter combinations used by Featherless users for this model (values unavailable below); a sketch of passing such settings through an API client follows the table.
| Parameter | Value |
|---|---|
| temperature | – |
| top_p | – |
| top_k | – |
| frequency_penalty | – |
| presence_penalty | – |
| repetition_penalty | – |
| min_p | – |
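The sketch below shows one common way such sampler settings are passed to an OpenAI-compatible completions endpoint. The base URL, the pass-through of non-standard parameters via `extra_body`, and every numeric value are assumptions for illustration; they are not the (unpublished) user configurations above.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed Featherless endpoint
    api_key="YOUR_API_KEY",                    # placeholder credential
)

response = client.completions.create(
    model="princeton-nlp/Llama-3-Base-8B-SFT-ORPO",
    prompt="Write a haiku about preference optimization.",
    max_tokens=64,
    # Standard OpenAI sampler knobs; values here are illustrative only.
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-OpenAI parameters such as top_k, min_p, and repetition_penalty are
    # often accepted by OpenAI-compatible servers via extra_body (an assumption
    # here, not confirmed by this page).
    extra_body={"top_k": 40, "min_p": 0.05, "repetition_penalty": 1.1},
)
print(response.choices[0].text)
```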