Danielbrdz/Barcenas-Llama3-8b-ORPO
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8K · Published: Apr 29, 2024 · License: llama3 · Architecture: Transformer

Danielbrdz/Barcenas-Llama3-8b-ORPO is an 8-billion-parameter language model based on Llama 3, fine-tuned from VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct. It was trained with ORPO (Odds Ratio Preference Optimization) on the reciperesearch/dolphin-sft-v0.1-preference dataset, which incorporates GPT-4-improved conversational data. The model is tuned for enhanced conversational ability, making it well suited to dialogue-focused applications.
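A minimal usage sketch, assuming the model is loaded locally through the Hugging Face transformers library and its standard Llama 3 chat template (the prompt, dtype, and generation settings below are illustrative, not recommendations from the model author):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Danielbrdz/Barcenas-Llama3-8b-ORPO"

# Load tokenizer and model; bfloat16 and device_map="auto" are assumptions
# for a single-GPU setup, not requirements of the model card.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a chat prompt with the model's built-in chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what ORPO fine-tuning does in one paragraph."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sample a response; generation parameters here are illustrative.
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```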


Popular Sampler Settings

The top three parameter combinations used by Featherless users for this model cover the following sampler settings (see the request sketch after this list for how they are typically passed):

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
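As a sketch of how these sampler settings might be supplied in practice, the example below assumes an OpenAI-compatible chat-completions endpoint; the base URL, API key, and all parameter values are placeholders rather than the actual popular configurations, and non-standard samplers such as top_k, repetition_penalty, and min_p are passed through extra_body:

```python
from openai import OpenAI

# Base URL and API key are placeholders; check the provider's documentation.
client = OpenAI(
    base_url="https://api.featherless.ai/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="Danielbrdz/Barcenas-Llama3-8b-ORPO",
    messages=[{"role": "user", "content": "Write a short friendly greeting."}],
    # Standard OpenAI-style sampler parameters (illustrative values).
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Samplers outside the OpenAI schema, forwarded in the request body.
    extra_body={
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```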