chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: May 26, 2024 · License: llama3 · Architecture: Transformer

chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO is an 8-billion-parameter model based on Llama-3-Instruct, developed by chujiezheng, that applies a weight-extrapolation technique (ExPO) to strengthen alignment with human preferences. It is built from Llama-3-Instruct-8B-SimPO and Meta-Llama-3-8B-Instruct, and this extrapolation yields improved win rates on benchmarks such as AlpacaEval 2.0, making it well suited to conversational AI tasks.
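As a rough intuition for ExPO's weight extrapolation: instead of interpolating between a base (SFT) model and its preference-aligned counterpart, it extrapolates past the aligned model along the same direction. The sketch below illustrates the idea on toy NumPy tensors; the function name and `alpha` value are illustrative, not the model author's exact code or hyperparameters.

```python
import numpy as np

def expo_extrapolate(base_weights, aligned_weights, alpha=0.5):
    """Illustrative ExPO-style extrapolation:
    theta_expo = theta_aligned + alpha * (theta_aligned - theta_base),
    i.e. continue along the base -> aligned direction past the aligned model.
    """
    return {
        name: aligned_weights[name]
        + alpha * (aligned_weights[name] - base_weights[name])
        for name in aligned_weights
    }

# Toy example with a single "layer" tensor.
base = {"w": np.array([1.0, 2.0])}
aligned = {"w": np.array([2.0, 4.0])}
merged = expo_extrapolate(base, aligned, alpha=0.5)
# merged["w"] = [2, 4] + 0.5 * [1, 2] = [2.5, 5.0]
```

With `alpha=0`, this returns the aligned model unchanged; larger values push further in the alignment direction.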


Popular Sampler Settings

The three parameter combinations most used by Featherless users for this model cover the following sampler settings:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
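To apply such settings, these sampler parameters are typically passed in the request body of an OpenAI-compatible chat completions call. The payload below is a sketch: the specific values are illustrative placeholders, not the actual user configurations, and you should check your provider's API docs for which fields (e.g. `top_k`, `min_p`) are supported.

```python
# Illustrative request payload for an OpenAI-compatible chat API.
# All sampler values here are example placeholders.
payload = {
    "model": "chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,        # randomness of sampling
    "top_p": 0.9,              # nucleus sampling cutoff
    "top_k": 40,               # restrict to the k most likely tokens
    "frequency_penalty": 0.0,  # penalize frequent tokens
    "presence_penalty": 0.0,   # penalize already-present tokens
    "repetition_penalty": 1.05,# discourage verbatim repetition
    "min_p": 0.05,             # drop tokens below this relative probability
}
```

This dictionary would then be sent as the JSON body of a POST request to the provider's chat completions endpoint.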