Model Overview
This model, grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge, is an 8-billion-parameter instruction-tuned language model based on the Meta Llama 3 architecture. It was created by grimjim with the mergekit tool, using the SLERP (spherical linear interpolation) merge method.
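For intuition, SLERP interpolates between two parent weight tensors along the arc between them rather than along a straight line, which tends to preserve weight magnitudes better than plain averaging. The following is a minimal, self-contained sketch of the idea, not mergekit's exact implementation; the function name and the near-parallel fallback threshold are illustrative:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Flattens both tensors, measures the angle between them as
    high-dimensional vectors, then blends them with sine-weighted
    coefficients so the result moves along the arc from a to b.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(a_unit @ b_unit, -1.0, 1.0)
    omega = torch.arccos(dot)          # angle between the two parents
    if omega.abs() < 1e-4:             # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    sin_omega = torch.sin(omega)
    coef_a = torch.sin((1 - t) * omega) / sin_omega
    coef_b = torch.sin(t * omega) / sin_omega
    return (coef_a * a_flat + coef_b * b_flat).reshape(a.shape).to(a.dtype)
```

At t = 0 this returns the first parent, at t = 1 the second, and intermediate values trace the arc between them.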
Merge Details
The model is a SLERP merge of two Llama 3-Instruct 8B variants:
- princeton-nlp/Llama-3-Instruct-8B-SimPO
- UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
The merge configuration applies separate interpolation weights to the self-attention and MLP sublayers across the 32 transformer layers of the parent models, with the aim of combining their respective strengths; a configuration of this general shape is sketched below. The result can be more robust, or more specialized, than either constituent alone.
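For reference, a mergekit SLERP configuration for this pair of parents would follow the shape below. The interpolation weights (`t` values) and the choice of base model are illustrative assumptions, not the exact settings used for this merge:

```yaml
slices:
  - sources:
      - model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
        layer_range: [0, 32]
      - model: princeton-nlp/Llama-3-Instruct-8B-SimPO
        layer_range: [0, 32]
merge_method: slerp
base_model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3  # assumption: either parent could serve as base
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]  # illustrative per-block weights for attention tensors
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]  # illustrative per-block weights for MLP tensors
    - value: 0.5                    # default weight for all remaining tensors
dtype: bfloat16
```

In this format, mergekit interpolates each per-block weight list across the layer range, so attention and MLP tensors can be drawn more heavily from different parents at different depths.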
Performance Highlights
On the Open LLM Leaderboard, the model achieves an average score of 20.74. Selected benchmark results:
- IFEval (0-shot): 42.71 strict accuracy
- BBH (3-shot): 28.26 normalized accuracy
- MMLU-PRO (5-shot): 29.17 accuracy
Use Cases
This merged model is suitable for general text generation tasks where a Llama 3-based instruction-following model is desired. Because it blends a SimPO-tuned parent with an SPPO-tuned one, it may offer balanced behavior across different instruction types, making it a versatile choice for conversational AI and instruction-based applications. A minimal loading and generation sketch follows.
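The model can be loaded with the Hugging Face transformers library like any other Llama 3 Instruct checkpoint. A minimal sketch, assuming a GPU with enough memory for bfloat16 weights; the prompt and sampling settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what a SLERP model merge is in two sentences."},
]

# Llama 3 Instruct models ship a chat template; apply it to build the prompt.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```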