Model Overview
gaverfraxz/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES is an 8-billion-parameter instruction-tuned language model built on the Meta-Llama-3.1 architecture, with a 32,768-token context length. It was created with the TIES (TrIm, Elect Sign & Merge) merge method, combining two distinct instruction-tuned variants of the Meta-Llama-3.1-8B base model.
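TIES merges of this kind are typically produced with the mergekit tool. The model card does not publish the exact recipe, so the following configuration is a hypothetical sketch: the merge_method, base_model, and model names follow mergekit's config schema, but the density and weight values are illustrative placeholders, not the actual settings used.

```yaml
# Hypothetical mergekit config sketch for a TIES merge of the two parents.
# density/weight values are placeholders, not the published recipe.
models:
  - model: mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
    parameters:
      density: 0.5   # fraction of task-vector entries kept after trimming
      weight: 0.5
  - model: meta-llama/Meta-Llama-3.1-8B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
parameters:
  normalize: true
dtype: bfloat16
```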
Key Capabilities
- Instruction Following: Designed to excel in general instruction-following tasks, inheriting capabilities from its instruction-tuned parent models.
- Merged Intelligence: Benefits from the combined knowledge and fine-tuning of mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated and meta-llama/Meta-Llama-3.1-8B-Instruct.
- Efficient Merging: Utilizes the TIES method, which is known for effectively merging multiple models while preserving their individual strengths.
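To make the TIES method concrete, here is a minimal, self-contained sketch of its three steps (trim low-magnitude deltas, elect a per-weight sign, merge only agreeing values) on toy weight lists. All names and values below are illustrative; a real merge operates on full model tensors via a tool such as mergekit.

```python
# Toy sketch of TIES merging: trim, elect sign, merge.
# Task vectors are deltas of each fine-tune from the shared base weights.

def trim(task_vector, density):
    """Keep only the largest-magnitude fraction `density` of entries; zero the rest."""
    k = max(1, int(len(task_vector) * density))
    threshold = sorted((abs(v) for v in task_vector), reverse=True)[k - 1]
    return [v if abs(v) >= threshold else 0.0 for v in task_vector]

def elect_signs(task_vectors):
    """Per position, pick the sign with the larger total magnitude across models."""
    signs = []
    for values in zip(*task_vectors):
        pos = sum(v for v in values if v > 0)
        neg = -sum(v for v in values if v < 0)
        signs.append(1.0 if pos >= neg else -1.0)
    return signs

def ties_merge(base, task_vectors, density=0.5):
    trimmed = [trim(tv, density) for tv in task_vectors]
    signs = elect_signs(trimmed)
    merged = []
    for i, sign in enumerate(signs):
        # Average only the values whose sign agrees with the elected sign.
        agreeing = [tv[i] for tv in trimmed if tv[i] * sign > 0]
        delta = sum(agreeing) / len(agreeing) if agreeing else 0.0
        merged.append(base[i] + delta)
    return merged

# Illustrative base weights and two fine-tune deltas (made-up numbers):
base = [0.1, -0.2, 0.3, 0.0]
tv_a = [0.4, -0.1, 0.05, 0.2]   # stand-in for the abliterated fine-tune
tv_b = [0.3, 0.2, -0.4, 0.1]    # stand-in for the official instruct fine-tune
print(ties_merge(base, [tv_a, tv_b], density=0.5))
```

Trimming keeps only the strongest per-model changes, and sign election resolves conflicts where the two fine-tunes pushed a weight in opposite directions, which is why TIES tends to preserve each parent's strengths better than naive averaging.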
Performance Insights
Evaluations on the Open LLM Leaderboard indicate balanced performance across benchmarks. Notable scores include:
- IFEval (0-Shot): 45.51
- BBH (3-Shot): 28.91
- MMLU-PRO (5-Shot): 29.76
Good For
- Developers seeking a robust 8B instruction-tuned model based on the Llama 3.1 series.
- Applications requiring general-purpose instruction following and conversational AI.
- Experimentation with merged models to achieve a blend of capabilities from different fine-tunes.