icefog72/WestIceLemonTeaRP-32k-7b
icefog72/WestIceLemonTeaRP-32k-7b is a 7 billion parameter language model created by icefog72 through a SLERP merge of several pre-trained models, including IceLemonTeaRP-32k-7b and WestWizardIceLemonTeaRP. It averages 71.27 on the Open LLM Leaderboard and suits applications that need a balanced mix of reasoning, common sense, and language understanding.
Model Overview
The icefog72/WestIceLemonTeaRP-32k-7b is a 7 billion parameter language model developed by icefog72, created with the SLERP merge method via mergekit. The merge integrates IceLemonTeaRP-32k-7b with WestWizardIceLemonTeaRP; the latter is itself a merge of SeverusWestLake-7B-DPO and WizardIceLemonTeaRP, which in turn combines Not-WizardLM-2-7B and IceLemonTeaRP-32k-7b.
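SLERP (spherical linear interpolation) blends two weight tensors along the great-circle arc between them rather than a straight line, which tends to preserve the norm structure of the weights better than plain averaging. The exact mergekit configuration used for this model is not reproduced here; the snippet below is a minimal, self-contained sketch of the SLERP operation applied per-tensor, with an assumed interpolation factor `t` (mergekit allows `t` to vary per layer and per parameter type).

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Illustrative only -- not mergekit's actual code. t=0 returns `a`,
    t=1 returns `b`; intermediate values follow the great-circle arc.
    """
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Angle between the two weight vectors, via their unit directions.
    dot = torch.dot(a_flat / (a_flat.norm() + eps), b_flat / (b_flat.norm() + eps))
    omega = torch.acos(torch.clamp(dot, -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel vectors: SLERP degenerates to linear interpolation.
        mixed = (1 - t) * a_flat + t * b_flat
    else:
        sin_omega = torch.sin(omega)
        mixed = (torch.sin((1 - t) * omega) / sin_omega) * a_flat + (
            torch.sin(t * omega) / sin_omega
        ) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)

# Hypothetical usage: merge matching tensors from two checkpoints at t=0.5.
# merged = {name: slerp(0.5, state_a[name], state_b[name]) for name in state_a}
```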
Key Capabilities
- Merged Architecture: Combines the strengths of multiple foundational models through a SLERP merge (sketched above).
- Benchmark Performance: Achieves an average score of 71.27 on the Open LLM Leaderboard, with notable scores including:
  - AI2 Reasoning Challenge (25-shot): 68.77
  - HellaSwag (10-shot): 86.89
  - MMLU (5-shot): 64.28
  - TruthfulQA (0-shot): 62.47
  - Winogrande (5-shot): 80.98
  - GSM8k (5-shot): 64.22
- Quantization Support: Includes a `measurement.json` file for EXL2 quantization, with quantized versions available at 4.2bpw, 6.5bpw, and 8bpw.
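To try the model, the full-precision weights can be loaded with the Hugging Face `transformers` library; a minimal sketch is below. (Running the EXL2 quantized versions instead requires an EXL2-capable runtime such as exllamav2, not shown here.) The prompt and generation settings are illustrative defaults, not recommendations from the model author, and the 32k context window assumes sufficient GPU memory.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "icefog72/WestIceLemonTeaRP-32k-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native dtype
    device_map="auto",    # place layers on available devices (needs `accelerate`)
)

prompt = "Briefly explain what a SLERP model merge is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```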
Good For
- General-purpose language generation and understanding tasks.
- Applications requiring a model with balanced performance across reasoning, common sense, and factual recall.
- Developers looking for a 7B parameter model with a strong foundation from merged architectures.