suayptalha/Lix-14B-v0.1

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 14.8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 6, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

suayptalha/Lix-14B-v0.1 is a 14.8 billion parameter merged language model, created using the Model Stock method with wanlige/li-14b-v0.4 as its base. The merge integrates sthenno-com/miscii-14b-0218 and wanlige/li-14b-v0.4-slerp0.1, and the resulting model ranks #3 on the Open LLM Leaderboard among models up to 15B parameters. It performs strongly across benchmarks including IFEval, BBH, and MATH Lvl 5, making it suitable for general-purpose language tasks that require robust reasoning and instruction following.


Model Overview

suayptalha/Lix-14B-v0.1 is a 14.8 billion parameter language model, developed by suayptalha. It was created by merging pre-trained language models using the Model Stock merge method, with wanlige/li-14b-v0.4 serving as the base model. The merge incorporated sthenno-com/miscii-14b-0218 and wanlige/li-14b-v0.4-slerp0.1.
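Model Stock merges of this kind are typically produced with the mergekit toolkit, where the merge is described in a YAML configuration. The model card does not reproduce the exact config used, so the sketch below is an illustration of what a mergekit recipe for this merge would plausibly look like, assuming mergekit's `model_stock` merge method and a bfloat16 working dtype:

```yaml
# Hypothetical mergekit config for a Model Stock merge of this kind.
# The base model anchors the merge; the listed models are averaged
# toward it per the Model Stock method.
models:
  - model: sthenno-com/miscii-14b-0218
  - model: wanlige/li-14b-v0.4-slerp0.1
merge_method: model_stock
base_model: wanlige/li-14b-v0.4
dtype: bfloat16
```

With mergekit installed, a config like this is applied with `mergekit-yaml config.yml ./output-dir`; the actual parameters used for Lix-14B-v0.1 may differ.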

Key Capabilities & Performance

This model is highly ranked on the Open LLM Leaderboard, currently holding the #3 position among models up to 15B parameters. Its performance highlights include:

  • Average Score: 43.32
  • IFEval (0-shot): 78.13
  • BBH (3-shot): 51.47
  • MATH Lvl 5 (4-shot): 52.95
  • MMLU-PRO (5-shot): 47.94

These scores indicate strong capabilities in instruction following, multi-step reasoning, and mathematical problem-solving.

Use Cases

Given its balanced performance across various benchmarks, suayptalha/Lix-14B-v0.1 is well-suited for:

  • General-purpose text generation and understanding.
  • Applications requiring robust instruction following.
  • Tasks involving reasoning and problem-solving, particularly in mathematical contexts.

Detailed evaluation results are available on the Open LLM Leaderboard.