suayptalha/Lamarckvergence-14B

Text generation · Model size: 14.8B · Quant: FP8 · Ctx length: 32k · Published: Feb 6, 2025 · License: apache-2.0 · Architecture: Transformer

suayptalha/Lamarckvergence-14B is a 14.8-billion-parameter merged language model, created by suayptalha using the SLERP method from sometimesanotion/Lamarck-14B-v0.7 and sometimesanotion/Qwenvergence-14B-v12-Prose-DS. The model is optimized for general language tasks, achieving a 43.32 average score on the Open LLM Leaderboard with notable results on IFEval and MATH Lvl 5. Its 131,072-token context length makes it suitable for applications requiring extensive contextual understanding.


Model Overview

suayptalha/Lamarckvergence-14B is a 14.8-billion-parameter language model developed by suayptalha by merging two pre-trained base models with the SLERP method: sometimesanotion/Lamarck-14B-v0.7 and sometimesanotion/Qwenvergence-14B-v12-Prose-DS. The model is currently ranked #1 among models up to 15B parameters on the Open LLM Leaderboard.
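SLERP (spherical linear interpolation) blends the two parents' weights along the arc between them rather than along a straight line, which tends to preserve the geometry of the weight space better than plain averaging. A minimal NumPy sketch of the core operation on a pair of flattened weight tensors (illustrative only; the actual merge is applied per-tensor by merge tooling, and the interpolation factor `t` here is a hypothetical example, not the value used for this model):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherically interpolate between two flattened weight tensors.

    t=0 returns v0, t=1 returns v1; intermediate t follows the arc
    between the two directions rather than the straight chord.
    """
    # Angle between the two tensors, computed on normalized copies
    v0_n = v0 / (np.linalg.norm(v0) + eps)
    v1_n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0_n, v1_n), -1.0, 1.0)
    omega = np.arccos(dot)

    # Nearly parallel tensors: fall back to ordinary linear interpolation
    if abs(omega) < eps:
        return (1.0 - t) * v0 + t * v1

    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1

# Hypothetical usage on two toy "weight" vectors
w_a = np.array([1.0, 0.0])
w_b = np.array([0.0, 1.0])
merged = slerp(0.5, w_a, w_b)  # midpoint along the arc between w_a and w_b
```

In practice such merges are described declaratively (e.g., different interpolation factors per layer group) in the merge tool's configuration rather than written by hand.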

Key Capabilities & Performance

This model demonstrates strong performance across various benchmarks, as evaluated on the Open LLM Leaderboard. Key results include:

  • Avg. score: 43.32
  • IFEval (0-shot): 76.56
  • BBH (3-shot): 50.33
  • MATH Lvl 5 (4-shot): 54.00
  • MMLU-PRO (5-shot): 47.59

With a context length of 131,072 tokens, Lamarckvergence-14B is well-suited for tasks requiring deep contextual understanding and processing of long inputs.

Use Cases

Given its balanced performance across multiple benchmarks, this model is suitable for a range of general-purpose language generation and understanding tasks, particularly where strong reasoning and instruction-following (IFEval) capabilities are beneficial.