Laoyujie/merged-qwen-slerp

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Apr 22, 2026 · Architecture: Transformer · Cold

Laoyujie/merged-qwen-slerp is a 7.6 billion parameter language model created by Laoyujie through a SLERP merge of two specialized Qwen-based models. This model combines capabilities from a base code model and a math-focused model, aiming to enhance performance in both programming and mathematical reasoning tasks. It leverages the Qwen architecture to provide a versatile solution for applications requiring strong analytical and logical processing.


Model Overview

Laoyujie/merged-qwen-slerp is a 7.6 billion parameter language model developed by Laoyujie. It was created using the SLERP (Spherical Linear Interpolation) merge method via mergekit, combining two distinct pre-trained Qwen-based models.
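A mergekit SLERP merge of this kind is driven by a small YAML configuration. The card does not publish the actual config or the source model names, so the sketch below uses placeholder identifiers and an assumed layer range; only the `slerp` method and the `t: 0.5` setting are taken from this card.

```yaml
# Hypothetical mergekit config; source model names and layer_range
# are placeholders, not published on this card.
slices:
  - sources:
      - model: code-base-model   # placeholder for the code-focused base
        layer_range: [0, 28]
      - model: math-model        # placeholder for the math-focused model
        layer_range: [0, 28]
merge_method: slerp
base_model: code-base-model
parameters:
  t: 0.5        # equal weighting, as stated in the Merge Details section
dtype: bfloat16  # assumed merge precision; FP8 here refers to serving quant
```

Running `mergekit-yaml config.yml ./output-model` with such a file produces the merged checkpoint.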

Key Capabilities

This model is a fusion of two specialized components:

  • A code-focused base model, suggesting proficiency in programming-related tasks such as code generation, completion, and debugging.
  • A math-focused model, indicating enhanced capabilities in mathematical reasoning, problem-solving, and numerical operations.

By merging these two specialized models, merged-qwen-slerp aims to offer balanced performance across both domains, making it suitable for applications that require strong analytical and logical processing in technical contexts.

Merge Details

The merge combined a code-specialized base model with a second model trained on mathematical data. The SLERP method was applied with an interpolation parameter t of 0.5, giving the two components equal weight: each merged tensor lies at the midpoint of the spherical arc between the corresponding tensors of the source models, integrating the strengths of both evenly.
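The interpolation described above can be sketched in a few lines of NumPy. This is a minimal illustration of the SLERP formula, not mergekit's actual implementation; mergekit applies the same idea per weight tensor, falling back to linear interpolation when the tensors are nearly colinear.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    Falls back to plain linear interpolation when the tensors are
    nearly colinear, where the spherical formula is ill-conditioned.
    """
    v0_f = v0.ravel().astype(np.float64)
    v1_f = v1.ravel().astype(np.float64)
    # Normalize to unit vectors to measure the angle between the tensors.
    v0_u = v0_f / (np.linalg.norm(v0_f) + eps)
    v1_u = v1_f / (np.linalg.norm(v1_f) + eps)
    dot = np.clip(np.dot(v0_u, v1_u), -1.0, 1.0)
    if abs(dot) > 0.9995:
        # Nearly colinear: lerp is numerically safer and nearly identical.
        return (1 - t) * v0 + t * v1
    theta = np.arccos(dot)            # angle between the two tensors
    sin_theta = np.sin(theta)
    s0 = np.sin((1 - t) * theta) / sin_theta
    s1 = np.sin(t * theta) / sin_theta
    return (s0 * v0_f + s1 * v1_f).reshape(v0.shape).astype(v0.dtype)

# With t = 0.5 (the setting used for this merge), the result sits
# exactly halfway along the arc, weighting both parents equally.
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
mid = slerp(0.5, a, b)   # midpoint on the unit circle between a and b
```

Unlike a plain weighted average, SLERP interpolates along the sphere, which preserves the norm of the weights more faithfully when the two parent tensors point in different directions.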