quangdung/Qwen2.5-1.5b-leetcode-math-linear
Hosted on Hugging Face
Text generation · Model size: 1.5B · Quantization: BF16 · Context length: 32k · Published: Mar 5, 2026 · Architecture: Transformer · Status: Warm · Concurrency cost: 1

The quangdung/Qwen2.5-1.5b-leetcode-math-linear model is a 1.5 billion parameter language model based on the Qwen2.5 architecture, created by quangdung. This model was developed using the Task Arithmetic merge method, combining a base Qwen2.5-1.5B-Instruct model with specialized models for LeetCode and mathematical reasoning. It is optimized for tasks requiring problem-solving skills in areas like competitive programming and mathematics.


Model Overview

The quangdung/Qwen2.5-1.5b-leetcode-math-linear is a 1.5 billion parameter language model built on the Qwen2.5 architecture. It was created by quangdung by merging several specialized checkpoints with mergekit.

Key Capabilities

This model's primary strength lies in its specialized training for problem-solving tasks. It was developed using the Task Arithmetic merge method, which combined:

  • A base model (Qwen2.5-1.5B-Instruct).
  • A model fine-tuned on a LeetCode dataset (Qwen2.5-1.5B-Instruct_LeetCodeDataset).
  • A model focused on mathematical reasoning (Qwen2.5-1.5B-Thinking-v1.1).
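A merge of this shape can be expressed in a mergekit YAML configuration roughly like the one below. This is an illustrative sketch only: the author's actual merge weights and settings are not published on this page, so the `weight` values and the `Qwen/` path prefix are assumptions.

```yaml
# Illustrative mergekit config for a task-arithmetic merge of the
# three models listed above. Weight values are assumptions, not the
# author's actual settings.
merge_method: task_arithmetic
base_model: Qwen/Qwen2.5-1.5B-Instruct
models:
  - model: Qwen2.5-1.5B-Instruct_LeetCodeDataset
    parameters:
      weight: 0.5
  - model: Qwen2.5-1.5B-Thinking-v1.1
    parameters:
      weight: 0.5
dtype: bfloat16
```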

This unique combination aims to enhance its performance in areas requiring logical deduction and algorithmic thinking.
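Task Arithmetic works by forming a "task vector" for each fine-tuned model (its weights minus the base weights), then adding a scaled sum of those vectors back onto the base. A minimal sketch on toy tensors (the 0.5 scaling factors and the tiny arrays are purely illustrative):

```python
import numpy as np

def task_arithmetic(base, finetuned_models, weights):
    """Merge fine-tuned checkpoints into a base model via task vectors:
    merged = base + sum(w_i * (finetuned_i - base))."""
    merged = base.copy()
    for ft, w in zip(finetuned_models, weights):
        merged += w * (ft - base)  # add the scaled task vector
    return merged

# Toy 1-D "weight tensors" standing in for real model parameters.
base = np.array([1.0, 2.0, 3.0])
leetcode_ft = np.array([1.5, 2.0, 3.5])  # hypothetical LeetCode fine-tune
math_ft = np.array([1.0, 3.0, 3.0])      # hypothetical math fine-tune

merged = task_arithmetic(base, [leetcode_ft, math_ft], weights=[0.5, 0.5])
print(merged)  # base plus half of each task vector -> [1.25, 2.5, 3.25]
```

In a real merge the same arithmetic is applied tensor-by-tensor across every parameter of the models, which is what mergekit automates.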

Ideal Use Cases

Given its specialized merge, this model is particularly well-suited for:

  • Competitive Programming: Assisting with or generating solutions for problems similar to those found on platforms like LeetCode.
  • Mathematical Problem Solving: Tackling various mathematical challenges and reasoning tasks.
  • Educational Tools: Potentially used in applications designed to help users learn or practice coding and math problems.
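For any of these use cases, the model can be loaded with the Hugging Face transformers library like any other Qwen2.5-style checkpoint. The sketch below is a hedged example: the system prompt, sample problem, and generation parameters are illustrative choices, and actually calling `generate_solution` requires downloading the ~1.5B-parameter weights.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "quangdung/Qwen2.5-1.5b-leetcode-math-linear"

def build_messages(problem: str):
    """Wrap a coding or math problem in a chat-style message list."""
    return [
        {"role": "system",
         "content": "You are a helpful coding and math assistant."},
        {"role": "user", "content": problem},
    ]

def generate_solution(problem: str, max_new_tokens: int = 512) -> str:
    """Load the model and generate a solution (downloads weights on
    first use; parameters here are illustrative defaults)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer.apply_chat_template(
        build_messages(problem), add_generation_prompt=True,
        return_tensors="pt")
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:],
                            skip_special_tokens=True)
```

A call such as `generate_solution("Return the indices of two numbers in a list that sum to a target.")` would then exercise exactly the LeetCode-style problem solving this merge targets.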