Alelcv27/Llama3.2-3B-Linear-Math-Code

Text Generation · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Apr 19, 2026 · Architecture: Transformer

Alelcv27/Llama3.2-3B-Linear-Math-Code is a 3.2 billion parameter language model based on the Llama 3.2 architecture, developed by Alelcv27. This model is a merge of specialized base models, specifically optimized for combined mathematical reasoning and code generation tasks. It leverages a linear merge method to integrate capabilities from both a math-focused and a code-focused Llama 3.2 base model, making it suitable for applications requiring proficiency in both domains.


Model Overview

Alelcv27/Llama3.2-3B-Linear-Math-Code is a 3.2 billion parameter language model built upon the Llama 3.2 architecture. It was created by Alelcv27 using the mergekit tool, specifically employing the Linear merge method.

Key Capabilities

This model is a composite of two specialized base models, designed to excel in two complementary domains:

  • Mathematical Reasoning: Incorporates capabilities from Alelcv27/Llama3.2-3B-Base-Math to handle mathematical problems and logic.
  • Code Generation: Integrates strengths from Alelcv27/Llama3.2-3B-Base-Code for tasks involving programming and code generation.

Merge Configuration

The model's capabilities stem from its merge configuration: Alelcv27/Llama3.2-3B-Base-Code contributed 60% of the weight and Alelcv27/Llama3.2-3B-Base-Math contributed 40%, applied uniformly across all 28 layers. This places a slight emphasis on coding proficiency while retaining strong mathematical reasoning.
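The exact merge config is not published in this card, but a mergekit linear merge with the weights described above would typically be expressed as a YAML file along these lines (a sketch, not the author's actual config):

```yaml
models:
  - model: Alelcv27/Llama3.2-3B-Base-Code
    parameters:
      weight: 0.6
  - model: Alelcv27/Llama3.2-3B-Base-Math
    parameters:
      weight: 0.4
merge_method: linear
dtype: bfloat16
```

With `merge_method: linear`, mergekit averages each parameter tensor across the listed models using the given weights.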

Good For

  • Applications requiring a balance of mathematical problem-solving and code generation.
  • Scenarios where a smaller, efficient model (3.2B parameters) is needed for combined math and code tasks.
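A linear merge like the one described above is just a per-tensor weighted average of the parents' parameters. A minimal sketch with toy tensors (numpy; the tensor names and shapes are hypothetical, not taken from the actual model):

```python
import numpy as np

# Toy stand-ins for the two parents' state dicts (hypothetical key/shape).
code_model = {"layers.0.mlp.weight": np.array([[1.0, 2.0], [3.0, 4.0]])}
math_model = {"layers.0.mlp.weight": np.array([[5.0, 6.0], [7.0, 8.0]])}

def linear_merge(models, weights):
    """Per-tensor weighted average across models that share the same keys."""
    total = sum(weights)
    return {
        key: sum(w * m[key] for w, m in zip(weights, models)) / total
        for key in models[0]
    }

# 60% code model, 40% math model, as described in the merge configuration.
merged = linear_merge([code_model, math_model], [0.6, 0.4])
```

Each merged tensor is `0.6 * code + 0.4 * math`; applying this to every layer of the two parents yields the combined checkpoint.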