Alelcv27/Llama3.2-3B-Dare-Math-Code

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Apr 19, 2026 · Architecture: Transformer

Alelcv27/Llama3.2-3B-Dare-Math-Code is a 3.2-billion-parameter language model based on the Llama 3.2 architecture, developed by Alelcv27. It was created with a Linear DARE merge, combining specialized base models for mathematics and coding. The model is designed for tasks that require both mathematical reasoning and code generation, and offers a 32,768-token context length.


Model Overview

Alelcv27/Llama3.2-3B-Dare-Math-Code was engineered with the Linear DARE merge method, which combines the strengths of multiple specialized base models while retaining the 3.2-billion-parameter Llama 3.2 architecture.

Key Capabilities

This model is a merge of two distinct base models:

  • Alelcv27/Llama3.2-3B-Base-Math: Contributes to enhanced mathematical reasoning and problem-solving abilities.
  • Alelcv27/Llama3.2-3B-Base-Code: Provides strong capabilities in code generation and understanding.
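
A merge of these two models could be expressed as a mergekit-style configuration. This is a hedged sketch, not the author's published recipe: the shared base model (`meta-llama/Llama-3.2-3B`) and the use of mergekit's `dare_linear` method are assumptions inferred from the model name and merge description.

```yaml
# Hypothetical mergekit config for a Linear DARE merge.
# base_model is an assumption; it is not named on this card.
merge_method: dare_linear
base_model: meta-llama/Llama-3.2-3B
models:
  - model: Alelcv27/Llama3.2-3B-Base-Math
    parameters:
      weight: 0.5          # equal weighting, as stated on this card
  - model: Alelcv27/Llama3.2-3B-Base-Code
    parameters:
      weight: 0.5
dtype: bfloat16            # matches the BF16 quant listed above
```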

The merging process assigns a 0.5 weight to both the math and code components, aiming for a balanced model proficient in both domains. Its 32,768-token context length is suitable for complex mathematical problems and extensive code snippets.
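
The arithmetic behind a Linear DARE merge can be sketched on toy parameter vectors. This is a minimal illustration of the general technique (Drop-And-REscale, then a weighted linear combination of the task vectors), not the actual merge code used for this model; the toy values and function names are invented for the example.

```python
import random

def dare_delta(base, finetuned, drop_p, rng):
    """Drop-And-REscale: zero a random fraction drop_p of the task
    vector (finetuned - base) and rescale survivors by 1/(1 - drop_p)
    so the expected delta is unchanged."""
    delta = [f - b for f, b in zip(finetuned, base)]
    return [0.0 if rng.random() < drop_p else d / (1.0 - drop_p)
            for d in delta]

def linear_dare_merge(base, experts, weights, drop_p=0.5, seed=0):
    """Add each expert's DARE-processed task vector to the base,
    scaled by that expert's linear merge weight."""
    rng = random.Random(seed)
    merged = list(base)
    for expert, w in zip(experts, weights):
        for i, d in enumerate(dare_delta(base, expert, drop_p, rng)):
            merged[i] += w * d
    return merged

# Toy 4-parameter "models": a shared base plus math/code fine-tunes.
base = [1.0, 2.0, 3.0, 4.0]
math_ft = [1.5, 2.0, 3.5, 4.0]   # hypothetical math fine-tune
code_ft = [1.0, 2.5, 3.0, 4.5]   # hypothetical code fine-tune
merged = linear_dare_merge(base, [math_ft, code_ft], weights=[0.5, 0.5])
```

With `drop_p=0.5` and a merge weight of 0.5, each surviving delta is rescaled by 2 and then halved, so every parameter of the merged model receives either a full task-vector contribution or none, which is the sparsity that lets the two specializations coexist without interference.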

Ideal Use Cases

This model is particularly well-suited for applications requiring a combination of:

  • Mathematical problem-solving: From basic arithmetic to more complex algebraic or logical reasoning tasks.
  • Code generation and analysis: Assisting with writing, debugging, or understanding programming code across various languages.
  • Educational tools: Supporting learning environments that involve both computational and logical thinking.
  • Technical assistance: Providing support in fields that blend mathematical concepts with programming challenges.