Neelectric/Llama-3.1-8B-Instruct_SFT_Math-220kv00.28

Text generation · Model size: 8B · Quantization: FP8 · Context length: 32k · Concurrency cost: 1 · Architecture: Transformer · Published: Jan 8, 2026

Neelectric/Llama-3.1-8B-Instruct_SFT_Math-220kv00.28 is an 8 billion parameter instruction-tuned causal language model developed by Neelectric. It is a fine-tuned version of Meta's Llama-3.1-8B-Instruct, optimized for mathematical reasoning tasks. It was fine-tuned on the OpenR1-Math-220k_extended_Llama3_4096toks dataset and supports a 32,768-token context window, making it well suited to long, multi-step mathematical problems.


Model Overview

Neelectric/Llama-3.1-8B-Instruct_SFT_Math-220kv00.28 is an 8 billion parameter instruction-tuned language model, building upon the robust architecture of Meta's Llama-3.1-8B-Instruct. This model has undergone supervised fine-tuning (SFT) using the TRL framework, specifically targeting enhanced performance in mathematical reasoning.

Key Capabilities

  • Mathematical Reasoning: The model is fine-tuned on the Neelectric/OpenR1-Math-220k_extended_Llama3_4096toks dataset, indicating a strong specialization in solving and understanding mathematical problems.
  • Instruction Following: As an instruction-tuned model, it is designed to accurately follow user prompts and generate relevant responses.
  • Context Handling: It supports a substantial context length of 32768 tokens, allowing for processing and generating longer, more complex mathematical queries or discussions.
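To make the 32,768-token context limit concrete, here is a minimal sketch of budgeting prompt tokens against generation tokens. The token counts are illustrative; a real application would count tokens with the model's own tokenizer rather than assume a number.

```python
CONTEXT_WINDOW = 32_768  # advertised context length of this model

def generation_budget(prompt_tokens: int, reserve: int = 256) -> int:
    """Return how many tokens remain for generation after the prompt,
    keeping a small safety reserve. Raises if the prompt cannot fit."""
    available = CONTEXT_WINDOW - prompt_tokens - reserve
    if available <= 0:
        raise ValueError("prompt exceeds the context window")
    return available

# A 30,000-token prompt leaves 32,768 - 30,000 - 256 = 2,512 tokens
# for the completion.
print(generation_budget(30_000))  # → 2512
```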

Training Details

The model was fine-tuned with TRL (Transformer Reinforcement Learning), Hugging Face's library for post-training large language models. The dataset name, OpenR1-Math-220k_extended_Llama3_4096toks, suggests it derives from OpenR1-Math-220k, with examples prepared for the Llama 3 tokenizer at up to 4096 tokens, covering a broad range of mathematical problems and concepts.
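The exact schema of the training data is not published in this card. As a hedged illustration only, TRL's SFTTrainer commonly consumes conversational records as role/content message lists, which the Llama 3.1 chat template renders roughly as below; the record itself is invented for illustration.

```python
# Hypothetical SFT record in a conversational format; the actual
# OpenR1-Math-220k_extended_Llama3_4096toks schema may differ.
example = {
    "messages": [
        {"role": "user", "content": "What is 7 * 8?"},
        {"role": "assistant", "content": "7 * 8 = 56."},
    ]
}

def render_llama31(messages):
    """Approximate rendering with the Llama 3.1 chat template:
    each turn is wrapped in header and end-of-turn special tokens."""
    out = "<|begin_of_text|>"
    for m in messages:
        out += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    return out

prompt = render_llama31(example["messages"])
```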

Ideal Use Cases

This model is particularly well-suited for applications requiring:

  • Mathematical Problem Solving: Generating solutions or explanations for mathematical questions.
  • Educational Tools: Assisting in tutoring or creating interactive learning experiences for math.
  • Research in Mathematical AI: Exploring the capabilities of LLMs in complex numerical and logical reasoning.
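For serving stacks that expose an OpenAI-compatible chat completions endpoint (an assumption about the deployment, not something stated in this card), a math-tutoring request might be assembled as follows; the system prompt and sampling settings are illustrative choices, not documented defaults.

```python
import json

# Hypothetical request body for an OpenAI-compatible
# /v1/chat/completions endpoint serving this model.
payload = {
    "model": "Neelectric/Llama-3.1-8B-Instruct_SFT_Math-220kv00.28",
    "messages": [
        {"role": "system", "content": "You are a careful math tutor. Show your steps."},
        {"role": "user", "content": "Solve for x: 3x + 5 = 20."},
    ],
    "max_tokens": 512,
    "temperature": 0.2,  # low temperature favors consistent step-by-step math
}

body = json.dumps(payload)  # ready to POST to the endpoint
```

A low temperature is a common choice for mathematical problem solving, where deterministic, reproducible derivations matter more than stylistic variety.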