Neelectric/Llama-3.1-8B-Instruct_SafeGrad_mathv00.08

Task: Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quantization: FP8 · Context Length: 32k · Published: Apr 22, 2026 · Architecture: Transformer

Neelectric/Llama-3.1-8B-Instruct_SafeGrad_mathv00.08 is an 8 billion parameter, instruction-tuned causal language model developed by Neelectric. It is a fine-tuned version of Meta's Llama-3.1-8B-Instruct, optimized for mathematical reasoning. The model supports a 32,768-token context window and was fine-tuned on the OpenR1-Math-220k dataset, making it well suited to applications that require strong mathematical problem solving.


Model Overview

Neelectric/Llama-3.1-8B-Instruct_SafeGrad_mathv00.08 is an 8 billion parameter instruction-tuned model, fine-tuned from Meta's Llama-3.1-8B-Instruct. Its specialization in mathematical reasoning comes from training on the Neelectric/OpenR1-Math-220k_all_Llama3_4096toks dataset. The 32,768-token context length leaves room for long, multi-step problem statements and their worked solutions.
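
The checkpoint can be loaded like any other Llama 3.1 model with the Hugging Face transformers library. A minimal sketch follows; the dtype and device-placement choices below are illustrative assumptions, not requirements of this model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Neelectric/Llama-3.1-8B-Instruct_SafeGrad_mathv00.08"

# bfloat16 keeps the 8B weights manageable on a single modern GPU;
# device_map="auto" lets accelerate place layers across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```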

Key Capabilities

  • Mathematical Reasoning: Fine-tuned specifically to solve mathematical problems and explain the underlying concepts.
  • Instruction Following: Inherits strong instruction-following behavior from its Llama-3.1-8B-Instruct base; see the usage sketch after this list.
  • Extended Context: A 32,768-token context window accommodates detailed problem descriptions and full step-by-step solutions.
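
Because the model keeps the Llama 3.1 chat template, prompting it on a math problem is straightforward. The prompt and generation settings below are illustrative, reusing the tokenizer and model objects loaded above:

```python
# Pose a math problem using the Llama 3.1 chat template.
messages = [
    {"role": "user", "content": "Solve for x: 3x + 7 = 22. Show your steps."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Greedy decoding for reproducibility; sampling settings are a matter of taste.
output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```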

Training Details

The model was trained with supervised fine-tuning (SFT) using the TRL library, adapting the base Llama-3.1-8B-Instruct model to the mathematical dataset described above. Training progress and metrics are available for review on Weights & Biases.
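
For orientation, a hedged sketch of what such an SFT run looks like with TRL's SFTTrainer is shown below. Every hyperparameter here is a placeholder rather than the author's actual configuration, and argument names can vary between TRL versions:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset named on the model card; the split choice is an assumption.
dataset = load_dataset("Neelectric/OpenR1-Math-220k_all_Llama3_4096toks", split="train")

config = SFTConfig(
    output_dir="llama31-8b-sft-math",      # placeholder path
    max_seq_length=4096,                   # matches the dataset's token cap
    per_device_train_batch_size=1,         # illustrative
    gradient_accumulation_steps=8,         # illustrative
    learning_rate=2e-5,                    # illustrative
    num_train_epochs=1,                    # illustrative
    bf16=True,
    report_to="wandb",                     # logs metrics to Weights & Biases
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # base checkpoint named on the card
    train_dataset=dataset,
    args=config,
)
trainer.train()
```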