Neelectric/Llama-3.1-8B-Instruct_SafeGrad_mathv00.05

Text Generation · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Apr 15, 2026 · Architecture: Transformer

Neelectric/Llama-3.1-8B-Instruct_SafeGrad_mathv00.05 is an 8-billion-parameter instruction-tuned model developed by Neelectric, fine-tuned from Meta's Llama-3.1-8B-Instruct. With a 32,768-token context length, it is optimized for mathematical reasoning: training on the Neelectric/OpenR1-Math-220k_all_Llama3_4096toks dataset makes it well suited to applications that demand strong mathematical problem-solving.

Overview

Neelectric/Llama-3.1-8B-Instruct_SafeGrad_mathv00.05 is an 8-billion-parameter instruction-tuned language model, fine-tuned by Neelectric from the base meta-llama/Llama-3.1-8B-Instruct model. Its 32,768-token context length lets it take in long mathematical problems, together with supporting context, in a single prompt.
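
A minimal loading sketch with the Hugging Face transformers pipeline API is shown below. The dtype and device settings, and the example prompt, are illustrative assumptions rather than requirements stated by this card.

```python
# Minimal loading sketch via the transformers pipeline API.
# torch_dtype and device_map are illustrative choices, not card requirements.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Neelectric/Llama-3.1-8B-Instruct_SafeGrad_mathv00.05",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-format input; the pipeline applies the model's chat template.
result = generator(
    [{"role": "user", "content": "What is 17 * 24?"}],
    max_new_tokens=128,
)
print(result[0]["generated_text"][-1]["content"])  # assistant reply
```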

Key Capabilities

  • Mathematical Reasoning: The model is fine-tuned on the Neelectric/OpenR1-Math-220k_all_Llama3_4096toks dataset, making it highly proficient at mathematical problem-solving (see the prompting sketch after this list).
  • Instruction Following: As an instruction-tuned variant, it is designed to accurately follow user prompts and generate relevant responses.
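
The sketch below shows one way to prompt the model for step-by-step math through its chat template. The example question, greedy decoding, and token budget are assumptions chosen for illustration.

```python
# Prompting sketch using the chat template directly; the question and
# generation settings below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Neelectric/Llama-3.1-8B-Instruct_SafeGrad_mathv00.05"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "A train travels 120 km in 1.5 hours. "
                                "What is its average speed in km/h? Show your steps."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=False)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```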

Training Details

The model was trained with supervised fine-tuning (SFT) using Hugging Face's TRL (Transformer Reinforcement Learning) library. The reported software stack was TRL 1.1.0.dev0, Transformers 4.57.6, PyTorch 2.9.0, Datasets 4.8.4, and Tokenizers 0.22.2.
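
Since the card names only the framework, method, and dataset, the following is a hypothetical reconstruction of the SFT setup with TRL's SFTTrainer. Every hyperparameter shown is an assumption, not a value reported by Neelectric, and parameter names follow recent TRL releases.

```python
# Hypothetical SFT reconstruction with TRL's SFTTrainer; hyperparameters
# are assumptions, and parameter names follow recent TRL releases.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset named in the card; the "train" split is an assumption.
dataset = load_dataset("Neelectric/OpenR1-Math-220k_all_Llama3_4096toks", split="train")

config = SFTConfig(
    output_dir="llama31-8b-safegrad-math",  # hypothetical output path
    max_length=4096,                        # matches the dataset's 4096-token cap
    per_device_train_batch_size=1,          # assumption
    gradient_accumulation_steps=8,          # assumption
    learning_rate=2e-5,                     # assumption
    num_train_epochs=1,                     # assumption
    bf16=True,                              # assumption
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # base model per the card
    train_dataset=dataset,
    args=config,
)
trainer.train()
```

Note that whatever the "SafeGrad" component of the model name refers to is not documented in this card, so it is not captured by this vanilla SFT sketch.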

When to Use This Model

This model is ideal for applications requiring robust mathematical capabilities, such as educational tools, scientific research assistants, or any system where accurate numerical and logical reasoning is paramount. Its specialized training makes it a strong candidate for tasks involving complex mathematical equations, word problems, and logical deductions.