Neelectric/Llama-3.1-8B-Instruct_SFT_mathfisher_v00.05

Text Generation · Model Size: 8B · Quantization: FP8 · Context Length: 32k · Concurrency Cost: 1 · Published: May 6, 2026 · Architecture: Transformer

Neelectric/Llama-3.1-8B-Instruct_SFT_mathfisher_v00.05 is an 8-billion-parameter instruction-tuned causal language model, fine-tuned from Meta's Llama-3.1-8B-Instruct. It specializes in mathematical reasoning and problem solving, having been trained on the Neelectric/OpenR1-Math-220k_all_Llama3_4096toks dataset, and supports a 32,768-token context length.


Neelectric/Llama-3.1-8B-Instruct_SFT_mathfisher_v00.05 Overview

This model is a specialized instruction-tuned variant of Meta's Llama-3.1-8B-Instruct, developed by Neelectric. It has 8 billion parameters and supports a 32,768-token context window.
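
Loading the model should follow the standard Hugging Face transformers workflow for Llama-3.1-Instruct checkpoints. The sketch below is a minimal example under that assumption; the prompt, dtype, and generation settings are illustrative choices, not values taken from the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Neelectric/Llama-3.1-8B-Instruct_SFT_mathfisher_v00.05"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; adjust for your hardware
    device_map="auto",
)

# Llama-3.1-Instruct checkpoints ship a chat template, so the math prompt
# is formatted through apply_chat_template rather than passed as raw text.
messages = [
    {"role": "user", "content": "Solve for x: 3x + 7 = 22. Show your steps."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```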

Key Capabilities

  • Enhanced Mathematical Reasoning: The model has undergone supervised fine-tuning (SFT) on the Neelectric/OpenR1-Math-220k_all_Llama3_4096toks dataset, specifically designed to improve its performance on mathematical tasks.
  • Instruction Following: Inherits strong instruction-following capabilities from its base Llama-3.1-8B-Instruct model.
  • Training Framework: Fine-tuned with Hugging Face's TRL library (a hedged reproduction sketch follows this list).
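
The card names TRL as the training framework but does not publish the training configuration. The sketch below shows how supervised fine-tuning on the stated dataset could look with TRL's SFTTrainer; every hyperparameter shown is an illustrative placeholder, not the value actually used for this model.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset named in the model card; the split is an assumption.
dataset = load_dataset(
    "Neelectric/OpenR1-Math-220k_all_Llama3_4096toks", split="train"
)

config = SFTConfig(
    output_dir="llama31-8b-instruct-math-sft",
    max_seq_length=4096,            # matches the dataset's 4096-token budget
    per_device_train_batch_size=1,  # placeholder; scale to your hardware
    gradient_accumulation_steps=8,  # placeholder
    learning_rate=2e-5,             # placeholder
    num_train_epochs=1,             # placeholder
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # base model per the card
    args=config,
    train_dataset=dataset,
)
trainer.train()
```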

Good For

  • Mathematical Problem Solving: Well suited to applications that need accurate numerical computation, logical reasoning in mathematical contexts, and multi-step math problem solving (a serving sketch follows this list).
  • Educational Tools: Can be integrated into platforms for tutoring, generating math exercises, or explaining mathematical concepts.
  • Research and Development: Useful for researchers exploring the impact of specialized mathematical datasets on large language models.
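
For deployment, the metadata above advertises FP8 quantization and the full 32k context. Below is a hedged serving sketch with vLLM under those assumptions; whether the repository ships pre-quantized FP8 weights or expects quantization at load time is not stated in the card, so verify the quantization flag against the repository before relying on it.

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="Neelectric/Llama-3.1-8B-Instruct_SFT_mathfisher_v00.05",
    quantization="fp8",   # assumption based on the "Quant: FP8" metadata
    max_model_len=32768,  # matches the advertised context length
)

params = SamplingParams(temperature=0.0, max_tokens=512)

# vLLM applies the model's chat template when using the chat() API.
outputs = llm.chat(
    [{"role": "user", "content": "What is the sum of the first 50 positive integers?"}],
    sampling_params=params,
)
print(outputs[0].outputs[0].text)
```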