Neelectric/Llama-3.1-8B-Instruct_SDFT_mathv00.06
Text generation · Model size: 8B · Quantization: FP8 · Context length: 32k · Concurrency cost: 1 · Architecture: Transformer · Published: Apr 2, 2026
Neelectric/Llama-3.1-8B-Instruct_SDFT_mathv00.06 is an 8-billion-parameter instruction-tuned causal language model developed by Neelectric. It is fine-tuned from Meta's Llama-3.1-8B-Instruct and optimized for mathematical reasoning. The model was trained with on-policy self-distillation fine-tuning (SDFT) on a specialized math dataset, making it well suited to multi-step mathematical problem-solving and related applications. Its 32,768-token context length accommodates long mathematical prompts and extended worked solutions.
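A minimal usage sketch with the Hugging Face `transformers` library is shown below. The model card does not specify an inference recipe, so this follows the standard Llama 3.1 chat workflow; the system prompt and generation settings are illustrative assumptions, and loading the 8B checkpoint requires a suitably large GPU (or CPU RAM with patience).

```python
# Hypothetical usage sketch: standard transformers chat inference,
# not an official recipe from the model card.
model_id = "Neelectric/Llama-3.1-8B-Instruct_SDFT_mathv00.06"

# An example math query in the Llama 3.1 chat-message format.
messages = [
    {"role": "system", "content": "You are a careful math assistant. Reason step by step."},
    {"role": "user", "content": "Solve for x: 3x + 7 = 22."},
]

if __name__ == "__main__":
    # Heavy imports and the 8B-parameter download are deferred so the
    # prompt-construction part above stays lightweight.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    # Render the chat messages with the model's built-in chat template.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For FP8 serving as listed in the metadata, a dedicated inference stack such as vLLM is more typical than plain `transformers`; the sketch above targets the simplest local setup.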