Neelectric/Llama-3.1-8B-Instruct_SDFT_mathv00.02
Text Generation | Concurrency Cost: 1 | Model Size: 8B | Quant: FP8 | Ctx Length: 32k | Published: Apr 1, 2026 | Architecture: Transformer

Neelectric/Llama-3.1-8B-Instruct_SDFT_mathv00.02 is an 8-billion-parameter instruction-tuned language model by Neelectric, fine-tuned from Meta's Llama-3.1-8B-Instruct. It was trained with SDFT (self-training with on-policy self-distillation) on the OpenR1-Math-220k_all_Llama3_4096toks_SDFT dataset, which specializes it for mathematical reasoning. With a 32,768-token context window, the model is aimed at complex mathematical problem-solving and related analytical tasks.

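For illustration, here is a minimal inference sketch using the Hugging Face Transformers library. It assumes the checkpoint is published on the Hugging Face Hub under the model ID above and that a GPU with enough memory for bf16 weights is available; the math prompt and generation settings are placeholders, not recommendations from the model authors.

```python
# Minimal sketch: load the fine-tune and ask it a math question.
# Assumes the repo "Neelectric/Llama-3.1-8B-Instruct_SDFT_mathv00.02" exists on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Neelectric/Llama-3.1-8B-Instruct_SDFT_mathv00.02"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the hosted endpoint serves FP8; bf16 is a safe local default
    device_map="auto",
)

# Llama-3.1-Instruct checkpoints ship a chat template; a math word problem suits this fine-tune.
messages = [
    {"role": "user", "content": "Solve for x: 3x + 7 = 22. Show your reasoning."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same model ID can be used with an OpenAI-compatible or Transformers-based serving stack; only the prompt format (the Llama-3.1 chat template) needs to be preserved.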