Neelectric/Llama-3.1-8B-Instruct_SDFT_mathv00.01
Text generation · Model size: 8B · Quantization: FP8 · Context length: 32k · Architecture: Transformer · Concurrency cost: 1 · Published: Apr 1, 2026

Neelectric/Llama-3.1-8B-Instruct_SDFT_mathv00.01 is an 8-billion-parameter instruction-tuned causal language model developed by Neelectric and fine-tuned from meta-llama/Llama-3.1-8B-Instruct. The model specializes in mathematical reasoning: it was trained on the Neelectric/OpenR1-Math-220k_all_Llama3_2048toks_SDFT dataset using the SDFT (Self-Training with On-Policy Self-Distillation) method, which makes it well suited to applications that require robust mathematical problem solving.
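Since the model is fine-tuned from Llama-3.1-8B-Instruct, it can be loaded with the standard `transformers` chat-template workflow. The sketch below is an assumption, not an official usage snippet from this model card: the system prompt, generation parameters, and the `solve` helper are all hypothetical, and only the model ID comes from the page above.

```python
def build_messages(problem: str) -> list[dict]:
    """Wrap a math problem in the standard chat-message format.

    The system prompt here is a hypothetical choice, not one documented
    by the model card.
    """
    return [
        {
            "role": "system",
            "content": "You are a careful mathematical reasoner. "
                       "Show your work step by step.",
        },
        {"role": "user", "content": problem},
    ]


def solve(
    problem: str,
    model_id: str = "Neelectric/Llama-3.1-8B-Instruct_SDFT_mathv00.01",
) -> str:
    """Generate a solution with greedy decoding (illustrative defaults)."""
    # Heavy imports are kept local so build_messages stays importable
    # without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # Apply the Llama 3.1 chat template and append the generation prompt.
    input_ids = tokenizer.apply_chat_template(
        build_messages(problem), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=512, do_sample=False)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
```

For math workloads, deterministic decoding (`do_sample=False`) is a common starting point, since sampling can introduce arithmetic slips; tune `max_new_tokens` to the expected solution length.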
