Neelectric/Llama-3.1-8B-Instruct_SDFT_mathv00.05
Text generation · Model size: 8B · Quant: FP8 · Ctx length: 32k · Concurrency cost: 1 · Architecture: Transformer · Published: Apr 2, 2026

Neelectric/Llama-3.1-8B-Instruct_SDFT_mathv00.05 is an 8-billion-parameter instruction-tuned language model developed by Neelectric, fine-tuned from Meta's Llama-3.1-8B-Instruct. It is optimized for mathematical reasoning, trained on the OpenR1-Math-220k_all_SDFT_nr dataset using self-training with on-policy self-distillation (SDFT) for improved alignment, which makes it well suited to complex problem-solving and numerical applications.
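Since the model is a standard Llama-3.1 fine-tune, it should load with the Hugging Face `transformers` chat pipeline. The sketch below is a minimal usage example, not taken from the model card: the system prompt, generation parameters, and the helper name `solve_math` are illustrative choices, and running it requires `transformers`, `torch`, and enough memory for an 8B model.

```python
MODEL_ID = "Neelectric/Llama-3.1-8B-Instruct_SDFT_mathv00.05"


def solve_math(question: str, max_new_tokens: int = 512) -> str:
    """Generate a step-by-step solution with the fine-tuned model.

    The import is deferred so this module stays lightweight until a
    generation is actually requested.
    """
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model=MODEL_ID,
        torch_dtype="auto",   # pick FP8/BF16 weights as available
        device_map="auto",    # place layers on GPU(s) if present
    )
    messages = [
        {"role": "system", "content": "You are a careful math assistant. Reason step by step."},
        {"role": "user", "content": question},
    ]
    # With chat-style input, the pipeline returns the full message list;
    # the final entry is the assistant's reply.
    out = generator(messages, max_new_tokens=max_new_tokens)
    return out[0]["generated_text"][-1]["content"]
```

A call such as `solve_math("What is the sum of the first 50 positive integers?")` would then return the model's worked answer; the 32k context window leaves ample room for long chains of reasoning.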
