Neelectric/Llama-3.1-8B-Instruct_SDFT_mathv00.07
Text generation
Concurrency cost: 1
Model size: 8B
Quantization: FP8
Context length: 32k
Published: Apr 3, 2026
Architecture: Transformer
Neelectric/Llama-3.1-8B-Instruct_SDFT_mathv00.07 is an 8-billion-parameter instruction-tuned language model by Neelectric, fine-tuned from Meta's Llama-3.1-8B-Instruct using the SDFT (Self-Training with On-Policy Self-Distillation) method and optimized specifically for mathematical reasoning. With a 32,768-token context window, the model targets applications that require robust mathematical problem-solving.
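Because this is a fine-tune of Llama-3.1-8B-Instruct, it is reasonable to assume it expects the standard Llama 3.1 chat prompt format of the base model (this is an assumption; the model card does not state the template). The sketch below builds a single-turn prompt by hand to make the format explicit; the `build_prompt` helper and the example system message are illustrative, not part of the model release.

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3.1 chat format.

    Assumption: the fine-tune preserves the base model's special
    tokens (<|begin_of_text|>, <|start_header_id|>, <|eot_id|>).
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The prompt ends with an open assistant header so the model
        # generates the assistant's reply next.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt(
    "You are a careful math assistant. Show your reasoning step by step.",
    "What is the sum of the first 100 positive integers?",
)
print(prompt)
```

In practice, loading the model's tokenizer and calling `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` from the `transformers` library is the safer route, since it uses whatever template ships with the checkpoint rather than a hand-written one.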