Neelectric/Llama-3.1-8B-Instruct_SFT_mathfisher_v00.02
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Mar 26, 2026 · Architecture: Transformer · Status: Cold
Neelectric/Llama-3.1-8B-Instruct_SFT_mathfisher_v00.02 is an 8-billion-parameter instruction-tuned Llama-3.1 model developed by Neelectric. Fine-tuned on the OpenR1-Math-220k dataset, it specializes in mathematical reasoning and problem solving. It supports a 32,768-token context window, making it suitable for complex mathematical tasks and detailed instructional interactions.
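Since this is an instruction-tuned Llama-3.1 derivative, prompts should follow the Llama 3.1 chat format. The sketch below builds a single-turn chat prompt by hand so the structure is visible; in practice you would load the model's tokenizer and call `apply_chat_template` instead. The special-token layout shown is a hand-written approximation of the Llama 3.1 template, and the system/user strings are illustrative only.

```python
# Minimal sketch: hand-build a Llama 3.1-style chat prompt for this model.
# Assumption: the fine-tune keeps the base Llama 3.1 chat template; with
# transformers installed you would normally use
# AutoTokenizer.from_pretrained(...).apply_chat_template(...) instead.

def build_llama31_prompt(system: str, user: str) -> str:
    """Format a single-turn chat prompt using Llama 3.1 special tokens."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama31_prompt(
    "You are a careful mathematical reasoner. Show your work step by step.",
    "What is the sum of the first 100 positive integers?",
)
print(prompt)
```

The trailing assistant header leaves the prompt open for the model to generate its answer; the full 32k context leaves ample room for multi-step worked solutions.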