Neelectric/Llama-3.1-8B-Instruct_SFT_mathfisher_v00.01
Task: Text generation
Concurrency cost: 1
Model size: 8B
Quantization: FP8
Context length: 32k
Published: Mar 26, 2026
Architecture: Transformer
State: Cold

Neelectric/Llama-3.1-8B-Instruct_SFT_mathfisher_v00.01 is an 8-billion-parameter instruction-tuned model, fine-tuned by Neelectric from Llama-3.1-8B-Instruct. It specializes in mathematical reasoning and problem-solving, having been trained on the Neelectric/OpenR1-Math-220k_all_Llama3_4096toks dataset. With a context length of 32,768 tokens, it is suited to tasks requiring robust mathematical understanding and generation.
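A minimal sketch of loading and prompting the model with the Hugging Face `transformers` library, assuming the model ID above is available on the Hub and that `transformers` and `torch` are installed; the example prompt and generation settings are illustrative, not part of the model card.

```python
# Sketch: load the fine-tune and run one math-flavored chat turn.
# Assumes network access to the Hugging Face Hub and a GPU or enough RAM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Neelectric/Llama-3.1-8B-Instruct_SFT_mathfisher_v00.01"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native dtype
    device_map="auto",    # place layers on available devices
)

# Llama-3.1-Instruct models expect the chat template, not raw text.
messages = [{"role": "user", "content": "Solve: what is 12 * 17?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the checkpoint is an instruction-tuned Llama 3.1 derivative, applying the chat template (rather than feeding a bare string) is what yields well-formed answers.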
