Neelectric/Llama-3.1-8B-Instruct_SFT_mathfisher_v00.04
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Mar 28, 2026 · Architecture: Transformer · Cold
Neelectric/Llama-3.1-8B-Instruct_SFT_mathfisher_v00.04 is an 8-billion-parameter instruction-tuned language model, fine-tuned from Meta's Llama-3.1-8B-Instruct. It supports a 32,768-token (32k) context window and was trained with supervised fine-tuning (SFT) using the TRL framework. It is designed for general instruction following, building on the strong base capabilities of the Llama 3.1 series.
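Because the model is fine-tuned from Llama-3.1-8B-Instruct, it inherits the Llama 3.1 chat prompt layout. A minimal sketch of that layout is below; the special tokens are based on the publicly documented Llama 3.1 format, and in practice you would let `tokenizer.apply_chat_template` render the prompt rather than building it by hand:

```python
# Sketch of the Llama 3.1 chat prompt layout this model inherits.
# ASSUMPTION: special tokens follow the public Llama 3.1 format;
# prefer tokenizer.apply_chat_template for real inference.

def build_llama31_prompt(messages):
    """Render a list of {"role", "content"} dicts into a Llama 3.1 prompt string."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # A trailing assistant header cues the model to generate its reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama31_prompt([
    {"role": "system", "content": "You are a helpful math assistant."},
    {"role": "user", "content": "What is 7 * 8?"},
])
print(prompt)
```

For actual generation, loading the tokenizer from the repository (e.g. with Hugging Face `transformers`) and calling `apply_chat_template` guarantees the template matches the one used during SFT.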