Neelectric/Llama-3.1-8B-Instruct_SFT_sciencev00.19
Text generation · Model size: 8B · Quantization: FP8 · Context length: 32k · Concurrency cost: 1 · Architecture: Transformer · Published: Feb 10, 2026

Neelectric/Llama-3.1-8B-Instruct_SFT_sciencev00.19 is an 8-billion-parameter instruction-tuned causal language model, fine-tuned from Meta's Llama-3.1-8B-Instruct. It specializes in scientific domains, having been trained with supervised fine-tuning (SFT) on the Neelectric/Replay_0.03.MoT_science.wildguardmix.Llama3_4096toks dataset. It is intended for general text generation with an emphasis on scientific content, and supports a 32,768-token context window.
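Since the model is fine-tuned from Llama-3.1-8B-Instruct, it presumably expects the standard Llama 3.1 chat format. As a minimal sketch (the helper function below is illustrative, not part of the model's own tooling; in practice the tokenizer's built-in chat template handles this), the raw prompt can be assembled like so:

```python
def build_llama31_prompt(messages):
    """Assemble a raw Llama 3.1 chat prompt string.

    Each message is a dict with a "role" ("system", "user", or
    "assistant") and "content". The trailing assistant header cues
    the model to generate its reply.
    """
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Leave an open assistant turn for the model to complete.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)


prompt = build_llama31_prompt([
    {"role": "system", "content": "You are a scientific assistant."},
    {"role": "user", "content": "Explain catalysis in one sentence."},
])
print(prompt)
```

When loading the model through the Hugging Face `transformers` library, calling `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` produces this same layout without manual string assembly.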
