Neelectric/Llama-3.2-1B-Instruct_SFT_sciencev00.02
Task: Text Generation · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Concurrency Cost: 1 · Published: Mar 20, 2026 · Architecture: Transformer

Neelectric/Llama-3.2-1B-Instruct_SFT_sciencev00.02 is a 1-billion-parameter instruction-tuned causal language model developed by Neelectric. It is a fine-tuned variant of meta-llama/Llama-3.2-1B-Instruct, optimized for scientific-domain tasks. Trained on the Neelectric/MoT_science_Llama3_4096toks dataset, the model is intended for generating responses to scientific questions and discussions, and supports a 32,768-token context length.
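Below is a minimal usage sketch for loading the model with the Hugging Face `transformers` library. It assumes the checkpoint follows the standard Llama 3.2 chat template and loads in BF16 to match the quantization listed above; the prompt and generation parameters are illustrative, not settings from the model author.

```python
# Minimal usage sketch (assumptions noted in the lead-in above):
# load the checkpoint in BF16 and generate a chat-formatted response.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Neelectric/Llama-3.2-1B-Instruct_SFT_sciencev00.02"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16, matching the quant listed above
    device_map="auto",
)

# Illustrative scientific-domain prompt, formatted with the chat template.
messages = [
    {"role": "user", "content": "Explain why the sky appears blue."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```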
