Neelectric/Llama-3.2-1B-Instruct_SFT_sciencev00.01
Text generation · Model size: 1B · Quant: BF16 · Context length: 32k · Concurrency cost: 1 · Architecture: Transformer · Published: Mar 20, 2026

Neelectric/Llama-3.2-1B-Instruct_SFT_sciencev00.01 is a 1-billion-parameter instruction-tuned causal language model, fine-tuned by Neelectric from Meta's Llama-3.2-1B-Instruct base model. The model targets scientific-domain tasks, having been fine-tuned on a dedicated scientific dataset. With a 32,768-token context window, it is suited to processing and generating content for scientific questions and discussions.
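Since this is a standard causal language model, it can be loaded with the Hugging Face `transformers` library. The sketch below is illustrative, not from the model card: it assumes the repository ships a tokenizer with a chat template (as Llama-3.2 Instruct derivatives normally do), and the `generate` helper name and the example prompt are our own.

```python
MODEL_ID = "Neelectric/Llama-3.2-1B-Instruct_SFT_sciencev00.01"
MAX_CONTEXT = 32768  # context length stated on the model card

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Run one chat-style generation with the fine-tuned model."""
    # Imported lazily so this module also loads where transformers
    # is not installed; weights download on first use.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed on the card.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
```

A call such as `generate("Explain what a catalyst does in a chemical reaction.")` would then return the model's answer as plain text.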
