Neelectric/Llama-3.2-1B-Instruct_SFT_sciencefisher_v00.06
Text Generation · Model size: 1B · Quant: BF16 · Context length: 32k · Architecture: Transformer · Published: Mar 20, 2026

Neelectric/Llama-3.2-1B-Instruct_SFT_sciencefisher_v00.06 is a 1-billion-parameter instruction-tuned causal language model developed by Neelectric. It is a fine-tuned variant of Llama-3.2-1B-Instruct, optimized for scientific-domain tasks. The model supports a 32,768-token context window and was trained with Supervised Fine-Tuning (SFT) on a specialized scientific dataset, making it suitable for science-related question answering and text generation.
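As a fine-tune of Llama-3.2-1B-Instruct, the model can be loaded with the standard `transformers` causal-LM API. The sketch below is a minimal usage example, not an official snippet from the model card: it assumes the `transformers` library (with a PyTorch backend) is installed, that the repository is accessible on the Hugging Face Hub, and that the model inherits the Llama-3.2 chat template; the example question is illustrative only.

```python
MODEL_ID = "Neelectric/Llama-3.2-1B-Instruct_SFT_sciencefisher_v00.06"


def build_messages(question: str) -> list[dict]:
    # Wrap a user question in the chat-message format expected by
    # tokenizer.apply_chat_template for instruct-tuned Llama models.
    return [{"role": "user", "content": question}]


if __name__ == "__main__":
    # Import deferred to the entry point: loading transformers/torch and
    # downloading the 1B-parameter checkpoint is heavy.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the published quantization of this checkpoint.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    prompt = tokenizer.apply_chat_template(
        build_messages("What role do mitochondria play in cellular respiration?"),
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                           skip_special_tokens=True))
```

Because the checkpoint is only 1B parameters in BF16, it fits comfortably on a single consumer GPU or can run (more slowly) on CPU.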
