Neelectric/Llama-3.2-1B-Instruct_SFT_sciencefisher_v00.05
Text Generation · Model size: 1B · Quantization: BF16 · Context length: 32k · Published: Mar 20, 2026 · Architecture: Transformer

Neelectric/Llama-3.2-1B-Instruct_SFT_sciencefisher_v00.05 is a 1-billion-parameter instruction-tuned causal language model developed by Neelectric. It is a fine-tuned version of meta-llama/Llama-3.2-1B-Instruct, trained on the Neelectric/MoT_science_Llama3_2048toks dataset. The model targets scientific-domain tasks, and its 32,768-token context length allows it to process long scientific documents in a single pass.
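A minimal usage sketch with the Hugging Face `transformers` library is shown below. Only the model id comes from this card; the system prompt, sample question, and generation settings are illustrative assumptions, and the heavy imports are deferred into `main()` so the chat-message helper can be used without `transformers` installed.

```python
MODEL_ID = "Neelectric/Llama-3.2-1B-Instruct_SFT_sciencefisher_v00.05"

def build_messages(question: str) -> list[dict]:
    # Llama 3.2 Instruct checkpoints expect chat-formatted input.
    # The system prompt here is an assumption for science-domain use.
    return [
        {"role": "system", "content": "You are a helpful scientific assistant."},
        {"role": "user", "content": question},
    ]

def main() -> None:
    # Deferred imports: the actual download/inference only happens when run.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

    # Apply the model's built-in chat template and generate a reply.
    inputs = tokenizer.apply_chat_template(
        build_messages("Explain CRISPR in two sentences."),
        add_generation_prompt=True,
        return_tensors="pt",
    )
    output = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))

if __name__ == "__main__":
    main()
```

Running the script downloads the checkpoint in BF16 (matching the precision listed above) and prints the model's answer to the sample question.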
