kmseong/Llama-3.2-3B-gsm8k_ft_after-rsn-tuned-freeze_rsn_10
Text generation · Concurrency cost: 1 · Model size: 3.2B · Quantization: BF16 · Context length: 32k · Published: Mar 17, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

kmseong/Llama-3.2-3B-gsm8k_ft_after-rsn-tuned-freeze_rsn_10 is a 3.2-billion-parameter variant of Llama-3.2-3B-Instruct published by kmseong and fine-tuned with the Safety Neuron Tuning (SN-Tune) method. SN-Tune selectively fine-tunes only safety-critical neurons on safety-alignment data while freezing all other parameters. The approach is designed to strengthen safety alignment with minimal impact on general capabilities, and its parameter efficiency makes the model suitable for applications that require improved safety. The model supports a context length of 32,768 tokens.
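The core idea of freezing all parameters except a selected set of neurons can be sketched in a few lines of PyTorch. This is an illustrative toy, not the actual SN-Tune implementation: the tiny linear model and the `safety_neurons` indices below are hypothetical stand-ins for the safety-critical neurons the method would identify in the real Llama-3.2 layers.

```python
# Minimal sketch of SN-Tune-style selective fine-tuning (illustrative only):
# mask the gradient so that only chosen "safety-critical" output neurons
# receive updates, leaving every other row of the weight matrix frozen.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(8, 8, bias=False)
safety_neurons = [1, 4]  # hypothetical indices of safety-critical neurons

# Zero out gradients for all rows except the selected neurons.
mask = torch.zeros_like(model.weight)
mask[safety_neurons, :] = 1.0
model.weight.register_hook(lambda grad: grad * mask)

before = model.weight.detach().clone()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss = model(torch.randn(4, 8)).pow(2).mean()
loss.backward()
opt.step()

# Only the safety-neuron rows moved; all other rows stay bit-identical.
changed = (model.weight.detach() != before).any(dim=1)
print(changed.tolist())
```

In a full-scale setting the same gradient-masking pattern applies per layer, so the optimizer state and memory cost scale with the small tuned subset rather than the whole 3.2B-parameter model.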