kmseong/Llama-3.2-3B-instruct-SafeLoRA

Hugging Face
Text Generation · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Mar 18, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm

The kmseong/Llama-3.2-3B-instruct-SafeLoRA model is a 3 billion parameter instruction-tuned language model developed by kmseong, based on Meta's Llama-3.2-3B-Instruct. It was fine-tuned with the Safe LoRA method, and the adapter weights were merged into the base model so the result can be loaded directly as a full model. It targets general instruction-following tasks and aims to offer a safety-enhanced alternative to standard fine-tunes.


Model Overview

The kmseong/Llama-3.2-3B-instruct-SafeLoRA is a 3 billion parameter instruction-tuned language model. It is built upon the meta-llama/Llama-3.2-3B-Instruct base model and has been fine-tuned using the Safe LoRA method. This process involved merging the Safe LoRA adapter weights directly into the base model, resulting in a standalone, fully loadable model.

Key Characteristics

  • Base Architecture: Llama-3.2-3B-Instruct from Meta.
  • Fine-tuning: Uses the Safe LoRA method, which constrains LoRA updates so that the base model's safety alignment is better preserved during fine-tuning.
  • Deployment: Provided as a merged full model, eliminating the need for separate adapter loading.
  • License: Released under Apache 2.0.

Usage and Purpose

This model is suitable for a range of instruction-following applications, combining the capabilities of the Llama-3.2-3B-Instruct architecture with the safety considerations introduced by its fine-tuning. Because the adapter weights are already merged, developers can load and use the model directly for text generation. Its primary differentiator is the application of Safe LoRA, reflecting an emphasis on safety-aware fine-tuning.
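As a rough sketch of direct loading, assuming the standard Hugging Face `transformers` chat-template API (no PEFT/adapter step is needed, since the LoRA weights are merged into the checkpoint); the prompt and generation settings below are illustrative, not taken from the model card:

```python
MODEL_ID = "kmseong/Llama-3.2-3B-instruct-SafeLoRA"


def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a user prompt in the chat-message format expected by
    the Llama 3.2 chat template."""
    return [{"role": "user", "content": user_prompt}]


if __name__ == "__main__":
    # Heavyweight imports kept inside the guard so the helper above
    # can be used without pulling in torch/transformers.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # matches the card's BF16 quant
        device_map="auto",
    )

    # Apply the chat template and generate a short completion.
    input_ids = tokenizer.apply_chat_template(
        build_messages("Explain LoRA fine-tuning in one sentence."),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=128)

    # Decode only the newly generated tokens.
    print(tokenizer.decode(output[0][input_ids.shape[-1]:],
                           skip_special_tokens=True))
```

Since the adapter is merged, this is the same loading path as for any full Llama checkpoint; no `peft.PeftModel.from_pretrained` call is required.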