kmseong/Llama-3.2-3B-instruct-SafeLoRA-1

Text Generation · Model Size: 3.2B · Quant: BF16 · Context Length: 32k · Published: Mar 18, 2026 · License: apache-2.0 · Architecture: Transformer · Concurrency Cost: 1 · Open Weights

kmseong/Llama-3.2-3B-instruct-SafeLoRA-1 is a 3.2 billion parameter instruction-tuned causal language model built on meta-llama/Llama-3.2-3B-Instruct. It was fine-tuned with the Safe LoRA method, and the adapter weights were merged back into the base model, so the repository ships a directly loadable full model. It is designed for general instruction-following tasks and supports a 32768-token context length for processing longer inputs.


Model Overview

This model, kmseong/Llama-3.2-3B-instruct-SafeLoRA-1, is a 3.2 billion parameter instruction-tuned language model built upon the meta-llama/Llama-3.2-3B-Instruct base. It has been fine-tuned using the Safe LoRA method, and the adapter weights were merged with the base model to create a directly loadable full model. This approach ensures that the model can be used out-of-the-box without needing to manage separate adapter weights.
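
Because the adapter was merged before upload, loading requires only the transformers library, not peft. A minimal sketch, assuming a recent transformers install; the dtype and device placement below are illustrative choices, not requirements:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kmseong/Llama-3.2-3B-instruct-SafeLoRA-1"

# No peft.PeftModel / load_adapter step is needed: the SafeLoRA adapter
# weights were merged into the base model before upload.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 weights
    device_map="auto",           # requires accelerate; drop for CPU-only use
)
```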

Key Characteristics

  • Base Architecture: Derived from the Llama-3.2-3B-Instruct series by Meta.
  • Fine-tuning: Utilizes the Safe LoRA technique for parameter-efficient fine-tuning.
  • Directly Loadable: The repository contains the merged full model weights, simplifying deployment.
  • Context Length: Supports a 32768-token context window, suitable for handling extensive prompts and generating longer responses (see the budgeting sketch after this list).
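
One practical consequence of the 32k window is that prompt tokens should be budgeted against it before generation. A small sketch; `fits_in_context` is a hypothetical helper written for illustration, and the window size is taken from this card rather than read from the checkpoint:

```python
from transformers import AutoTokenizer

model_id = "kmseong/Llama-3.2-3B-instruct-SafeLoRA-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

CTX_LEN = 32_768  # context length listed on this card

def fits_in_context(prompt: str, max_new_tokens: int = 512) -> bool:
    # Hypothetical helper: the prompt plus the planned generation budget
    # must fit inside the model's context window.
    n_prompt = len(tokenizer(prompt).input_ids)
    return n_prompt + max_new_tokens <= CTX_LEN

print(fits_in_context("summarize: " + "lorem ipsum " * 2000))
```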

Usage and Licensing

Developers can integrate this model into their applications with the Hugging Face transformers library, as sketched in the examples above and below. The repository declares the Apache 2.0 License; note that the base meta-llama/Llama-3.2-3B-Instruct model is distributed under Meta's Llama 3.2 Community License rather than Apache 2.0. The Safe LoRA method is intended to preserve the base model's safety alignment during fine-tuning.
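
A hedged end-to-end usage sketch; the chat-template call is standard for Llama-3.2-Instruct checkpoints, while the example prompt and generation settings are illustrative (loading is repeated from the earlier sketch so the snippet stands alone):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kmseong/Llama-3.2-3B-instruct-SafeLoRA-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Format the conversation with the checkpoint's chat template instead of
# hand-building the Llama prompt string.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain LoRA fine-tuning in two sentences."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```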