Aravindaa607/Last_mixed_to_tamil_model_merged

Hugging Face · Text generation · Open weights

  • Model size: 3.2B parameters
  • Quantization: BF16
  • Context length: 32k tokens
  • Concurrency cost: 1
  • Published: Feb 22, 2026
  • License: apache-2.0
  • Architecture: Transformer

Aravindaa607/Last_mixed_to_tamil_model_merged is a 3.2 billion parameter Llama-based instruction-tuned language model developed by Aravindaa607. It was fine-tuned using Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster training. The model builds on the Llama-3.2-3B-Instruct base and supports a context length of 32,768 tokens.
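
The card does not include usage code, so here is a minimal inference sketch using the Hugging Face Transformers library. It assumes the merged checkpoint loads with AutoModelForCausalLM and ships a Llama-3.2-style chat template; the Tamil translation prompt is only a guess based on the repository name, not documented behavior.

```python
# Minimal inference sketch (assumptions: standard Transformers loading,
# a bundled chat template, and BF16 weights as listed in the metadata).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aravindaa607/Last_mixed_to_tamil_model_merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quantization listed above
    device_map="auto",
)

# The repository name suggests a mixed-language-to-Tamil task; this prompt
# is illustrative only.
messages = [{"role": "user", "content": "Translate to Tamil: Good morning!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```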


Overview

Aravindaa607/Last_mixed_to_tamil_model_merged is a 3.2 billion parameter instruction-tuned language model. Developed by Aravindaa607, it is based on the Llama architecture and was fine-tuned from unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit.

Key Characteristics

  • Architecture: Llama-based, fine-tuned from a Llama-3.2-3B-Instruct variant.
  • Parameter Count: 3.2 billion parameters.
  • Context Length: Supports a 32,768-token (32k) context window.
  • Training Methodology: Fine-tuned with Unsloth and Hugging Face's TRL library, which the card credits with a 2x faster training process; a hedged sketch of such a recipe follows this list.
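
For context, the sketch below shows the general shape of an Unsloth + TRL supervised fine-tuning run against the base checkpoint named on this card. The dataset path, LoRA settings, and training hyperparameters are placeholders; the author's actual recipe is not published here.

```python
# Hedged sketch of an Unsloth + TRL SFT recipe; not the author's exact setup.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

# Base checkpoint named on this card: a 4-bit (bnb) Llama-3.2-3B-Instruct.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit",
    max_seq_length=32768,
    load_in_4bit=True,
)

# Attach LoRA adapters; Unsloth patches the model for its faster training path.
# r, lora_alpha, and target_modules here are common defaults, not known values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset path; the actual training data is not disclosed.
dataset = load_dataset("json", data_files="tamil_sft.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(output_dir="outputs", max_steps=100),
)
trainer.train()

# "Merged" in the repo name suggests the LoRA weights were folded back into
# the base model, e.g. via Unsloth's save_pretrained_merged helper.
model.save_pretrained_merged("Last_mixed_to_tamil_model_merged", tokenizer)
```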

Use Cases

This model suits applications that need a compact yet capable Llama-based instruction-tuned model with a large (32k-token) context window. Because the Unsloth-based training pipeline is fast, the model also lends itself to rapid fine-tuning iteration before deployment.