Pritam357/styl-qwen2.5-3b-indian-fashion-merged

Text generation · Model size: 3.1B · Quant: BF16 · Context length: 32k · Published: Apr 25, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

Pritam357/styl-qwen2.5-3b-indian-fashion-merged is a 3.1 billion parameter Qwen2.5 model fine-tuned by Pritam357. Training was accelerated using Unsloth and Hugging Face's TRL library. The model is designed for applications related to Indian fashion and supports a 32,768-token context length.


Model Overview

Pritam357/styl-qwen2.5-3b-indian-fashion-merged is a 3.1 billion parameter language model developed by Pritam357. It is based on the Qwen2.5 architecture and was fine-tuned from the unsloth/Qwen2.5-3B-Instruct-bnb-4bit model. The fine-tuning process used Unsloth together with Hugging Face's TRL library, which the author reports enabled roughly 2x faster training.

Key Characteristics

  • Base Model: Fine-tuned from unsloth/Qwen2.5-3B-Instruct-bnb-4bit (Qwen2.5 architecture).
  • Parameter Count: 3.1 billion parameters.
  • Context Length: Supports a context length of 32,768 tokens.
  • Training Optimization: Fine-tuned with Unsloth for accelerated training.
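Since this is a merged checkpoint (not a LoRA adapter), it should load directly with the standard Transformers API. The sketch below is a minimal, hedged example: the repo id comes from this card, while the dtype and device settings are assumptions you may need to adjust for your hardware.

```python
MODEL_ID = "Pritam357/styl-qwen2.5-3b-indian-fashion-merged"

def load_model(model_id: str = MODEL_ID):
    """Load the tokenizer and merged model with Hugging Face Transformers.

    BF16 matches the quantization listed on the card; device_map="auto"
    places layers on whatever GPU(s) are available.
    """
    # Imports are deferred so this helper can be defined without
    # transformers/torch installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # card lists BF16 weights
        device_map="auto",
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
```

Loading a 3.1B model in BF16 needs roughly 6-7 GB of accelerator memory; on smaller GPUs you could pass a 4-bit quantization config instead.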

Intended Use Cases

This model is specifically tailored for tasks and applications within the domain of Indian fashion. Its fine-tuning suggests suitability for generating text, answering questions, or assisting with content creation related to Indian fashion trends, styles, terminology, and cultural aspects.
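For instruction-tuned Qwen2.5 models, prompts follow the ChatML-style template. Below is a minimal sketch of how a single-turn Indian-fashion query could be formatted; the system prompt and question are illustrative assumptions, and in practice you would normally call `tokenizer.apply_chat_template` rather than hand-building the string.

```python
def build_prompt(question: str) -> str:
    """Format a single-turn request using Qwen2.5's ChatML-style markers.

    The system prompt below is an illustrative assumption, not part of
    the model card.
    """
    system = "You are a helpful assistant specializing in Indian fashion."
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{question}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_prompt("Suggest a saree style for a summer wedding.")
print(prompt.splitlines()[0])  # → <|im_start|>system
```

The trailing `<|im_start|>assistant\n` cues the model to generate the reply; generation should stop at the `<|im_end|>` token.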