BedRockC/Pivot-Expert-Qwen-3B-Merged

Text generation · Concurrency cost: 1 · Model size: 3.1B · Quant: BF16 · Context length: 32k · Published: Apr 26, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

BedRockC/Pivot-Expert-Qwen-3B-Merged is a 3.1-billion-parameter, Qwen2-based causal language model developed by BedRockC. It was fine-tuned with Unsloth and Hugging Face's TRL library for faster training, and is designed for general instruction-following tasks, building on the Qwen2 architecture for robust performance.


Model Overview

BedRockC/Pivot-Expert-Qwen-3B-Merged is a 3.1 billion parameter language model fine-tuned by BedRockC. It is based on the Qwen2 architecture, specifically fine-tuned from unsloth/Qwen2.5-3B-Instruct-bnb-4bit.

Key Characteristics

  • Architecture: Qwen2-based, leveraging the Qwen2.5-3B-Instruct foundation.
  • Training Efficiency: Fine-tuned using Unsloth and Hugging Face's TRL library, which the authors report enabled roughly 2x faster training.
  • Parameter Count: 3.1 billion parameters, offering a balance between performance and computational efficiency.
  • Context Length: Supports a context window of 32,768 tokens, suitable for long documents and extended multi-turn conversations.
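The parameter count and BF16 quantization above translate directly into a rough serving footprint. A back-of-the-envelope sketch (weights only; KV cache and activations add more on top):

```python
# Rough memory estimate for holding the model weights in BF16.
# These are back-of-the-envelope numbers, not measured figures.
PARAMS = 3.1e9        # parameter count from the model card
BYTES_PER_PARAM = 2   # BF16 stores each parameter in 2 bytes

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"Approximate weight memory: {weights_gb:.1f} GB")  # → about 6.2 GB
```

This is why a 3B-class model is attractive: the weights fit comfortably on a single consumer GPU, unlike larger variants in the same family.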

Intended Use Cases

This model is suitable for a variety of instruction-following tasks, benefiting from its Qwen2 foundation and efficient fine-tuning. The emphasis on training efficiency and the moderate parameter count point toward practical deployment for common language generation and understanding applications.
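Qwen2.5-Instruct models converse in the ChatML format, and this merged model presumably inherits that template from its base. As a sketch, a prompt for instruction-following use can be assembled like this (in practice, `tokenizer.apply_chat_template` from the transformers library does this for you; the helper below is a hypothetical illustration):

```python
def build_chatml_prompt(messages):
    """Assemble a ChatML-style prompt as used by Qwen2.5-Instruct models.

    `messages` is a list of {"role": ..., "content": ...} dicts, in order.
    """
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
             for m in messages]
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Qwen2 architecture in one sentence."},
])
print(prompt)
```

The trailing `<|im_start|>assistant\n` leaves the conversation open at the assistant turn, so generation continues as the model's reply.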