charcoalfilter/textpulse-v3-qwen3-4b

Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Apr 1, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

charcoalfilter/textpulse-v3-qwen3-4b is a 4-billion-parameter Qwen3-based causal language model developed by charcoalfilter. It was fine-tuned with Unsloth and Hugging Face's TRL library, a combination that reportedly enables roughly 2x faster training. It is intended for general language tasks where its moderate size keeps inference and further fine-tuning practical.


Model Overview

charcoalfilter/textpulse-v3-qwen3-4b is a 4-billion-parameter language model based on the Qwen3 architecture. Developed by charcoalfilter, it was fine-tuned using a combination of Unsloth and Hugging Face's TRL library.

Key Characteristics

  • Architecture: Based on Qwen3, a family of causal language models.
  • Parameter Count: 4 billion parameters, offering a balance between performance and computational efficiency.
  • Training Efficiency: Fine-tuned with Unsloth, which reportedly roughly doubles training speed compared with standard fine-tuning.
  • License: Released under the Apache-2.0 license, allowing for broad use and distribution.
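Given the characteristics above, the model can be loaded like any other Hugging Face causal LM. The sketch below is an illustration, not an official quickstart: it assumes the checkpoint is hosted on the Hugging Face Hub under the repo id shown and that the tokenizer ships a chat template (standard for Qwen3-based models); the prompt text is a placeholder.

```python
# Minimal inference sketch (assumes the checkpoint is available on the
# Hugging Face Hub under this repo id; adjust the path if mirrored elsewhere).
REPO_ID = "charcoalfilter/textpulse-v3-qwen3-4b"

def build_chat(user_message: str) -> list[dict]:
    """Build a chat-format message list for tokenizer.apply_chat_template."""
    return [{"role": "user", "content": user_message}]

def main() -> None:
    # Heavy imports live inside main so the helper above can be reused
    # without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    # BF16 matches the quantization listed on the model card.
    model = AutoModelForCausalLM.from_pretrained(
        REPO_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )

    prompt = tokenizer.apply_chat_template(
        build_chat("Summarize the Apache-2.0 license in one sentence."),
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    ))

if __name__ == "__main__":
    main()
```

Running this requires a GPU (or patience on CPU) and roughly 8 GB of memory for the BF16 weights of a 4B model.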

Good For

  • General Language Tasks: Suitable for a wide range of natural language processing applications.
  • Efficient Deployment: Its moderate size and efficient training suggest it can be a good candidate for applications where faster iteration or deployment on less powerful hardware is desired.
  • Research and Development: Provides a base for further experimentation and fine-tuning, especially for those interested in Unsloth's training methodologies.
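For the last point, further fine-tuning can follow the same Unsloth + TRL pattern the model card mentions. The sketch below is a generic recipe under stated assumptions, not the author's original training setup: the dataset file, instruction/response field names, and all hyperparameters are placeholders.

```python
# Further fine-tuning sketch with Unsloth + TRL (illustrative only; dataset
# path, record fields, and hyperparameters are placeholders, not the
# author's original recipe).

def format_example(record: dict) -> str:
    """Flatten an instruction/response record into one training string."""
    return (
        f"### Instruction:\n{record['instruction']}\n"
        f"### Response:\n{record['response']}"
    )

def main() -> None:
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer
    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="charcoalfilter/textpulse-v3-qwen3-4b",
        max_seq_length=32_768,  # matches the 32k context listed above
        load_in_4bit=True,      # QLoRA-style memory savings for small GPUs
    )
    # Attach LoRA adapters so only a small fraction of weights train.
    model = FastLanguageModel.get_peft_model(model, r=16, lora_alpha=16)

    dataset = load_dataset("json", data_files="train.jsonl", split="train")
    dataset = dataset.map(lambda r: {"text": format_example(r)})

    trainer = SFTTrainer(
        model=model,
        train_dataset=dataset,
        args=SFTConfig(
            output_dir="out",
            max_steps=100,
            per_device_train_batch_size=2,
        ),
    )
    trainer.train()

if __name__ == "__main__":
    main()
```

Loading in 4-bit with LoRA adapters lets a 4B model train on a single consumer GPU, which is the main practical appeal of the Unsloth workflow.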