charcoalfilter/textpulse-v4-qwen3-4b

Text Generation | Concurrency Cost: 1 | Model Size: 4B | Quant: BF16 | Ctx Length: 32k | Published: Apr 2, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights

The charcoalfilter/textpulse-v4-qwen3-4b is a 4-billion-parameter Qwen3-based causal language model developed by charcoalfilter. It was finetuned with Unsloth and Hugging Face's TRL library, a combination the author credits with 2x faster training. The model is designed for general language tasks, leveraging the Qwen3 architecture for efficient performance.


Model Overview

The charcoalfilter/textpulse-v4-qwen3-4b is a 4-billion-parameter language model built on the Qwen3 architecture. Developed by charcoalfilter, it was finetuned using the Unsloth library in conjunction with Hugging Face's TRL library; Unsloth's optimized training path is what enabled the 2x acceleration in finetuning.

Key Characteristics

  • Base Architecture: Qwen3
  • Parameter Count: 4 billion
  • Training Efficiency: Finetuned with Unsloth for 2x faster training.
  • License: Apache-2.0
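Given the characteristics above, a minimal loading sketch with the Hugging Face transformers library might look as follows. This is an assumption-laden example, not documented usage from the model card: it assumes the checkpoint is published under the repo id `charcoalfilter/textpulse-v4-qwen3-4b` and loads weights in BF16 to match the listed quantization.

```python
# Hedged sketch: load charcoalfilter/textpulse-v4-qwen3-4b via transformers.
# The repo id and BF16 dtype come from this model card; everything else is
# standard transformers usage, not something the card documents.
MODEL_ID = "charcoalfilter/textpulse-v4-qwen3-4b"

def load_model(model_id: str = MODEL_ID):
    """Load tokenizer and model in BF16, matching the card's listed quant."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # BF16 per the model card
        device_map="auto",           # place weights on available GPU(s)/CPU
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
```

Because the 4B weights fit comfortably in BF16 on a single consumer GPU, `device_map="auto"` will usually place the whole model on one device.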

Intended Use Cases

This model is suitable for a range of general natural language processing tasks where a 4B-parameter model offers a reasonable balance between output quality and computational cost. Its finetuning suggests optimization for specific applications, though the README does not detail benchmarks or primary differentiators beyond the training methodology.
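For such general text-generation tasks, a usage sketch could look like the following. This assumes the model ships a standard Qwen3-style chat template; the helper names (`build_chat`, `generate`) are hypothetical, and only the transformers API calls are real.

```python
# Hedged usage sketch for text generation, assuming a standard chat template.
def build_chat(user_prompt: str) -> list[dict]:
    """Wrap a user prompt in the message format expected by apply_chat_template."""
    return [{"role": "user", "content": user_prompt}]

def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a reply; downloads the model on first call."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "charcoalfilter/textpulse-v4-qwen3-4b"  # repo id from the card
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    inputs = tokenizer.apply_chat_template(
        build_chat(user_prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Slicing the output at `inputs.shape[-1]` returns only the model's reply rather than echoing the prompt back.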