Thiraput01/PeaceKeeper-4B-V4
Text generation · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Apr 17, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights
Thiraput01/PeaceKeeper-4B-V4 is a 4 billion parameter Qwen3-based causal language model developed by Thiraput01. It was fine-tuned with Unsloth and Hugging Face's TRL library, a combination reported to make training roughly 2x faster, and is intended for general instruction-following tasks.
Overview
Thiraput01/PeaceKeeper-4B-V4 is a 4 billion parameter instruction-tuned language model based on the Qwen3 architecture. Developed by Thiraput01, it was fine-tuned using a combination of Unsloth and Hugging Face's TRL library. A key highlight of its development is training efficiency: the Unsloth workflow is reported to train roughly 2x faster than a conventional fine-tuning setup.
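As a rough orientation, the sketch below shows how a model like this is typically loaded for inference with the transformers library. It assumes the checkpoint is available on the Hugging Face Hub under the id above and ships a standard chat template; neither is confirmed by this card.

```python
# Minimal inference sketch (assumptions: Hub availability, standard chat template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Thiraput01/PeaceKeeper-4B-V4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 weights
    device_map="auto",
)

# Build a single-turn instruction using the model's chat template.
messages = [{"role": "user", "content": "Explain what instruction tuning is in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```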
Key Capabilities
- Instruction Following: Designed to respond effectively to a wide range of user instructions.
- Efficient Training: Benefits from the Unsloth framework, which optimizes the fine-tuning process for speed (see the sketch after this list).
- Qwen3 Architecture: Leverages the robust capabilities of the Qwen3 base model.
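The card does not publish the actual training recipe, so the following is only a generic illustration of the Unsloth + TRL pattern it refers to. The base checkpoint name, dataset, and hyperparameters are placeholders, not the author's settings.

```python
# Hedged fine-tuning sketch: Unsloth LoRA adapters + TRL's SFTTrainer.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Assumed base model; the card only says "Qwen3-based".
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B",
    max_seq_length=32768,
    load_in_4bit=True,  # common Unsloth choice for memory-efficient training
)

# Attach LoRA adapters so only a small fraction of parameters is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Placeholder conversational dataset in the messages format TRL understands.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```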
Good For
- Applications requiring a compact yet capable instruction-tuned model.
- Scenarios where efficient deployment and inference of a 4B parameter model are crucial.
- Developers interested in models fine-tuned with Unsloth for performance benefits.