Thiraput01/PeaceKeeper-4B
Text Generation · Model Size: 4B · Quant: BF16 · Context Length: 32k · Published: Apr 14, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
Thiraput01/PeaceKeeper-4B is a 4 billion parameter instruction-tuned Qwen3 model developed by Thiraput01. It was fine-tuned with Unsloth and Hugging Face's TRL library, a combination reported to make training roughly 2x faster. The model is intended for general instruction-following tasks.
Thiraput01/PeaceKeeper-4B: An Efficiently Trained Qwen3 Model
Thiraput01/PeaceKeeper-4B is a 4 billion parameter language model developed by Thiraput01. It is based on the Qwen3 architecture and has been instruction-tuned to follow a wide range of user instructions.
Key Capabilities
- Instruction Following: Designed to accurately interpret and execute user instructions (see the inference sketch after this list).
- Efficient Training: Fine-tuned using Unsloth and Hugging Face's TRL library, which facilitated a 2x speedup in the training process.
- Qwen3 Architecture: Benefits from the robust capabilities of the Qwen3 base model.
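The snippet below is a minimal inference sketch, assuming the repository ships standard Hugging Face transformers weights and a Qwen3-style chat template. The prompt, generation settings, and device placement are illustrative assumptions, not documented defaults for this model.

```python
# Minimal inference sketch (assumes standard transformers weights and a chat
# template in the repo; adjust to your hardware and prompt format).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Thiraput01/PeaceKeeper-4B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the published BF16 weights
    device_map="auto",
)

# Example instruction; any chat-style prompt works the same way.
messages = [{"role": "user", "content": "Summarize the benefits of parameter-efficient fine-tuning."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```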
Good For
- General-purpose AI applications: Suitable for various tasks requiring instruction adherence.
- Developers seeking efficient models: Its optimized training process makes it a good choice for those prioritizing speed and resource efficiency in fine-tuning; a hedged sketch of that recipe follows this list.
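To illustrate the Unsloth + TRL workflow the card mentions, here is a hedged sketch of a LoRA-based supervised fine-tuning run. The base checkpoint name (Qwen/Qwen3-4B), the toy dataset, and all hyperparameters are assumptions for illustration; they are not the settings actually used to produce PeaceKeeper-4B.

```python
# Hedged sketch of an Unsloth + TRL supervised fine-tuning run. The base
# checkpoint, toy dataset, and hyperparameters below are placeholders,
# NOT the values used to train PeaceKeeper-4B.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import Dataset

# Assumed base model; the card only states a 4B Qwen3 base.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B",
    max_seq_length=4096,   # placeholder; the published context length is 32k
    load_in_4bit=False,    # the released weights are BF16
)

# Attach LoRA adapters; Unsloth's optimized kernels provide the reported speedup.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Tiny in-memory dataset with a "text" column (SFTTrainer's default field).
dataset = Dataset.from_dict({
    "text": [
        "### Instruction:\nGive one tip for staying focused.\n\n"
        "### Response:\nWork in short, timed blocks with clear goals.",
    ]
})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,   # newer TRL releases name this processing_class
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        max_steps=10,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

LoRA adapters plus Unsloth's fused kernels are the usual source of the speedup claimed for this kind of recipe; a full fine-tune of all 4B parameters would follow a different and heavier setup.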