Thiraput01/PeaceKeeper-4B-V3

Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Apr 15, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

PeaceKeeper-4B-V3 by Thiraput01 is a 4-billion-parameter instruction-tuned causal language model, fine-tuned from unsloth/Qwen3-4B-Instruct-2507-unsloth-bnb-4bit. Training used Unsloth together with Hugging Face's TRL library for roughly 2x faster fine-tuning, and the model supports a context length of 32,768 tokens. It is intended for general-purpose language tasks.


Overview

PeaceKeeper-4B-V3 was developed by Thiraput01 and fine-tuned from the unsloth/Qwen3-4B-Instruct-2507-unsloth-bnb-4bit base model, a 4-bit quantized variant of Qwen3-4B-Instruct-2507. Fine-tuning was performed with the Unsloth library and Hugging Face's TRL, an optimized workflow that the author reports is about 2x faster than standard fine-tuning.

Key Capabilities

  • Efficient Training: fine-tuned with Unsloth for roughly 2x faster training.
  • Instruction Following: tuned to follow natural-language instructions, making it suitable for a broad range of NLP tasks.
  • Extended Context: supports a context length of 32,768 tokens, allowing it to process long documents and multi-turn conversations.
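As an instruction-tuned model, PeaceKeeper-4B-V3 should be usable through the standard transformers chat workflow. The sketch below is a minimal example, assuming the model is hosted on the Hugging Face Hub under Thiraput01/PeaceKeeper-4B-V3 and inherits a Qwen3-style chat template from its base model; the prompt is purely illustrative.

```python
# Minimal inference sketch. Assumptions: the model id resolves on the
# Hugging Face Hub and ships a Qwen3-style chat template; the prompt
# is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Thiraput01/PeaceKeeper-4B-V3"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",   # picks up the BF16 weights noted on the card
    device_map="auto",
)

messages = [
    {"role": "user", "content": "List three uses of an instruction-tuned LLM."},
]

# apply_chat_template wraps the messages in the model's chat format and
# appends the assistant turn marker (add_generation_prompt=True).
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:],
                       skip_special_tokens=True))
```

Because the context window is 32,768 tokens, the same pattern works with much longer user messages (full documents, multi-turn histories) without truncation, subject to available memory.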

Good for

  • General-purpose NLP applications: suited to instruction-following tasks such as question answering, summarization, and drafting.
  • Developers seeking efficient models: the Unsloth-optimized training pipeline supports rapid iteration on custom fine-tunes and deployment.
  • Applications requiring moderate context: the 32,768-token context window covers use cases from document summarization to multi-turn conversational AI.