WorldOpenTechnology/Araptor-1

Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Context Length: 32k · Published: Feb 1, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

WorldOpenTechnology/Araptor-1 is a 4-billion-parameter, Qwen3-based, instruction-tuned causal language model developed by WorldOpenTechnology. It was finetuned with Unsloth and Hugging Face's TRL library, with an emphasis on training efficiency, and is designed for general instruction-following tasks.


WorldOpenTechnology/Araptor-1 Overview

Araptor-1 is a 4 billion parameter instruction-tuned model developed by WorldOpenTechnology. It is based on the Qwen3 architecture and was finetuned from unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit.
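A minimal loading sketch follows; it assumes the repository ships standard Transformers-compatible weights and tokenizer files, which the card does not explicitly confirm:

```python
# Minimal loading sketch; assumes standard Transformers-compatible files.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WorldOpenTechnology/Araptor-1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",           # requires accelerate; auto-places weights
)
```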

Key Characteristics

  • Architecture: Qwen3-based Transformer decoder, providing a strong foundation for language understanding and generation.
  • Efficient Training: Finetuned with Unsloth and Hugging Face's TRL library, enabling roughly 2x faster finetuning.
  • Parameter Count: 4 billion parameters, balancing output quality against computational cost.
  • Context Length: Supports a context window of 40,960 tokens, allowing the model to process longer inputs and maintain coherence across extended conversations or documents (see the sketch after this list).
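Because the card states the context window rather than demonstrating it, here is a small hypothetical helper (not part of the model card) that checks whether a prompt fits within the stated 40,960-token budget, reusing the tokenizer loaded above:

```python
# Hypothetical helper: check a prompt against the stated context window.
MAX_CTX = 40_960  # context length reported in the model card

def fits_in_context(text: str, reserve_for_output: int = 512) -> bool:
    # Count prompt tokens and leave headroom for the generated reply.
    n_prompt_tokens = len(tokenizer.encode(text))
    return n_prompt_tokens + reserve_for_output <= MAX_CTX
```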

Use Cases

Araptor-1 is suited to a range of instruction-following applications, benefiting from its efficient training recipe and robust base model. Its large context window makes it particularly useful for tasks that require extensive context, such as long-document question answering or multi-turn assistance.
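Continuing from the loading sketch above, a hedged generation example. It assumes the tokenizer exposes a Qwen3-style chat template via apply_chat_template, which is typical for Qwen3 finetunes but not confirmed by the card:

```python
# Hedged generation sketch; the chat template is an assumption, not confirmed
# by the model card. Reuses `tokenizer` and `model` from the loading sketch.
messages = [
    {"role": "user", "content": "Summarize the trade-offs of 4B-parameter models."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn marker
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Strip the prompt tokens and decode only the newly generated reply.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```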