HedronCreeper/CreeperQwen
Text Generation · Model size: 0.8B · Quantization: BF16 · Context length: 32k · Published: Mar 1, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

CreeperQwen is a 0.8 billion parameter language model developed by HedronCreeper, based on the Qwen architecture. This model is designed for general language tasks, leveraging its compact size for efficient deployment. With a context length of 32768 tokens, it offers substantial capacity for processing longer inputs.


CreeperQwen Overview

CreeperQwen is a compact 0.8 billion parameter language model developed by HedronCreeper, built on the Qwen architecture. It targets efficient language processing in deployments where compute or memory is constrained.

Key Characteristics

  • Base Architecture: Built on the Qwen transformer architecture.
  • Parameter Count: Features 0.8 billion parameters, balancing performance with computational efficiency.
  • Context Length: Supports a substantial context window of 32768 tokens, allowing for the processing of lengthy texts and complex queries.
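The parameter count and BF16 quantization listed above determine the approximate memory needed just to hold the weights. As a rough sketch (assuming 2 bytes per parameter for BF16, and ignoring activation and KV-cache memory, which grow with batch size and context length):

```python
def weight_footprint_gib(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight memory in GiB. BF16 stores 2 bytes per parameter."""
    return num_params * bytes_per_param / 1024**3

# CreeperQwen: 0.8 billion parameters in BF16
print(f"{weight_footprint_gib(0.8e9):.2f} GiB")  # → 1.49 GiB
```

This puts the raw weights well under 2 GiB, which is what makes the model practical on consumer GPUs and even CPU-only hosts.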

Use Cases

CreeperQwen is well-suited to general language tasks where a smaller, more efficient model is preferred, and its 32k context window means that choosing the compact model does not sacrifice long-input comprehension. This makes it a viable option for applications requiring moderate language generation and comprehension capabilities.
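Assuming the weights are published on the Hugging Face Hub under the `HedronCreeper/CreeperQwen` repo id and load through the standard `transformers` auto classes (an assumption; the card does not show usage code), a minimal generation sketch might look like this:

```python
MODEL_ID = "HedronCreeper/CreeperQwen"  # assumed Hub repo id
CTX_LEN = 32768  # context window from the model card

def fits_context(num_tokens: int, ctx_len: int = CTX_LEN) -> bool:
    """Check whether an input of num_tokens fits the 32k context window."""
    return num_tokens <= ctx_len

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Load the model and generate text. Requires network access,
    an installed `transformers` package, and the published weights."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

The `fits_context` helper is a trivial guard worth having in any pipeline around a fixed-window model: token counts beyond 32768 must be truncated or chunked before generation.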