PhantHive/zilya-v1
Text Generation · Concurrency Cost: 1 · Model Size: 3.1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 29, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
PhantHive/zilya-v1 is a 3.1 billion parameter Qwen2-based causal language model fine-tuned by PhantHive. It was trained with Unsloth and Hugging Face's TRL library, enabling roughly 2x faster training. With a context length of 32768 tokens, it is suited to a broad range of language generation tasks while remaining computationally efficient.
PhantHive/zilya-v1: An Efficient Qwen2-Based Model
PhantHive/zilya-v1 is a 3.1 billion parameter language model developed by PhantHive. It is built on the Qwen2 architecture and fine-tuned for general-purpose text generation.
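Because the model follows the standard Qwen2 causal-LM layout, it should load through the usual transformers auto-classes. The following is a minimal loading and generation sketch; the prompt, the sampling settings, and the assumption that the tokenizer ships a chat template are illustrative rather than taken from the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PhantHive/zilya-v1"

# Load tokenizer and BF16 weights; device_map="auto" spreads layers across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a chat-formatted prompt using the tokenizer's own template (assumed to be present).
messages = [{"role": "user", "content": "Summarize the benefits of small language models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sample a completion; generation settings here are illustrative, not tuned.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```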
Key Capabilities & Features
- Architecture: Based on the robust Qwen2 model family.
- Parameter Count: Features 3.1 billion parameters, offering a balance between performance and computational efficiency.
- Context Length: Supports a substantial context window of 32768 tokens, suitable for processing longer inputs and generating coherent extended outputs.
- Training Efficiency: Fine-tuned using Unsloth and Hugging Face's TRL library, which facilitated a roughly 2x faster training process compared to a standard trainer setup (a representative recipe is sketched after this list).
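The card does not publish the actual training script, so the following is only a sketch of a typical Unsloth + TRL supervised fine-tuning setup. The base checkpoint name, dataset file, LoRA settings, and every hyperparameter are assumptions; note also that newer TRL releases move `dataset_text_field` and `max_seq_length` into `SFTConfig`.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load a Qwen2-family base model through Unsloth's patched loader,
# which is where the ~2x training speedup comes from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-3B",  # assumed base; the card does not name the exact checkpoint
    max_seq_length=32768,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder corpus: a local JSONL file where each record has a "text" field.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # older-TRL style kwarg
    max_seq_length=32768,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        output_dir="zilya-v1-sft",
    ),
)
trainer.train()
```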
When to Use This Model
- Resource-Constrained Environments: Its 3.1B parameter size makes it suitable for applications where larger models are impractical.
- General Language Tasks: Effective for a wide range of natural language processing tasks due to its Qwen2 foundation.
- Applications Requiring Longer Context: The 32K context length is beneficial for tasks like summarization of long documents, detailed question answering, or maintaining conversational coherence over extended interactions.
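For long-context workloads it helps to verify that a document actually fits in the 32768-token window before prompting. Below is a small hypothetical helper built on the same tokenizer; the file name and output budget are placeholders.

```python
from transformers import AutoTokenizer

CTX_LEN = 32768  # zilya-v1's advertised context window

tokenizer = AutoTokenizer.from_pretrained("PhantHive/zilya-v1")

def fits_in_context(document: str, reserve_for_output: int = 512) -> bool:
    """Return True if the document plus a generation budget fits in the context window."""
    n_tokens = len(tokenizer.encode(document))
    return n_tokens + reserve_for_output <= CTX_LEN

long_doc = open("report.txt").read()  # illustrative input file
if fits_in_context(long_doc):
    print("Document fits; safe to prompt for a summary in one pass.")
else:
    print("Document too long; chunk it or summarize hierarchically.")
```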