vnixxa31/qwen3-1.7b-zeta-sft
Task: Text generation
Concurrency cost: 1
Model size: 2B
Quantization: BF16
Context length: 32k
Published: Mar 24, 2026
License: apache-2.0
Architecture: Transformer (open weights)
vnixxa31/qwen3-1.7b-zeta-sft is a Qwen3-based causal language model of roughly 2 billion parameters, developed by vnixxa31 and fine-tuned from unsloth/Qwen3-1.7B-Base. It was trained with Unsloth and Hugging Face's TRL library, which the authors report enables about 2x faster fine-tuning, and is intended for general-purpose language tasks.
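A minimal usage sketch, assuming the standard Hugging Face `transformers` API for causal language models; the sampling parameters shown are illustrative defaults, not values published with this model:

```python
# Hypothetical loading/generation sketch for vnixxa31/qwen3-1.7b-zeta-sft.
# Assumes the transformers library; downloading the weights requires network access.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "vnixxa31/qwen3-1.7b-zeta-sft"

def generation_kwargs(max_new_tokens=256):
    # Illustrative sampling settings (assumptions, not from the model card).
    return {
        "max_new_tokens": max_new_tokens,
        "do_sample": True,
        "temperature": 0.7,
        "top_p": 0.9,
    }

def run(prompt):
    # BF16 weights, per the quantization listed in the metadata above.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="bfloat16", device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, **generation_kwargs())
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# run("Explain causal language modeling in one paragraph.")
# (commented out: executing it downloads ~2B parameters of weights)
```

Because the base model is Qwen3, the tokenizer's built-in chat template (`tokenizer.apply_chat_template`) is the safer way to format multi-turn prompts, rather than hand-rolling special tokens.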