ferrazzipietro/unsup-Qwen3-1.7B-datav3
Text generation · Concurrency cost: 1 · Model size: 2B · Quant: BF16 · Ctx length: 32k · Published: Feb 18, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Warm
ferrazzipietro/unsup-Qwen3-1.7B-datav3 is a roughly 2-billion-parameter language model fine-tuned by ferrazzipietro from the Qwen3-1.7B base architecture. It was trained for one epoch at a context length of 32,768 tokens, reaching a final validation loss of 0.2568. Because the fine-tuning dataset is undisclosed, the model's specific capabilities and intended uses are not documented.
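A minimal usage sketch, assuming the checkpoint is hosted on the Hugging Face Hub under this exact id and that a recent `transformers` release with Qwen3 support is installed (neither is confirmed by the card; the helper names below are illustrative, not part of any published API):

```python
"""Sketch: loading ferrazzipietro/unsup-Qwen3-1.7B-datav3 with transformers."""

MODEL_ID = "ferrazzipietro/unsup-Qwen3-1.7B-datav3"
CTX_LEN = 32_768  # context length stated on the card


def load_model():
    """Load tokenizer and weights in BF16, the quantization listed on the card.

    Imports are deferred so the budgeting helper below works without
    transformers/torch installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16
    )
    return tokenizer, model


def max_new_tokens(prompt_tokens: int, requested: int = 512) -> int:
    """Cap generation so prompt + new tokens fit inside the 32k context window."""
    return max(0, min(requested, CTX_LEN - prompt_tokens))
```

The lazy imports keep the context-budgeting helper usable in lightweight environments; actual generation would call `model.generate(**inputs, max_new_tokens=max_new_tokens(n))` after tokenizing a prompt of `n` tokens.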