vericava/qwen3-0.6b-vericava-posts-v4
Text Generation · Concurrency Cost: 1 · Model Size: 0.8B · Quant: BF16 · Ctx Length: 32k · Published: Jun 8, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights
vericava/qwen3-0.6b-vericava-posts-v4 is a 0.8-billion-parameter language model fine-tuned from Qwen/Qwen3-0.6B. It was trained with a learning rate of 0.0002 for 100 epochs at a total batch size of 256. The available information does not state its primary differentiator or intended use cases, suggesting it is either a general-purpose fine-tune or targets an unspecified niche application.
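A minimal sketch of running the model locally with Hugging Face `transformers`, assuming the checkpoint is published on the Hub under the same repo id. The prompt and the sampling settings in `build_generation_kwargs` are illustrative assumptions, not defaults documented by this card.

```python
MODEL_ID = "vericava/qwen3-0.6b-vericava-posts-v4"

def build_generation_kwargs(max_new_tokens: int = 128) -> dict:
    """Sampling settings; values are assumptions, not documented defaults."""
    return {
        "max_new_tokens": max_new_tokens,
        "do_sample": True,
        "temperature": 0.7,
        "top_p": 0.9,
    }

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed on the card.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    inputs = tokenizer("Write a short post about open-weight models.", return_tensors="pt")
    outputs = model.generate(**inputs, **build_generation_kwargs())
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The heavyweight model download happens only when the script is run directly; the helper keeps the sampling configuration in one place so it can be adjusted without touching the generation call.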