ReviewHub/qwen3-4b-it-2507-sft-2018-2024
Text generation · Model size: 4B · Quantization: BF16 · Context length: 32k · Concurrency cost: 1 · Architecture: Transformer · Published: Mar 27, 2026

ReviewHub/qwen3-4b-it-2507-sft-2018-2024 is a 4-billion-parameter instruction-tuned language model with a 32,768-token (32k) context length. The model was automatically generated and pushed to the Hugging Face Hub. Its model card does not detail the architecture, training data, or primary differentiators, so its specialized use cases cannot be determined without further information.
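Since the model is published on the Hugging Face Hub as an instruction-tuned text-generation model, it should be loadable with the standard `transformers` pipeline. The snippet below is a generic usage sketch under that assumption; only the repo id and the BF16/32k figures come from the card above, and the prompt and generation settings are illustrative.

```python
import torch
from transformers import pipeline

# Load the instruction-tuned model from the Hugging Face Hub.
# Repo id is taken from the card above; this is a generic sketch,
# not a documented API for this specific model.
generator = pipeline(
    "text-generation",
    model="ReviewHub/qwen3-4b-it-2507-sft-2018-2024",
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed on the card
    device_map="auto",
)

# Instruction-tuned models typically expect chat-formatted input.
messages = [
    {"role": "user", "content": "Summarize what an instruction-tuned model is."}
]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"])
```

Downloading a 4B-parameter checkpoint requires several gigabytes of disk and, for BF16 inference, a GPU with roughly 8 GB of memory or more; `device_map="auto"` lets `accelerate` place weights across available devices.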
