idopinto/qwen3-4b-full-nt-gen-inv-sft-v2-g3-e3
Text generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Mar 28, 2026 · Architecture: Transformer

This model is a 4-billion-parameter instruction-tuned causal language model, fine-tuned by idopinto from the Qwen3-4B-Instruct-2507 base model. It supports a 32,768-token context length and was trained with the TRL framework. Optimized for general text generation, it is suitable for a range of conversational and creative applications.
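Since the card names the Hugging Face Hub repository, a minimal usage sketch with the standard `transformers` text-generation API might look like the following. This is an assumption on my part — the card itself ships no usage code, and the prompt text and generation parameters are illustrative only.

```python
# Hedged sketch: loading and prompting the model via Hugging Face
# transformers (assumed standard API; not taken from the model card).
MODEL_ID = "idopinto/qwen3-4b-full-nt-gen-inv-sft-v2-g3-e3"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Imports are deferred so the constant above can be used without
    # transformers/torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Instruction-tuned models expect chat-formatted input; the tokenizer's
    # chat template handles the role markers.
    messages = [{"role": "user", "content": prompt}]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

A BF16 4B model needs roughly 8 GB of accelerator memory to load; `device_map="auto"` lets Accelerate place the weights on the available hardware.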
