idopinto/qwen3-14b-full-nt-gen-inv-sft-v2-g2-e3
Text Generation · Concurrency Cost: 1 · Model Size: 14B · Quant: FP8 · Ctx Length: 32k · Published: Mar 27, 2026 · Architecture: Transformer

idopinto/qwen3-14b-full-nt-gen-inv-sft-v2-g2-e3 is a 14-billion-parameter language model fine-tuned from Qwen/Qwen3-14B using the TRL framework. It is optimized for text generation and supports a 32K-token context window, making it suitable for long prompts and multi-turn conversation. The fine-tuning targets general conversational and generative applications, building on the Qwen3 architecture.
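Since the checkpoint is a standard Qwen3 fine-tune, it should load like any causal LM on the Hugging Face Hub. Below is a minimal sketch using the `transformers` library; the `generate` helper and its parameters are illustrative, and running it requires a GPU with enough memory for a 14B model (note the published FP8 quant applies to the hosted endpoint, while `from_pretrained` loads the repo's native weights).

```python
MODEL_ID = "idopinto/qwen3-14b-full-nt-gen-inv-sft-v2-g2-e3"


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a completion for a single user prompt (illustrative helper)."""
    # Imported inside the function so the sketch can be read without
    # transformers installed; actual use needs `pip install transformers torch`.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Qwen3 is a chat model, so wrap the prompt in its chat template.
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
```

For example, `generate("Summarize the Qwen3 architecture in one sentence.")` would return the model's reply as a plain string.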
