idopinto/qwen3-14b-nt-gen-inv-sft-v2.2-full
Text Generation | Concurrency Cost: 1 | Model Size: 14B | Quant: FP8 | Ctx Length: 32k | Published: Mar 25, 2026 | Architecture: Transformer | Cold
The idopinto/qwen3-14b-nt-gen-inv-sft-v2.2-full model is a 14-billion-parameter language model fine-tuned from Qwen/Qwen3-14B. Developed by idopinto, it was trained with Supervised Fine-Tuning (SFT) using the TRL framework. It targets general text generation tasks, and its 32,768-token context length lets it condition on long inputs when producing coherent, contextually relevant responses to user prompts.
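The 32k context window is a hard budget shared by the prompt and the generated output: prompt tokens plus new tokens must fit within 32,768. A minimal sketch of that budgeting logic in plain Python (the helper name and structure are hypothetical, not part of any library or of this model's tooling):

```python
# Context-window budgeting sketch for a 32k-token model.
# Hypothetical helper: illustrates how a caller might cap the number of
# generated tokens so that prompt + generation fit inside the window.

CTX_LEN = 32768  # model's maximum context length in tokens


def budget_new_tokens(prompt_tokens: int, requested: int,
                      ctx_len: int = CTX_LEN) -> int:
    """Return how many new tokens can be generated without overflow."""
    if prompt_tokens >= ctx_len:
        raise ValueError("prompt already fills the context window")
    return min(requested, ctx_len - prompt_tokens)


# A 30,000-token prompt leaves room for at most 2,768 new tokens,
# even if the caller asked for 4,096.
print(budget_new_tokens(30000, 4096))  # → 2768
```

In practice an inference server performs this clamping for you, but knowing the budget up front helps when chunking long documents for a model of this context size.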