idopinto/qwen3-8b-full-nt-gen-inv-sft-v2-g3-e3
Task: Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Mar 27, 2026 · Architecture: Transformer
The idopinto/qwen3-8b-full-nt-gen-inv-sft-v2-g3-e3 model is an 8 billion parameter language model, fine-tuned from Qwen/Qwen3-8B using Supervised Fine-Tuning (SFT) with TRL. The model targets general text generation, and its 32k context length lets it condition on long prompts when producing responses. It produces coherent, contextually relevant text from user prompts, making it suitable for a wide range of conversational and creative applications.
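A minimal usage sketch with the Hugging Face `transformers` text-generation pipeline; only the model ID comes from this card, while the prompt and the `max_new_tokens` setting are illustrative assumptions.

```python
# Sketch of running the model via the standard transformers pipeline.
# Assumes `transformers` (and a backend such as PyTorch) is installed;
# the prompt and generation settings below are illustrative only.
from transformers import pipeline

MODEL_ID = "idopinto/qwen3-8b-full-nt-gen-inv-sft-v2-g3-e3"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and return the generated continuation for `prompt`."""
    generator = pipeline("text-generation", model=MODEL_ID)
    out = generator(prompt, max_new_tokens=max_new_tokens)
    return out[0]["generated_text"]

# Example call (downloads the 8B checkpoint on first use):
# print(generate("Summarize why long context windows matter."))
```

For production use, loading the pipeline once and reusing it across calls avoids re-initializing the 8B checkpoint on every request.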