idopinto/qwen3-8b-full-nt-gen-inv-sft-v2-g2-e3
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Mar 28, 2026 · Architecture: Transformer

The idopinto/qwen3-8b-full-nt-gen-inv-sft-v2-g2-e3 model is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B using the TRL framework. It is optimized for text generation, with conversational capabilities enhanced through supervised fine-tuning (SFT). It is designed for general-purpose text generation, particularly in interactive or question-answering scenarios, and supports a 32,768-token context length.
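Since the card does not include a usage snippet, the sketch below shows how a conversational prompt for a Qwen-family model is typically assembled. The ChatML-style tokens (`<|im_start|>`, `<|im_end|>`) are an assumption carried over from the Qwen3 base model and are not confirmed by this card; in practice, `tokenizer.apply_chat_template` from the `transformers` library should be used rather than manual formatting.

```python
# Minimal sketch: building a ChatML-style chat prompt as used by
# Qwen-family models. Assumption: this fine-tune keeps the base Qwen3
# chat template; prefer tokenizer.apply_chat_template in real use.

def build_chatml_prompt(messages):
    """Format a list of {"role", "content"} dicts into a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # End with an open assistant header so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize supervised fine-tuning in one sentence."},
])
print(prompt)
```

The resulting string is what the tokenizer would encode before generation; the trailing open `assistant` header is the standard generation prompt for chat-tuned models.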
