seopbo/qwen3-1.7b-sft-by-tulu3-subsets
Text generation | Concurrency cost: 1 | Model size: 2B | Quant: BF16 | Context length: 32k | Published: Feb 23, 2026 | Architecture: Transformer

The seopbo/qwen3-1.7b-sft-by-tulu3-subsets model is a 1.7-billion-parameter language model, fine-tuned from Qwen/Qwen3-1.7B-Base using the TRL framework. It is optimized for instruction following via Supervised Fine-Tuning (SFT) on subsets of the Tulu 3 dataset. Its primary use case is generating coherent, contextually relevant text from user prompts, making it suitable for conversational and general text generation tasks.
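A minimal usage sketch with the Hugging Face Transformers library, assuming the model is hosted under the ID shown on this card and follows the standard Qwen3 chat template; the generation parameters below are illustrative defaults, not values recommended by the model authors:

```python
MODEL_ID = "seopbo/qwen3-1.7b-sft-by-tulu3-subsets"


def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a user prompt in the chat-message format that
    instruction-tuned models expect via their chat template."""
    return [{"role": "user", "content": user_prompt}]


def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    """Run one chat turn through the model and return the reply text."""
    # Lazy imports keep the sketch importable even where
    # torch/transformers are not installed.
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model=MODEL_ID,
        torch_dtype=torch.bfloat16,  # matches the BF16 quantization listed above
        device_map="auto",
    )
    out = pipe(build_messages(user_prompt), max_new_tokens=max_new_tokens)
    # For chat-style input, recent transformers versions return the full
    # message list; the last entry is the assistant reply.
    return out[0]["generated_text"][-1]["content"]
```

Calling `generate("Summarize SFT in two sentences.")` downloads the weights on first use; with 1.7B parameters in BF16, expect roughly 3.5 GB of GPU memory for the weights alone.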
