jasong03/qwen3-1.7b-bilingual-amr-sft-v3
Text Generation · Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Ctx Length: 32K · Published: Feb 20, 2026 · Architecture: Transformer · Warm

The jasong03/qwen3-1.7b-bilingual-amr-sft-v3 model is a 1.7-billion-parameter language model fine-tuned from Qwen/Qwen3-1.7B using Supervised Fine-Tuning (SFT) with the TRL framework. It is designed for general text generation, leveraging bilingual capabilities and a 32K context length, and its primary strength lies in producing coherent, contextually relevant responses to user prompts.
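
Below is a minimal usage sketch for loading the model with the Hugging Face `transformers` library. It assumes the checkpoint is available on the Hugging Face Hub under the same identifier and that `transformers` and `torch` are installed; adjust the prompt and generation settings to your use case.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jasong03/qwen3-1.7b-bilingual-amr-sft-v3"

# Load tokenizer and model in BF16, matching the published quantization.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a chat-style prompt using the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize the benefits of supervised fine-tuning."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate a response and decode only the newly generated tokens.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```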
