jasong03/qwen3-1.7b-bilingual-amr-sft-v2
Task: Text generation
Model size: 2B parameters
Quantization: BF16
Context length: 32k tokens
Concurrency cost: 1
Architecture: Transformer
Published: Feb 20, 2026

The jasong03/qwen3-1.7b-bilingual-amr-sft-v2 model is a roughly 2-billion-parameter language model, fine-tuned from Qwen/Qwen3-1.7B using Supervised Fine-Tuning (SFT) with the TRL framework. It targets bilingual Abstract Meaning Representation (AMR) tasks and supports a 32,768-token context length, making it suitable for processing longer sequences in its specialized domain.
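As a sketch of how the model might be used, the snippet below loads it with the Hugging Face `transformers` library and prompts it for an AMR graph. The instruction wording in `build_amr_messages` is a hypothetical example, not the exact template used during SFT; check the model card for the trained prompt format.

```python
def build_amr_messages(sentence: str) -> list:
    """Build a chat-style prompt asking for the AMR graph of a sentence.

    The instruction text here is illustrative only; the actual SFT
    prompt format may differ.
    """
    return [
        {
            "role": "user",
            "content": f"Parse the following sentence into an AMR graph:\n{sentence}",
        }
    ]


def generate_amr(sentence: str, max_new_tokens: int = 512) -> str:
    """Load the model and generate an AMR graph for one sentence.

    Requires `transformers` and `torch`; downloads the model weights
    on first use.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "jasong03/qwen3-1.7b-bilingual-amr-sft-v2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

    # Render the chat messages with the tokenizer's built-in chat template.
    prompt = tokenizer.apply_chat_template(
        build_amr_messages(sentence), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


# Prompt construction alone is cheap and needs no model download:
messages = build_amr_messages("The boy wants to go.")
```

Calling `generate_amr("The boy wants to go.")` would then return the model's AMR output as a string, at the cost of downloading and running the full model.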
