viamr-project/qwen3-1.7b-amr-20260204-1342

Text Generation · Hosted on Hugging Face

Model Size: 2B · Quant: BF16 · Ctx Length: 32k · Concurrency Cost: 1 · Architecture: Transformer · Published: Feb 4, 2026

viamr-project/qwen3-1.7b-amr-20260204-1342 is a Qwen3-based language model of roughly 2 billion parameters, developed by viamr-project and fine-tuned with reinforcement learning (via the veRL framework) for Abstract Meaning Representation (AMR) parsing. It achieves an F1 score of 82.4 (precision 82.87, recall 81.93), making it suitable for tasks that require generating semantic graphs from natural language.


Model Overview

viamr-project/qwen3-1.7b-amr-20260204-1342 is a specialized language model of approximately 2 billion parameters, developed by viamr-project. Built on the Qwen3 architecture, it has been trained specifically for Abstract Meaning Representation (AMR) parsing.
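To make the task concrete, here is a small Python sketch built around the classic AMR example from the AMR literature ("The boy wants to go", not output from this model), together with a cheap well-formedness check on its PENMAN notation:

```python
# Classic AMR example from the AMR literature (not output from this model):
# "The boy wants to go" in PENMAN notation.
amr = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
""".strip()

def balanced(penman: str) -> bool:
    """Cheap sanity check: a PENMAN graph must have balanced parentheses."""
    depth = 0
    for ch in penman:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # closing paren with no matching open
                return False
    return depth == 0

print(balanced(amr))  # → True for a well-formed graph
```

Note how the variable `b` is reused for `:ARG0` of `go-01`: AMR graphs can share nodes (the boy is both the wanter and the goer), which is what distinguishes them from plain parse trees.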

Key Capabilities

  • Abstract Meaning Representation (AMR) Parsing: The model's primary function is to convert natural language sentences into their corresponding AMR graphs, representing the semantic structure of the text.
  • Reinforcement Learning (veRL) Training: The model was fine-tuned with the veRL reinforcement-learning framework, optimizing it directly for AMR-parsing performance rather than generic language modeling.
  • Performance Metrics: Achieves an F1 score of 82.4, with a precision of 82.87 and a recall of 81.93 on its benchmark, demonstrating its effectiveness in AMR parsing.
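The reported F1 is consistent with the stated precision and recall; a quick check of the harmonic mean:

```python
precision = 82.87
recall = 81.93

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 1))  # → 82.4
```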

Good For

  • Semantic Analysis: Ideal for applications requiring deep semantic understanding and structured representation of text.
  • Natural Language Understanding (NLU) Tasks: Suitable for research and development in NLU where converting text to a formal meaning representation is crucial.
  • Linguistic Research: A useful tool for linguists and computational linguists analyzing or generating AMR graphs.
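For readers who want to try the model, a minimal inference sketch with the Hugging Face transformers library might look like the following. The prompt template is an assumption — the card does not document the exact format used during veRL fine-tuning — so treat `build_prompt` as a hypothetical helper to be replaced once the real template is known:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "viamr-project/qwen3-1.7b-amr-20260204-1342"

def build_prompt(sentence: str) -> str:
    """Hypothetical prompt template; the actual fine-tuning format is undocumented."""
    return f"Parse the following sentence into an AMR graph:\n{sentence}\nAMR:"

def parse_to_amr(sentence: str) -> str:
    """Load the model and generate an AMR graph for one sentence."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer(build_prompt(sentence), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

# Usage (downloads the ~2B-parameter weights on first call):
# graph = parse_to_amr("The boy wants to go.")
```

With BF16 weights a 2B-parameter model needs roughly 4 GB of accelerator memory, so it is feasible to run on a single consumer GPU.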