viamr-project/qwen3-1.7b-amr-20260206-1038 is a 1.7-billion-parameter Qwen3-based model developed by viamr-project and fine-tuned for Abstract Meaning Representation (AMR) parsing. It converts English sentences into AMR graphs with strong accuracy (F1 score of 80.57) and supports a 40,960-token context length, making it well suited to tasks that require semantic understanding and structured linguistic representation.
Model Overview
The viamr-project/qwen3-1.7b-amr-20260206-1038 is a specialized 1.7 billion parameter language model built on the Qwen3 architecture. Developed by viamr-project, this model is specifically designed and trained for Abstract Meaning Representation (AMR) parsing.
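To make the target representation concrete, here is the canonical AMR for "The boy wants to go" — a standard example from the AMR literature, not output from this model:

```
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))
```

Each node is a concept (often a PropBank frame such as `want-01`), and edges like `:ARG0` mark semantic roles; note that the variable `b` is reused, so the graph captures that the boy is both the wanter and the goer.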
Key Capabilities
- AMR Parsing: The primary function of this model is to convert natural language sentences into their corresponding Abstract Meaning Representation graphs. This involves identifying semantic roles, entities, and relationships within a sentence.
- Reinforcement Learning (RL) Training: The model was trained with the veRL framework, using reinforcement learning to optimize it directly for the AMR parsing task.
- Performance Metrics: Achieves notable benchmark results for AMR parsing:
  - F1 Score: 80.57
  - Precision: 81.76
  - Recall: 79.42
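The reported F1 is the harmonic mean of the precision and recall figures above, as a quick check confirms:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Benchmark numbers quoted in this model card.
f1 = f1_score(81.76, 79.42)
print(round(f1, 2))  # → 80.57
```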
Good For
- Semantic Parsing: Ideal for applications requiring the extraction of structured semantic meaning from text.
- Natural Language Understanding (NLU): Useful in NLU pipelines where a deep, graph-based representation of sentence meaning is beneficial.
- Linguistic Research: Can serve as a tool for researchers working on computational semantics and AMR-related tasks.
This model is a strong candidate for use cases that demand accurate and efficient conversion of English text into Abstract Meaning Representation.
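As a minimal sketch of how such a checkpoint is typically queried, the snippet below builds an instruction-style prompt for a sentence. The instruction wording is an assumption for illustration, not this model's documented prompt format:

```python
def build_amr_prompt(sentence: str) -> str:
    """Wrap an English sentence in an AMR-parsing instruction.

    The phrasing here is a hypothetical template; consult the
    model's own documentation for its expected input format.
    """
    return (
        "Convert the following English sentence into its Abstract "
        "Meaning Representation (AMR) graph.\n"
        f"Sentence: {sentence}\n"
        "AMR:"
    )

print(build_amr_prompt("The boy wants to go."))
```

The resulting string would then be passed to the model through a standard generation API (for example, `generate` in Hugging Face transformers), comfortably within the 40,960-token context window.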