viamr-project/qwen3-1.7b-amr-augmented-20260214-1147
Text Generation · Model Size: 2B · Quant: BF16 · Context Length: 32k · Published: Feb 14, 2026 · Architecture: Transformer

viamr-project/qwen3-1.7b-amr-augmented-20260214-1147 is a 1.7-billion-parameter language model (listed under the 2B size class) based on the Qwen3 architecture and fine-tuned for Abstract Meaning Representation (AMR) parsing. Developed by viamr-project, it was trained with a reinforcement learning (RL) framework and reaches an F1 score of 82.06 on AMR parsing. With a context length of 32,768 tokens, it is designed to convert English sentences into their AMR representations.


Model Overview

viamr-project/qwen3-1.7b-amr-augmented-20260214-1147 is a 1.7-billion-parameter model from the Qwen3 family, developed by viamr-project. It is trained specifically for Abstract Meaning Representation (AMR) parsing, a task that captures the semantic meaning of a sentence as a structured graph.

Key Capabilities

  • AMR Parsing: The model's primary function is to convert English sentences into their Abstract Meaning Representation. This involves identifying predicates, arguments, and semantic relations within a sentence.
  • Reinforcement Learning (RL) Framework: Training was conducted with the veRL reinforcement-learning framework, an optimization approach suited to sequence-to-sequence tasks such as AMR parsing.
  • Performance: Reports an F1 score of 82.06, with precision of 83.07 and recall of 81.07, on AMR parsing benchmarks.
  • Context Length: Supports a substantial context length of 32768 tokens, allowing for processing of longer inputs during AMR conversion.
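The reported metrics are internally consistent: F1 is the harmonic mean of precision and recall, which can be checked directly from the numbers above:

```python
# F1 is the harmonic mean of precision and recall:
#   F1 = 2 * P * R / (P + R)
# Plugging in the reported precision (83.07) and recall (81.07)
# reproduces the reported F1 of 82.06.
precision = 83.07
recall = 81.07
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # → 82.06
```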

Good For

  • Semantic Parsing Applications: Ideal for research and development in natural language understanding where extracting deep semantic structures is crucial.
  • AMR-specific Tasks: Directly applicable to tasks requiring the conversion of natural language into a structured, graph-based semantic representation.
  • Linguistic Analysis: Useful for computational linguists and AI developers working on advanced language processing systems that benefit from explicit meaning representations.
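For developers evaluating the model for these applications, a minimal usage sketch follows. The card does not document the prompt template the fine-tune expects, so the instruction format below is an assumption; the commented-out section shows how inference with Hugging Face transformers would typically look:

```python
# Hedged sketch of using the checkpoint for sentence-to-AMR conversion.
# NOTE: the exact prompt template this fine-tune was trained on is not
# documented on the card -- the format below is an assumption.
MODEL_ID = "viamr-project/qwen3-1.7b-amr-augmented-20260214-1147"

def build_prompt(sentence: str) -> str:
    """Build an assumed instruction-style prompt for AMR parsing."""
    return (
        "Convert the following sentence to its AMR graph.\n"
        f"Sentence: {sentence}\n"
        "AMR:"
    )

prompt = build_prompt("The boy wants to go.")
print(prompt)

# With transformers installed, inference would typically look like
# (not executed here; requires downloading the checkpoint):
#
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained(MODEL_ID)
# model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
# ids = tok(prompt, return_tensors="pt")
# out = model.generate(**ids, max_new_tokens=256)
# print(tok.decode(out[0], skip_special_tokens=True))
```

The 32k context leaves ample room for few-shot AMR examples in the prompt if zero-shot outputs need grounding in a specific annotation style.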