viamr-project/qwen3-1.7b-amr-20260204-1017
Text Generation · Model Size: 2B · Quantization: BF16 · Context Length: 32k · Published: Feb 4, 2026 · Architecture: Transformer

viamr-project/qwen3-1.7b-amr-20260204-1017 is a 1.7 billion parameter language model (listed as 2B in the Hub metadata) developed by viamr-project and fine-tuned for Abstract Meaning Representation (AMR) parsing. It is built on the Qwen3 architecture and was trained with the veRL reinforcement learning framework. It achieves an F1 score of 81.5, a precision of 82.6, and a recall of 80.42 on its benchmark, making it suitable for tasks that require generating semantic graphs from text.


Overview

This model, qwen3-1.7b-amr-20260204-1017, is a 1.7 billion parameter language model developed by viamr-project. It is built on the Qwen3 architecture and is designed specifically for Abstract Meaning Representation (AMR) parsing. The model was trained with the veRL reinforcement learning framework to optimize its performance on this specialized task.

Key Capabilities

  • Abstract Meaning Representation (AMR) Parsing: Converts English sentences into their corresponding AMR graph representations (see the example after this list).
  • Reinforcement Learning (RL) Training: Trained with a veRL-based approach, optimizing the model directly for this specialized task rather than for general-purpose generation.
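
To illustrate the target output, here is the classic AMR for the sentence "The boy wants to go", written in PENMAN notation (the standard serialization used by AMR corpora). The card does not document this model's exact output format, so treat this as the general shape of the task rather than this model's verbatim output:

    (w / want-01
       :ARG0 (b / boy)
       :ARG1 (g / go-02
          :ARG0 b))

Here want-01 and go-02 are PropBank frames, and the reentrant variable b marks the boy as the agent of both the wanting and the going.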

Performance Metrics

On its internal benchmark, the model demonstrates solid performance for AMR parsing (a quick consistency check follows the list):

  • F1 Score: 81.5
  • Precision: 82.6
  • Recall: 80.42
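
AMR parsing is conventionally evaluated with Smatch, which computes precision and recall over matched graph triples; the card does not name the metric, but the reported F1 is the harmonic mean of the reported precision and recall, as this small check confirms:

    # Verify that the reported F1 is the harmonic mean of
    # the reported precision and recall.
    precision, recall = 82.6, 80.42
    f1 = 2 * precision * recall / (precision + recall)
    print(f"{f1:.2f}")  # 81.50, consistent with the reported 81.5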

When to Use This Model

This model is particularly well-suited for applications requiring the following (a minimal inference sketch follows the list):

  • Semantic understanding: Extracting the core meaning and relationships from natural language.
  • Natural Language Understanding (NLU) pipelines: As a component for generating structured semantic representations.
  • Research and development in AMR: For experimenting with or deploying AMR parsing capabilities.
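
Below is a minimal sketch of running the model with the Hugging Face transformers library. The prompt template is an assumption on our part: the card does not document the expected input format, and a fine-tuned parser may require the specific template used during training.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "viamr-project/qwen3-1.7b-amr-20260204-1017"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

    # Assumed prompt format -- adjust to match the model's training setup.
    sentence = "The boy wants to go."
    prompt = f"Parse the following sentence into an AMR graph:\n{sentence}\n"

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

    # Decode only the newly generated tokens (the AMR graph).
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(new_tokens, skip_special_tokens=True))

Greedy decoding (do_sample=False) is a reasonable default for structured outputs like AMR graphs, where sampling variance is rarely desirable.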