viamr-project/amr-parsing-dapo-single-single-turn-20260217-2338-global-step-5683
Text Generation · Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Ctx Length: 32k · Published: Feb 20, 2026 · Architecture: Transformer

The viamr-project/amr-parsing-dapo-single-single-turn-20260217-2338-global-step-5683 model is a 2-billion-parameter language model developed by viamr-project and fine-tuned for Abstract Meaning Representation (AMR) parsing. It was trained with a reinforcement learning (RL) framework and achieves an F1 score of 82.88 on its benchmark. The model converts English sentences into their corresponding AMR graphs, making it well suited to semantic parsing tasks.


Model Overview

This model, viamr-project/amr-parsing-dapo-single-single-turn-20260217-2338-global-step-5683, is a 2-billion parameter language model developed by viamr-project. It is specifically designed and optimized for Abstract Meaning Representation (AMR) parsing.
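For context, AMR represents the meaning of a sentence as a rooted, directed graph, conventionally written in PENMAN notation. A standard textbook example (not drawn from this model's output) for the sentence "The boy wants to go":

```
(w / want-01
      :ARG0 (b / boy)
      :ARG1 (g / go-02
            :ARG0 b))
```

The repeated variable b captures a reentrancy: the boy is both the one who wants and the (implicit) one who goes. Producing graphs like this from raw English text is the task this model is optimized for.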

Key Capabilities

  • AMR Parsing: The primary function of this model is to convert natural language sentences into their Abstract Meaning Representation (AMR) graphs.
  • Reinforcement Learning (RL) Framework: Trained with the veRL reinforcement-learning framework, reflecting a training setup optimized specifically for the AMR parsing task.
  • Performance: Achieves a benchmark F1 score of 82.88, with a Precision of 83.73 and Recall of 82.04, demonstrating strong performance in its specialized domain.
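As a quick sanity check (assuming the standard definition of F1 as the harmonic mean of precision and recall; the card itself does not name the metric), the reported scores are internally consistent:

```python
# Check that the reported F1 (82.88) is the harmonic mean of the
# reported precision (83.73) and recall (82.04).
precision = 83.73
recall = 82.04

f1 = 2 * precision * recall / (precision + recall)

print(round(f1, 2))  # 82.88
```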

Use Cases

This model is particularly well-suited for applications requiring robust semantic parsing, such as:

  • Natural Language Understanding (NLU) systems that need to extract deep semantic meaning from text.
  • Information Extraction where structured representations of sentence meaning are beneficial.
  • Machine Translation or Text Summarization systems that can leverage AMR as an intermediate representation.

Its specialization in AMR parsing differentiates it from general-purpose LLMs, offering a focused solution for this complex linguistic task.
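For readers unfamiliar with how AMR parsers are scored: precision/recall/F1 figures like those reported above are typically computed Smatch-style, by comparing the gold and predicted graphs as sets of triples. The sketch below is a toy, fixed-alignment version of that idea (real Smatch also searches over variable alignments between the two graphs, so this is illustrative only):

```python
# Toy illustration of Smatch-style scoring: AMR graphs are compared as
# sets of (source, relation, target) triples, and precision/recall/F1
# are computed over the overlap. Assumes variable names are already
# aligned between gold and predicted graphs (real Smatch searches for
# the best alignment).

gold = {
    ("w", "instance", "want-01"),
    ("b", "instance", "boy"),
    ("g", "instance", "go-02"),
    ("w", "ARG0", "b"),
    ("w", "ARG1", "g"),
    ("g", "ARG0", "b"),
}

pred = {
    ("w", "instance", "want-01"),
    ("b", "instance", "boy"),
    ("g", "instance", "go-02"),
    ("w", "ARG0", "b"),
    ("w", "ARG1", "g"),
    # the hypothetical parser missed the reentrant edge (g, ARG0, b)
}

matched = len(gold & pred)
precision = matched / len(pred)  # 5/5 = 1.0
recall = matched / len(gold)     # 5/6
f1 = 2 * precision * recall / (precision + recall)

print(round(f1, 3))  # 0.909
```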