TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-p0-nv1-ng1-fsx

Hugging Face · Text generation

Model size: 2.6B · Quantization: BF16 · Context length: 8k · Published: Apr 6, 2026 · Architecture: Transformer

The TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-p0-nv1-ng1-fsx model is a 2.6-billion-parameter language model fine-tuned from Google's Gemma-2-2b base model and optimized for hypernym prediction using the rankalign project's methodology. It is trained to identify hierarchical relationships between concepts, with fine-tuning focused on the task configuration named 'hypernym-concat-bananas-to-dogs-double-all'. Its primary application is in tasks requiring precise understanding and generation of superordinate-subordinate word relationships.


Model Overview

This model, rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-p0-nv1-ng1-fsx, is a specialized fine-tuned checkpoint derived from the google/gemma-2-2b base model. It is part of the rankalign project, which focuses on improving the alignment of language models for specific linguistic tasks.

Key Capabilities

  • Hypernym Prediction: The model is specifically trained for hypernym prediction, a task that involves identifying a more general term (hypernym) for a given specific term (hyponym).
  • Specialized Training: It underwent a fine-tuning process (version v6, epoch 2) with a delta of 0.15, targeting a task described as hypernym-concat-bananas-to-dogs-double-all. This indicates a focus on concatenating and doubling hypernym relationships across various categories.
  • Controlled Generation: Training parameters include a preference loss weight of 0, NLL validator weight of 1, and NLL generator weight of 1, along with force-same-x, suggesting a controlled approach to generation and validation during fine-tuning.
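Since the model is a standard Gemma-2 causal LM checkpoint, it can be queried with the `transformers` library. The exact prompt format used during fine-tuning is not documented on this page, so the sketch below assumes a plain completion-style prompt ("A banana is a kind of") and a greedy decode of a few tokens; adjust the prompt to match the rankalign evaluation scripts if you have access to them.

```python
# Minimal sketch of hypernym prediction with this checkpoint.
# The prompt template below is an assumption, not the documented
# training format.
MODEL_ID = "TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-p0-nv1-ng1-fsx"

def build_hypernym_prompt(hyponym: str) -> str:
    # Assumed prompt shape: frame the hyponym as a completion so the
    # model emits its hypernym as the next few tokens.
    return f"A {hyponym} is a kind of"

def predict_hypernym(hyponym: str, max_new_tokens: int = 5) -> str:
    # Imported lazily so the prompt helper works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    inputs = tokenizer(build_hypernym_prompt(hyponym), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()
```

With a suitable prompt, `predict_hypernym("banana")` would be expected to produce a superordinate term such as a fruit-related word, but the output depends on the prompt matching the training format.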

Good For

  • Linguistic Research: Ideal for researchers and developers working on semantic hierarchies, lexical relations, and knowledge graph construction.
  • Specific Hypernym Tasks: Particularly suited for tasks involving the identification and generation of hypernyms within the domains it was trained on, such as those exemplified by the evaluation scripts (e.g., hypernym-bananas, hypernym-dogs, hypernym-cars).
  • Understanding Rank Alignment: Provides a practical example of the rankalign project's methodology in action, offering insights into how models can be fine-tuned for specific relational understanding.
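For the category-style evaluations mentioned above (e.g. hypernym-bananas, hypernym-dogs, hypernym-cars), a simple exact-match accuracy over (hyponym, gold hypernym) pairs is one way to score the model. The pairs and the `predict_fn` interface below are illustrative assumptions, not the rankalign project's actual evaluation harness.

```python
# Sketch: exact-match hypernym accuracy over a small set of gold pairs.
# Plug in a real model-backed predictor in place of the stub.
def hypernym_accuracy(pairs, predict_fn):
    """Fraction of (hyponym, gold_hypernym) pairs the predictor gets right."""
    pairs = list(pairs)
    if not pairs:
        return 0.0
    correct = sum(
        predict_fn(hyponym).strip().lower() == gold.lower()
        for hyponym, gold in pairs
    )
    return correct / len(pairs)

# Usage with a stub predictor (hypothetical gold data for illustration):
EXAMPLE_PAIRS = [("banana", "fruit"), ("dog", "animal"), ("car", "vehicle")]
stub_answers = {"banana": "fruit", "dog": "mammal", "car": "vehicle"}
score = hypernym_accuracy(EXAMPLE_PAIRS, lambda h: stub_answers[h])
# score == 2/3 here: the stub misses "dog" -> "animal"
```

Exact match is a deliberately strict choice; hypernymy often admits several valid answers ("dog" -> "animal" or "mammal"), so a real evaluation might accept any term from a gold hypernym set instead.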