TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tco-ln-vlo-fsx

Text Generation · Model size: 2.6B · Quant: BF16 · Context length: 8k · Published: Apr 6, 2026 · Architecture: Transformer

The TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tco-ln-vlo-fsx model is a 2.6-billion-parameter language model fine-tuned from Google's Gemma-2-2b base model and optimized for hypernym prediction, i.e. identifying broader categories for given concepts. It is part of the rankalign project, which targets tasks involving hierarchical semantic relationships, and was trained with online typicality correction and length normalization.
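Assuming the checkpoint keeps the standard Gemma-2 causal-LM layout, it should load with the Hugging Face transformers library as in the sketch below. The prompt string is illustrative only; the model card does not document the prompt template used during fine-tuning.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tco-ln-vlo-fsx"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Illustrative prompt; the actual fine-tuning template is not documented.
prompt = "A banana is a kind of"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5, do_sample=False)

# Print only the newly generated tokens (the predicted hypernym).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```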


Model Overview

This model, rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tco-ln-vlo-fsx, is a fine-tuned checkpoint derived from the google/gemma-2-2b base model. It is part of the rankalign project, which focuses on tasks related to semantic ranking and alignment.

Key Training Details

The model was fine-tuned with the following notable parameters, summarized in the configuration sketch after this list:

  • Base Model: google/gemma-2-2b
  • Version: v6
  • Task: hypernym-concat-bananas-to-dogs-double-all, indicating a focus on hypernym prediction across a diverse set of concepts.
  • Epochs: Trained for 2 epochs.
  • Delta: A delta value of 0.15 was applied.
  • Typicality Correction: Utilizes online typicality correction.
  • Length Normalization: Enabled during training.
  • Validator Log-Odds: True, suggesting the validator scores predictions on log-odds.
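
For reference, the parameters above can be collected into a single configuration sketch. The key names below are illustrative assumptions; the actual flag names used by the rankalign training scripts are not published.

```python
# Illustrative summary of the fine-tuning setup described above.
# Key names are assumptions, not the actual rankalign training flags.
training_config = {
    "base_model": "google/gemma-2-2b",
    "version": "v6",
    "task": "hypernym-concat-bananas-to-dogs-double-all",
    "epochs": 2,
    "delta": 0.15,
    "typicality_correction": "online",
    "length_normalization": True,
    "validator_log_odds": True,
}
```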

Use Cases

This model is particularly suited for research and applications involving:

  • Hypernym Prediction: Identifying superordinate concepts for given terms (see the scoring sketch after this list).
  • Semantic Hierarchy Tasks: Understanding and generating hierarchical relationships between words or phrases.
  • Linguistic Analysis: Exploring and evaluating models' capabilities in semantic generalization and categorization.
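
As a sketch of how candidate hypernyms might be ranked with this model, the snippet below scores each candidate by its length-normalized log-probability given a prompt. The prompt format, candidate set, and helper function are assumptions made for illustration; they are not part of the rankalign evaluation code, and the scoring assumes the prompt's tokenization is a prefix of the full sequence's tokenization.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tco-ln-vlo-fsx"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

def length_normalized_logprob(prompt: str, candidate: str) -> float:
    """Average per-token log-probability of `candidate` continuing `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + candidate, return_tensors="pt").input_ids
    cand_len = full_ids.shape[1] - prompt_ids.shape[1]
    with torch.no_grad():
        logits = model(full_ids).logits
    # Shift by one position: logits at position t predict the token at t + 1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_logprobs = log_probs[torch.arange(targets.shape[0]), targets]
    return token_logprobs[-cand_len:].mean().item()

# Hypothetical prompt and candidate hypernyms, chosen for illustration only.
prompt = "A banana is a kind of"
candidates = [" fruit", " animal", " vehicle"]
ranked = sorted(candidates, key=lambda c: length_normalized_logprob(prompt, c), reverse=True)
print(ranked)  # Expected to rank " fruit" first.
```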