TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tcs-ln-fsx-lo0.1

Text generation · Model size: 2.6B · Quant: BF16 · Context length: 8k · Published: Apr 6, 2026 · Architecture: Transformer

TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tcs-ln-fsx-lo0.1 is a 2.6 billion parameter language model based on Gemma-2-2b and fine-tuned as part of the rankalign project. It is optimized for hypernym prediction, i.e., identifying hierarchical relationships between concepts (for example, recognizing that "fruit" is a hypernym of "banana"). Its training configuration includes length normalization and a self-typicality correction. The model is intended for research and evaluation in semantic relation extraction, particularly hypernymy.


Model Overview

TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tcs-ln-fsx-lo0.1 is a 2.6 billion parameter language model derived from google/gemma-2-2b. It is a fine-tuned checkpoint from the rankalign project, specifically developed for tasks involving hypernym prediction.
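
The checkpoint can presumably be loaded with the standard Hugging Face transformers causal-LM API. The sketch below assumes the usual AutoModelForCausalLM interface; the completion-style prompt is an illustrative assumption, since the exact prompt template used during training is not documented here.

```python
# Minimal loading sketch, assuming the standard transformers causal-LM API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tcs-ln-fsx-lo0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Illustrative completion-style prompt; the training prompt format is an assumption.
prompt = "A banana is a type of"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```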

Key Training Details

This model was trained on the hypernym-concat-bananas-to-dogs-double-all task for 2 epochs of fine-tuning. Notable training parameters include the following (a decoded sketch appears after the list):

  • Base Model: google/gemma-2-2b
  • Delta: 0.15
  • Typicality Correction: Self-correction mechanism
  • Length Normalization: Enabled (True)
  • Preference Loss Weight: 1
  • Force Same-X: Enabled (True)
  • Labeled-only Ratio: 0.1
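
For illustration only, the bullet list above can be read as a decoding of the checkpoint-name suffixes. The mapping below is an assumption inferred from the model name and the listed parameters, not a published configuration schema:

```python
# Hypothetical decoding of the checkpoint-name suffixes into the training
# parameters listed above; the suffix-to-parameter mapping is an assumption.
training_config = {
    "base_model": "google/gemma-2-2b",                     # gemma-2-2b
    "delta": 0.15,                                         # d0.15
    "epochs": 2,                                           # e2
    "task": "hypernym-concat-bananas-to-dogs-double-all",  # hc-b2d-dbl-all
    "typicality_correction": "self",                       # tcs
    "length_normalization": True,                          # ln
    "preference_loss_weight": 1,
    "force_same_x": True,                                  # fsx
    "labeled_only_ratio": 0.1,                             # lo0.1
}
```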

Use Cases and Evaluation

This model is primarily intended for research and evaluation in semantic relation extraction, particularly hypernym identification. The README provides Python evaluation scripts for several hypernym tasks (e.g., hypernym-bananas, hypernym-dogs, hypernym-cars), which developers can use to reproduce the evaluations and analyze the model's performance on specific hypernym datasets.
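
Since the README's scripts are not reproduced here, the following is a minimal hypothetical sketch of one plausible evaluation: ranking candidate hypernyms for a concept by length-normalized log-likelihood, mirroring the model's length-normalization setting. It reuses the model and tokenizer from the loading example above; the prompt template, scoring scheme, and candidate set are illustrative assumptions, not the README's actual scripts.

```python
# Hypothetical evaluation sketch: rank candidate hypernyms by
# length-normalized log-likelihood under the model.
import torch

def score_hypernym(model, tokenizer, concept: str, hypernym: str) -> float:
    prompt = f"A {concept} is a type of"
    full = f"{prompt} {hypernym}"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(full, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probability of each token given its preceding context.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_lp = log_probs[torch.arange(targets.size(0)), targets]
    # Keep only the hypernym tokens, conditioned on the prompt.
    hyp_lp = token_lp[prompt_ids.size(1) - 1:]
    # Length normalization: average log-probability per hypernym token.
    return (hyp_lp.sum() / hyp_lp.numel()).item()

candidates = ["fruit", "animal", "vehicle"]
scores = {h: score_hypernym(model, tokenizer, "banana", h) for h in candidates}
print(max(scores, key=scores.get))
```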