TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-fsx-sm0.1

Text generation · Model size: 2.6B · Quant: BF16 · Context length: 8k · Published: Apr 6, 2026 · Architecture: Transformer

TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-fsx-sm0.1 is a 2.6 billion parameter model based on Gemma-2-2b and fine-tuned as part of the rankalign project. It is trained specifically for hypernym prediction: identifying hierarchical 'is-a' relationships between concepts. This makes it a specialized tool for researchers and developers working on semantic hierarchy tasks.


Model Overview

This model, rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-fsx-sm0.1, is a fine-tuned checkpoint derived from the google/gemma-2-2b base model. It is part of the rankalign project, which focuses on training models for specific linguistic tasks.
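As a fine-tuned causal language model, the checkpoint can presumably be loaded with the standard Hugging Face transformers API. The sketch below is a minimal illustration, not an official usage snippet from the rankalign project; the generation settings are placeholders.

```python
# Minimal loading sketch for this checkpoint, assuming the standard
# transformers causal-LM API. Generation settings are illustrative only.
MODEL_ID = "TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-fsx-sm0.1"

def generate(prompt: str, max_new_tokens: int = 16) -> str:
    """Load the checkpoint and complete a prompt (downloads ~5 GB in BF16)."""
    # Deferred imports so the module can be inspected without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Loading with `torch_dtype="auto"` picks up the BF16 weights listed in the card metadata.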

Key Training Details

The model underwent two training epochs with a delta of 0.15. The primary training task was hypernym-concat-bananas-to-dogs-double-all, indicating a specialization in identifying hypernym relationships across various categories. Notable training parameters include a preference loss weight of 1, with NLL validator and generator weights set to 0, and force-same-x enabled. A semi-supervised ratio of 0.1 was also applied during training.
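The hyperparameters above appear to be encoded directly in the checkpoint name. The mapping below is an inferred sketch based on the details in this card (the segment-to-parameter correspondence is an assumption, not documented by the rankalign project):

```python
# Inferred decoding of the checkpoint name into training hyperparameters.
# Each mapping from a name segment (in comments) to a parameter is an assumption.
NAME = "rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-fsx-sm0.1"

TRAINING_CONFIG = {
    "base_model": "google/gemma-2-2b",
    "delta": 0.15,                 # "d0.15"
    "epochs": 2,                   # "e2"
    "task": "hypernym-concat-bananas-to-dogs-double-all",  # "hc-b2d-dbl-all"
    "preference_loss_weight": 1,
    "nll_validator_weight": 0,
    "nll_generator_weight": 0,
    "force_same_x": True,          # "fsx"
    "semi_supervised_ratio": 0.1,  # "sm0.1"
}
```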

Intended Use Cases

This model is specifically designed for tasks involving hypernym prediction and semantic hierarchy identification. It can be evaluated across various hypernym tasks such as hypernym-bananas, hypernym-dogs, hypernym-cars, and others, as demonstrated by the provided evaluation scripts. Developers and researchers working on projects requiring precise identification of 'is-a' relationships between concepts would find this model particularly useful.
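One way to query such 'is-a' relationships is with a few-shot completion prompt. The format below is purely illustrative; the actual prompt template used by the rankalign evaluation scripts may differ.

```python
# Hedged example of a hypernym ('is-a') query prompt. The few-shot template
# is an assumption for illustration, not the rankalign evaluation format.
def hypernym_prompt(concept: str) -> str:
    """Build a few-shot prompt asking the model to complete a hypernym."""
    examples = [
        ("banana", "fruit"),
        ("dog", "animal"),
    ]
    shots = "\n".join(f"A {c} is a kind of {h}." for c, h in examples)
    return f"{shots}\nA {concept} is a kind of"

prompt = hypernym_prompt("car")
# The completion (e.g. a word like "vehicle") would come from passing this
# prompt to the loaded model's generate method.
```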