TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tco-fsx

Text Generation · Concurrency Cost: 1 · Model Size: 2.6B · Quant: BF16 · Ctx Length: 8k · Published: Apr 6, 2026 · Architecture: Transformer

TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tco-fsx is a 2.6-billion-parameter language model fine-tuned from Google's Gemma-2-2b base model. It is part of the rankalign project and is optimized for hypernym detection and classification, supporting research into linguistic relationships, particularly identifying broader categories (hypernyms) for given concepts.


Model Overview

This model, TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tco-fsx, is a fine-tuned checkpoint derived from the google/gemma-2-2b base model. It is developed as part of the rankalign project, focusing on specific linguistic tasks.

Training Details

This checkpoint is version 6 of the rankalign training run and was trained for 2 epochs. Key training parameters include:

  • Base Model: google/gemma-2-2b
  • Task: hypernym-concat-bananas-to-dogs-double-all
  • Delta: 0.15
  • Typicality Correction: Online
  • Preference Loss Weight: 1
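The rankalign preference loss is not documented on this card, so the exact role of the parameters above is an assumption. As one plausible reading, `Delta: 0.15` could act as a margin in a pairwise preference objective and `Preference Loss Weight: 1` as its scaling factor; a generic margin-based sketch (hypothetical function and argument names) would look like:

```python
import math

def pairwise_preference_loss(score_pos: float, score_neg: float,
                             delta: float = 0.15, weight: float = 1.0) -> float:
    """Hypothetical margin-based preference loss.

    Penalizes cases where the preferred item's score does not exceed the
    dispreferred item's score by at least `delta`, using a smooth softplus
    hinge. This is a generic sketch, not the documented rankalign objective.
    """
    margin = score_pos - score_neg
    # softplus(delta - margin): near zero once margin comfortably exceeds delta.
    return weight * math.log1p(math.exp(delta - margin))
```

With this form the loss vanishes as the preferred score pulls far ahead of the dispreferred one and grows roughly linearly as the ranking inverts.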

Key Capabilities

This model is specifically designed and fine-tuned for:

  • Hypernym Detection: Identifying broader categories (hypernyms) for various concepts.
  • Linguistic Relationship Analysis: Researching and evaluating hierarchical semantic relationships between words.
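The prompt template the fine-tune expects is not published on this card. A sketch of querying the checkpoint for a hypernym, assuming a simple "X is a kind of" completion template (the template and helper names here are hypothetical; the generation path requires `transformers`, `torch`, and access to the checkpoint):

```python
def build_hypernym_prompt(concept: str) -> str:
    """Hypothetical completion-style prompt for hypernym elicitation.

    The actual template used during fine-tuning is not documented here.
    """
    return f"A {concept} is a kind of"

def query_hypernym(concept: str, max_new_tokens: int = 8) -> str:
    """Generate a hypernym for `concept` with the checkpoint (heavy: downloads
    the 2.6B model on first call)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    model_id = "TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tco-fsx"
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")
    ids = tok(build_hypernym_prompt(concept), return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tok.decode(out[0][ids["input_ids"].shape[1]:], skip_special_tokens=True)
```

If the model was trained on a different template, completion quality will degrade; checking a few known pairs (e.g. banana → fruit) is a cheap sanity test.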

Intended Use Cases

This model is particularly suitable for:

  • Academic Research: Exploring and validating methods for hypernym identification.
  • Linguistic Studies: Analyzing semantic hierarchies and word relationships.
  • Reproducibility: The README provides detailed evaluation scripts for various hypernym tasks (e.g., hypernym-bananas, hypernym-dogs, hypernym-cars), allowing researchers to reproduce and extend its findings.
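The referenced evaluation scripts are not reproduced on this page. A minimal exact-match accuracy harness for a hypernym task, with a pluggable `predict` callable standing in for the model (all names here are illustrative), might look like:

```python
def hypernym_accuracy(pairs, predict) -> float:
    """Score (concept, gold_hypernym) pairs against a predictor.

    `predict` maps a concept string to a predicted hypernym string; in a
    real run it would wrap a generate() call against the checkpoint.
    Comparison is case-insensitive exact match.
    """
    if not pairs:
        return 0.0
    correct = sum(
        1 for concept, gold in pairs
        if predict(concept).strip().lower() == gold.strip().lower()
    )
    return correct / len(pairs)

# Toy predictor standing in for the model.
toy = {"banana": "fruit", "dog": "animal", "car": "vehicle"}
print(hypernym_accuracy([("banana", "fruit"), ("dog", "plant")],
                        lambda c: toy.get(c, "")))  # → 0.5
```

The real scripts presumably also handle multi-token generations and per-task dataset loading; this sketch only captures the scoring step.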