TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-vlo-fsx-lo0.1

Text generation · 2.6B parameters · BF16 · 8k context · Transformer architecture · Published: Apr 6, 2026

TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-vlo-fsx-lo0.1 is a 2.6-billion-parameter model fine-tuned from the Gemma-2-2b base as part of the rankalign project. It is trained on the hypernym-concat-bananas-to-dogs-double-all task, which focuses on identifying hierarchical (hypernym) relationships between concepts. Training uses a preference loss weight of 1 and validator log-odds, making the checkpoint suitable for research and development in semantic relation extraction and knowledge-graph construction.


Model Overview

TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-vlo-fsx-lo0.1 is a specialized fine-tuned checkpoint derived from the google/gemma-2-2b base model. Developed within the rankalign project, this version (v6) targets hypernym-related tasks, particularly those involving concatenated concept pairs such as 'bananas-to-dogs'. It was trained for 2 epochs with a delta of 0.15.
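As a standard causal-LM checkpoint on the Hub, it should load through the usual transformers API. A minimal loading sketch, assuming a transformers release recent enough to include Gemma-2 support; the `load_checkpoint` helper is illustrative, not part of the rankalign codebase:

```python
MODEL_ID = "TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-vlo-fsx-lo0.1"

def load_checkpoint(model_id: str = MODEL_ID):
    """Load the tokenizer and model in bfloat16, matching the card's BF16 quant."""
    # Imported lazily so the constants above can be used without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")
    return tokenizer, model
```

The 8k context length and BF16 weights come straight from the model metadata; no extra quantization arguments should be needed to reproduce the published configuration.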

Key Training Details

  • Base Model: google/gemma-2-2b
  • Task: hypernym-concat-bananas-to-dogs-double-all
  • Epochs: 2
  • Delta: 0.15
  • Preference Loss Weight: 1
  • Validator Log-Odds: Enabled
  • Labeled-only Ratio: 0.1
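Reading the suffixes of the checkpoint name against the list above gives a compact picture of the run. The dictionary below is a hypothetical summary; the key names are illustrative and may not match the actual rankalign config schema:

```python
# Hypothetical summary of the training run; key names are illustrative,
# not the rankalign project's actual configuration schema.
RUN_CONFIG = {
    "base_model": "google/gemma-2-2b",
    "task": "hypernym-concat-bananas-to-dogs-double-all",  # hc-b2d-dbl-all
    "epochs": 2,                  # e2
    "delta": 0.15,                # d0.15
    "pref_loss_weight": 1,
    "validator_log_odds": True,   # vlo
    "labeled_only_ratio": 0.1,    # lo0.1
}
```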

Use Cases and Evaluation

This model is designed for research and evaluation in the domain of hypernym detection and semantic hierarchy understanding. The README provides detailed Python scripts for evaluating the model across various hypernym tasks, including 'hypernym-bananas', 'hypernym-dogs', 'hypernym-cars', and others. These scripts demonstrate how to assess the model's performance using zero-shot generation and few-shot discrimination with validator log-odds, making it a valuable tool for researchers exploring fine-grained semantic relationships.
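The card does not spell out how the validator log-odds are computed, but for a binary yes/no validator they reduce to a simple logit difference. A minimal sketch, assuming the validator answers with single 'yes'/'no' tokens and that the score restricts the softmax to those two tokens:

```python
import math

def log_odds_from_logits(logit_yes: float, logit_no: float) -> float:
    """Validator log-odds restricted to the {yes, no} answer tokens.

    With p_yes given by the two-way softmax, log(p_yes / (1 - p_yes))
    algebraically reduces to the raw logit difference.
    """
    return logit_yes - logit_no

def log_odds_explicit(logit_yes: float, logit_no: float) -> float:
    """Same quantity computed the long way, via the softmax probability."""
    p_yes = math.exp(logit_yes) / (math.exp(logit_yes) + math.exp(logit_no))
    return math.log(p_yes / (1.0 - p_yes))
```

Positive log-odds mean the validator leans toward accepting a candidate hypernym pair; zero means it is indifferent. Higher-magnitude scores give a natural ranking signal for few-shot discrimination.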