TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-nv1-ng1-fsx

Text Generation · Concurrency Cost: 1 · Model Size: 2.6B · Quant: BF16 · Ctx Length: 8k · Published: Apr 6, 2026 · Architecture: Transformer

TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-nv1-ng1-fsx is a 2.6-billion-parameter model based on Gemma-2-2B, fine-tuned as part of the rankalign project. It is specifically optimized for the hypernym-concat-bananas-to-dogs-double-all task, with a focus on semantic relationship identification, and is intended for specialized natural language understanding applications that require precise recognition of hierarchical concepts.


Model Overview

This model, rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-nv1-ng1-fsx, is a fine-tuned checkpoint derived from the google/gemma-2-2b base model, developed as part of the rankalign project. It features 2.6 billion parameters and a context length of 8192 tokens.
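The checkpoint can be loaded like any other Hugging Face causal language model. A minimal sketch using the `transformers` library is shown below; the `torch_dtype` and device settings are illustrative defaults, not settings documented by the model card.

```python
MODEL_ID = "TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-nv1-ng1-fsx"


def load_model():
    """Load the tokenizer and model from the Hub.

    Requires `pip install transformers torch`, Hugging Face authentication
    (Gemma-based models are gated), and a multi-GB download.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the published quantization of this checkpoint.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    return tokenizer, model
```

Generation then proceeds via the standard `model.generate(...)` API on tokenized input.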

Key Characteristics

  • Base Model: google/gemma-2-2b
  • Fine-tuning Objective: Specialized for the hypernym-concat-bananas-to-dogs-double-all task, indicating a focus on identifying hypernymic (is-a) relationships between concepts.
  • Training Details: The model was trained for 2 epochs with a delta value of 0.15. The preference-loss, NLL-validator (nv), and NLL-generator (ng) weights are all set to 1.
  • Constraint: Training enforced the force-same-x (fsx) constraint, suggesting a focus on consistent output generation within specific contexts.

Use Cases

This model is particularly suited for research and applications involving:

  • Semantic Hierarchy Understanding: Tasks that require identifying and classifying hierarchical relationships between words or concepts.
  • Specialized NLP Research: Experiments and evaluations within the rankalign framework, especially for hypernym detection.
  • Reproducibility: The README provides detailed evaluation scripts for various hypernym tasks (e.g., hypernym-bananas, hypernym-dogs, hypernym-elephants), allowing for direct replication and comparison of results.
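For hypernym detection, the model is typically queried with a short is-a probe. The exact prompt template used during rankalign training is not documented in this card, so the helper below is only an illustrative sketch of how such a probe might be phrased.

```python
def hypernym_prompt(hyponym: str, candidate: str) -> str:
    """Build a simple is-a probe for the model.

    Note: this template is a hypothetical example; consult the
    rankalign evaluation scripts for the format actually used.
    """
    return f"Q: Is a {hyponym} a kind of {candidate}?\nA:"


# Example probe in the spirit of the hypernym-bananas task:
prompt = hypernym_prompt("banana", "fruit")
```

The resulting string would be tokenized and passed to the model, with the generated continuation (or the relative likelihood of "Yes"/"No") read off as the hypernymy judgment.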