TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-ln-p0-nv1-ng1-fsx

Text Generation · Model size: 2.6B · Quantization: BF16 · Context length: 8k · Published: Apr 6, 2026 · Architecture: Transformer

The TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-ln-p0-nv1-ng1-fsx model is a 2.6 billion parameter language model based on Google's Gemma-2-2b architecture. It is a fine-tuned checkpoint from the rankalign project, optimized for hypernym-concat tasks. The model is intended for research into preference alignment and NLL validation, with a focus on hypernym relationships.


Overview

This model, rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-ln-p0-nv1-ng1-fsx, is a fine-tuned checkpoint derived from the google/gemma-2-2b base model. It is part of the rankalign project, which focuses on preference alignment and NLL validation techniques.

Training Details

The model was trained for 2 epochs with a delta of 0.15. Key training parameters include:

  • Base model: google/gemma-2-2b
  • Task: hypernym-concat-bananas-to-dogs-double-all
  • Length normalization: Enabled
  • Preference loss weight: 0
  • NLL validator weight: 1
  • NLL generator weight: 1
  • Force same-x: Enabled
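The parameters listed above are encoded as suffix tokens in the checkpoint name itself (e.g. `d0.15` for the delta, `e2` for epochs, `p0`/`nv1`/`ng1` for the loss weights). A small sketch of how that name could be decoded, assuming the flag-to-parameter mapping stated on this card; the helper and its field names are illustrative, not part of any official rankalign tooling:

```python
# Decode the hyperparameter flags embedded in a rankalign checkpoint name.
# The mapping follows the training parameters listed above; field names
# are hypothetical.

MODEL_ID = "TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-ln-p0-nv1-ng1-fsx"

def parse_run_name(name: str) -> dict:
    """Map the suffix tokens of a rankalign run name to hyperparameters."""
    tokens = name.split("/")[-1].split("-")
    params = {}
    for tok in tokens:
        if tok.startswith("d") and tok[1:].replace(".", "", 1).isdigit():
            params["delta"] = float(tok[1:])          # d0.15 -> delta 0.15
        elif tok.startswith("e") and tok[1:].isdigit():
            params["epochs"] = int(tok[1:])           # e2 -> 2 epochs
        elif tok == "ln":
            params["length_normalization"] = True     # ln -> enabled
        elif tok.startswith("nv") and tok[2:].isdigit():
            params["nll_validator_weight"] = int(tok[2:])   # nv1 -> 1
        elif tok.startswith("ng") and tok[2:].isdigit():
            params["nll_generator_weight"] = int(tok[2:])   # ng1 -> 1
        elif tok.startswith("p") and tok[1:].isdigit():
            params["preference_loss_weight"] = int(tok[1:])  # p0 -> 0
        elif tok == "fsx":
            params["force_same_x"] = True             # fsx -> enabled

    return params

print(parse_run_name(MODEL_ID))
```

Applied to this checkpoint's name, the helper recovers the same values as the list above (delta 0.15, 2 epochs, length normalization on, preference loss weight 0, NLL validator and generator weights 1, force same-x on).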

Use Cases

This model is primarily intended for research and evaluation within the rankalign project, particularly for tasks involving hypernym relationships. The project's evaluation scripts exercise it across several hypernym tasks (e.g., hypernym-bananas, hypernym-dogs, hypernym-chairs), which suggests it is useful for analyzing specific linguistic patterns and preference alignment in language models.
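Since the checkpoint is a standard Gemma-2 causal LM, it can be loaded with the usual Hugging Face Transformers API. A minimal sketch follows; the `"A {term} is a kind of"` prompt is an illustrative hypernym-style probe, not the documented format used by the rankalign evaluation scripts, which may differ:

```python
# Sketch: greedy-decoding a short hypernym-style continuation with the
# standard Transformers causal-LM API. The prompt format is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-ln-p0-nv1-ng1-fsx"

def generate_hypernym(term: str, max_new_tokens: int = 8) -> str:
    """Return the model's greedy continuation for a hypernym probe."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="bfloat16",  # matches the BF16 weights on this card
    )
    prompt = f"A {term} is a kind of"  # hypothetical probe format
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(out[0], skip_special_tokens=True)

# Usage (downloads ~2.6B weights on first call):
# print(generate_hypernym("banana"))
```

The call is left commented out because running it fetches the full checkpoint; researchers comparing across the hypernym tasks would typically batch prompts rather than reload the model per call.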