TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-nv1-ng1-vlo-fsx-sm0.1
TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-nv1-ng1-vlo-fsx-sm0.1 is a 2.6 billion parameter Gemma-2-2b model fine-tuned by TAUR-dev as part of the rankalign project. The model is optimized for hypernym generation, i.e., identifying broader categories for given concepts. Training combines a hypernym-concatenation task with a preference loss, making the model suited to specialized semantic relationship extraction.
Overview
This model, rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-nv1-ng1-vlo-fsx-sm0.1, is a fine-tuned checkpoint derived from the google/gemma-2-2b base model. Developed by TAUR-dev within the rankalign project, its primary focus is on hypernym generation, specifically trained on tasks involving the concatenation of hypernyms (e.g., "bananas to dogs").
Key Training Details
- Base Model: google/gemma-2-2b
- Task: hypernym-concat-bananas-to-dogs-double-all
- Epochs: 2
- Delta: 0.15
- Preference Loss Weight: 1
- NLL Validator/Generator Weight: 1
- Validator Log-Odds: Enabled
- Semi-supervised Ratio: 0.1
Use Cases
This model is particularly suited for research and applications requiring precise identification and generation of hypernyms. Its specialized training on concatenated hypernym tasks suggests strong performance in understanding and categorizing semantic relationships, making it valuable for knowledge graph construction, semantic search, and linguistic analysis where hierarchical concept understanding is crucial. The provided evaluation scripts demonstrate its intended use across various hypernym tasks like hypernym-bananas, hypernym-cars, and hypernym-dogs.
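As a minimal sketch of how the checkpoint might be queried, the snippet below loads it with the standard `transformers` causal-LM API and prompts it for a hypernym. The prompt template (`build_prompt`) and the helper names are assumptions for illustration; the model card does not document the exact prompt format used during training, and downloading the weights requires accepting the Gemma license.

```python
def build_prompt(term: str) -> str:
    # Hypothetical prompt template -- the exact format used in
    # rankalign training is not documented on the model card.
    return f"What is a hypernym of {term}?"


def generate_hypernym(
    term: str,
    model_id: str = "TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-nv1-ng1-vlo-fsx-sm0.1",
) -> str:
    # Standard transformers loading pattern; imports are kept local so the
    # prompt helper above can be used without the library installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    inputs = tokenizer(build_prompt(term), return_tensors="pt").to(model.device)
    # Greedy decoding keeps the short hypernym completion deterministic.
    output = model.generate(**inputs, max_new_tokens=16, do_sample=False)
    # Strip the prompt tokens and return only the generated continuation.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()
```

Calling `generate_hypernym("banana")` would then return the model's predicted broader category, assuming the prompt format above matches what the checkpoint expects.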