# TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tcs-fsx-lo0.1
TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tcs-fsx-lo0.1 is a 2.6 billion parameter language model fine-tuned from Google's Gemma-2-2b base model. Developed by TAUR-dev as part of the rankalign project, this model is optimized for the hypernym-concat-bananas-to-dogs-double-all task. It was trained with a preference loss weight of 1 and self-typicality correction, making it suitable for specialized linguistic relation extraction and classification.
## Model Overview
This model, rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tcs-fsx-lo0.1, is a fine-tuned checkpoint derived from the google/gemma-2-2b base model. It is part of the rankalign project, focusing on specific linguistic tasks.
## Training Details
The model underwent two training epochs with a delta of 0.15. Key training parameters include:
- Task: hypernym-concat-bananas-to-dogs-double-all
- Preference Loss Weight: 1
- NLL Validator/Generator Weight: 0
- Typicality Correction: Self
- Force Same-X: True
- Labeled-only Ratio: 0.1
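Several of the parameters above appear to be encoded directly in the checkpoint name (e.g. `d0.15`, `e2`, `tcs`, `fsx`, `lo0.1`). The sketch below decodes them with that assumed naming convention; the fragment-to-parameter mapping is inferred from the Training Details list, not from an official spec.

```python
import re

def parse_checkpoint_name(name: str) -> dict:
    """Decode training hyperparameters from a rankalign checkpoint name
    (assumed naming convention, for illustration only)."""
    params = {}
    # "-d<delta>-e<epochs>-" fragment, e.g. "-d0.15-e2-"
    m = re.search(r"-d([0-9.]+)-e(\d+)", name)
    if m:
        params["delta"] = float(m.group(1))
        params["epochs"] = int(m.group(2))
    # "tcs" read as typicality correction = self; "fsx" as force same-x
    params["typicality_correction"] = "self" if "-tcs-" in name else None
    params["force_same_x"] = "-fsx-" in name
    # trailing "-lo<ratio>" read as the labeled-only ratio
    m = re.search(r"-lo([0-9.]+)$", name)
    if m:
        params["labeled_only_ratio"] = float(m.group(1))
    return params

name = "TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tcs-fsx-lo0.1"
print(parse_checkpoint_name(name))
# → {'delta': 0.15, 'epochs': 2, 'typicality_correction': 'self',
#    'force_same_x': True, 'labeled_only_ratio': 0.1}
```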
## Use Cases
This model is particularly suited for research and applications involving hypernym relation extraction and classification, as indicated by its training task. The provided evaluation scripts demonstrate its application across various hypernym-related datasets, such as hypernym-bananas, hypernym-dogs, and hypernym-chairs.
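A minimal usage sketch follows, loading the checkpoint with Hugging Face `transformers` and posing a hypernymy question. The `build_prompt` template is an assumption for illustration; the exact prompt format used during fine-tuning is not documented in this card.

```python
def build_prompt(hyponym: str, hypernym: str) -> str:
    """Frame a yes/no hypernymy question (illustrative template, not the
    documented training format)."""
    return f"Is every {hyponym} a {hypernym}? Answer yes or no."

def main() -> None:
    # Heavy dependencies imported lazily; downloading the checkpoint
    # requires network access.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-hc-b2d-dbl-all-tcs-fsx-lo0.1"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer(build_prompt("banana", "fruit"), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=5)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

if __name__ == "__main__":
    main()
```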