zyznull/RankingGPT-llama2-7b

Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Dec 29, 2023 · License: MIT · Architecture: Transformer · Open weights

RankingGPT-llama2-7b is a 7-billion-parameter text ranker based on Llama 2, developed by zyznull. It is optimized to score the relevance of a document to a given query and demonstrates strong effectiveness in both in-domain and out-of-domain scenarios, making it well suited to information retrieval and search applications.


Overview

RankingGPT-llama2-7b is a 7 billion parameter text ranking model built upon the Llama 2 architecture. Developed by zyznull, this model is part of the RankingGPT series, which focuses on enhancing large language models for text ranking tasks. It is designed to evaluate the relevance of a document to a specific query, providing a score that indicates their semantic relationship.

Key Capabilities

  • Text Ranking: Excels at scoring the relevance between a query and a document.
  • In-domain and Out-of-domain Effectiveness: Demonstrates strong performance across various datasets and contexts.
  • Llama 2 Base: Leverages the robust Llama 2 architecture for its ranking capabilities.
  • Benchmarked Performance: Achieves competitive results on standard ranking benchmarks such as DL19 (76.2), DL20 (76.3), and BEIR (57.8), outperforming several MonoBERT and MonoT5 variants, and RankLLaMA.

Usage

This model can be integrated into any application that needs document-query relevance scoring. Using the transformers library, a relevance score is computed by tokenizing the query and document, passing them through the model, and aggregating the log probabilities of the relevant tokens. This makes the model suitable for search engines, recommendation systems, and other information-retrieval tasks where precise ranking is crucial.
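The scoring recipe above can be sketched as follows. This is a minimal, hedged example, not the official usage code: the prompt template (`Document: ... Query:`) and the choice of mean log probability of the query tokens given the document are assumptions about how a causal-LM ranker of this kind is typically scored, not details confirmed by this card.

```python
import torch


def sequence_log_prob(logits: torch.Tensor, target_ids: torch.Tensor) -> float:
    """Mean log-probability of target_ids under per-position next-token logits.

    logits:     (seq_len, vocab_size) scores, one row per predicted position
    target_ids: (seq_len,) token ids the model should have predicted
    """
    log_probs = torch.log_softmax(logits, dim=-1)
    # Pick out the log-probability assigned to each target token.
    token_scores = log_probs.gather(1, target_ids.unsqueeze(-1)).squeeze(-1)
    return token_scores.mean().item()


def relevance_score(model, tokenizer, query: str, document: str) -> float:
    """Score a query-document pair with a causal LM (assumed prompt format)."""
    prompt = f"Document: {document}\nQuery:"  # hypothetical template
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    query_ids = tokenizer(
        " " + query, return_tensors="pt", add_special_tokens=False
    ).input_ids
    input_ids = torch.cat([prompt_ids, query_ids], dim=-1)
    with torch.no_grad():
        logits = model(input_ids).logits[0]
    # Logits at position i predict token i+1, so the slice starting at
    # start-1 covers exactly the query tokens.
    start = prompt_ids.shape[-1]
    return sequence_log_prob(logits[start - 1 : -1], query_ids[0])


# Usage (downloads roughly 14 GB of weights):
# from transformers import AutoTokenizer, AutoModelForCausalLM
# tokenizer = AutoTokenizer.from_pretrained("zyznull/RankingGPT-llama2-7b")
# model = AutoModelForCausalLM.from_pretrained(
#     "zyznull/RankingGPT-llama2-7b", torch_dtype=torch.float16
# )
# model.eval()
# score = relevance_score(
#     model, tokenizer,
#     "what is llama 2",
#     "Llama 2 is a family of large language models released by Meta.",
# )  # higher (less negative) score = more relevant
```

A higher (less negative) score indicates a better query-document match; documents can then be sorted by this score to produce a ranking.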