jhu-clsp/FollowIR-7B
FollowIR-7B is a 7 billion parameter instruction-tuned language model developed by jhu-clsp, based on Mistral-7B-Instruct-v0.2. Fine-tuned on the FollowIR dataset, it specializes in reranking for information retrieval tasks. This model excels at following human-written instructions for determining document relevance, outperforming other retrieval models in instruction adherence.
FollowIR-7B: Instruction-Tuned for Retrieval Reranking
FollowIR-7B is a 7 billion parameter language model developed by jhu-clsp, specifically fine-tuned for information retrieval reranking. Built upon the Mistral-7B-Instruct-v0.2 architecture, it is instruction-tuned on the FollowIR dataset.
Key Capabilities
- Instruction Following: Understands and executes human-written instructions for relevance judgments in retrieval tasks more faithfully than standard retrieval models.
- Retrieval Reranking: Optimized to assess the relevance of a document to a given query, making it ideal for improving search result quality.
- Contextual Relevance: Utilizes a 4096-token context length to evaluate query-document pairs effectively.
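The reranking workflow these capabilities describe is: format each query-document pair as a relevance-judgment prompt, score it with the model, and sort documents by score. A minimal sketch is below; the prompt template and the `toy_score` stand-in are illustrative assumptions, not the official FollowIR-7B format (in practice the score would come from the model's probability of an affirmative answer; consult the model card and paper for the exact template).

```python
# Hypothetical sketch of reranking with an instruction-tuned relevance judge.
# The prompt wording below is an assumption for illustration only.

def build_prompt(query: str, document: str, instruction: str = "") -> str:
    """Format a query-document pair as a relevance-judgment instruction."""
    task = f"{query} {instruction}".strip()
    return (
        "[INST] Determine whether the document is relevant to the query. "
        "Answer with one word, true or false.\n\n"
        f"Query: {task}\n"
        f"Document: {document}\n"
        "Relevant (true/false): [/INST]"
    )

def rerank(query, documents, score_fn):
    """Sort documents by descending relevance score.

    `score_fn(prompt)` should return a higher value when the model is more
    likely to judge the document relevant -- e.g. the probability assigned
    to an affirmative answer by FollowIR-7B.
    """
    scored = [(score_fn(build_prompt(query, d)), d) for d in documents]
    return [d for s, d in sorted(scored, key=lambda x: x[0], reverse=True)]

def toy_score(prompt: str) -> float:
    """Toy word-overlap scorer, a stand-in so the sketch runs without
    downloading the 7B checkpoint. Replace with a real model call."""
    lines = prompt.splitlines()
    query_line = next(l for l in lines if l.startswith("Query:"))
    doc_line = next(l for l in lines if l.startswith("Document:"))
    q = set(query_line.lower().split()[1:])
    return len(q & set(doc_line.lower().split()))

docs = ["a paper on neural retrieval reranking",
        "a recipe for sourdough bread"]
print(rerank("neural reranking", docs, toy_score))
```

Swapping `toy_score` for a function that queries FollowIR-7B (e.g. via `transformers`) turns this into a working reranker; the sort itself is model-agnostic.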
Training and Differentiation
FollowIR-7B was fine-tuned using LLaMA-Factory, transforming retrieval data into an instruction-following format. This approach allows the model to interpret nuanced instructions, a capability where it reportedly outperforms other retrieval models. For more technical details, refer to the associated research paper.
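The transformation described above can be pictured as mapping a retrieval judgment (query, instruction, document, relevance label) into an instruction-following training record. The sketch below uses the Alpaca-style `instruction`/`input`/`output` JSON layout that LLaMA-Factory accepts; the field contents and the helper name are illustrative assumptions, not the actual FollowIR training data.

```python
import json

# Hypothetical sketch: convert one retrieval judgment into an
# Alpaca-style instruction-tuning record for LLaMA-Factory.
# Prompt wording and example values are assumptions for illustration.

def to_instruction_example(query: str, instruction: str,
                           document: str, relevant: bool) -> dict:
    """Map a (query, instruction, document, label) tuple to one record."""
    return {
        "instruction": ("Determine whether the document is relevant to "
                        "the query. Answer true or false."),
        "input": f"Query: {query} {instruction}\nDocument: {document}",
        "output": "true" if relevant else "false",
    }

record = to_instruction_example(
    query="climate policy",
    instruction="Only official government reports are relevant.",
    document="An EPA report on national emissions targets.",
    relevant=True,
)
print(json.dumps(record, indent=2))
```

A file of such records is what an instruction-tuning framework consumes; the instruction text is where nuanced, human-written relevance criteria enter the training signal.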