Overview
castorini/rank_vicuna_7b_v1_noda_fp16 is a 7-billion-parameter auto-regressive language model from Castorini, fine-tuned for listwise document reranking. It is built on lmsys/vicuna-7b-v1.5 (itself derived from the Llama 2 base model) through supervised instruction fine-tuning. As the name indicates, this variant was trained without data augmentation (noda), and its weights are distributed in FP16.
Key Capabilities
- Ranking Tasks: Primarily intended for research on the ranking capabilities of large language models, in particular listwise reranking of retrieved passages.
- Information Retrieval Research: Designed to explore the intersection of LLMs and information retrieval systems.
- Llama 2 Foundation: Benefits from the robust architecture and pre-training of the Llama 2 model family.
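In listwise reranking, the model receives a query plus a numbered list of candidate passages and emits an ordering such as `[2] > [3] > [1]`. A minimal sketch of building such a prompt and parsing the response follows; the prompt wording and helper names here are illustrative assumptions, not the exact template used to train RankVicuna.

```python
import re


def build_listwise_prompt(query, passages):
    # Illustrative listwise prompt; the exact RankVicuna template differs.
    lines = [f"I will provide you with {len(passages)} passages, "
             "each indicated by a numerical identifier []."]
    for i, passage in enumerate(passages, 1):
        lines.append(f"[{i}] {passage}")
    lines.append(f"Search Query: {query}")
    lines.append("Rank the passages by relevance to the query. "
                 "Answer with identifiers in descending order, e.g., [2] > [1].")
    return "\n".join(lines)


def parse_ranking(response, num_passages):
    # Extract identifiers from output like "[2] > [3] > [1]", dropping
    # duplicates and out-of-range ids; append any omitted identifiers
    # in their original order so the result is always a full permutation.
    ids = [int(m) for m in re.findall(r"\[(\d+)\]", response)]
    seen, order = set(), []
    for i in ids:
        if 1 <= i <= num_passages and i not in seen:
            seen.add(i)
            order.append(i)
    order.extend(i for i in range(1, num_passages + 1) if i not in seen)
    return order
```

The defensive parsing matters in practice: generative rerankers occasionally repeat or skip identifiers, and falling back to the original retrieval order for missing ids keeps the output usable.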
Good For
- Researchers: Ideal for academics and scientists working on natural language processing and information retrieval.
- Hobbyists: Suitable for enthusiasts exploring advanced LLM applications in ranking and retrieval.
- Experimental Setups: Provides a specific variant (no data augmentation, FP16) for comparative studies of training choices and model performance. Evaluation details can be found in the associated paper, with current evaluations on TREC DL19 and DL20.
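Candidate lists from a first-stage retriever are usually longer than what fits in a single listwise prompt, so such rerankers are typically applied with a sliding window that moves from the back of the list toward the front, letting strong candidates bubble upward through overlapping windows. A hedged sketch of that control flow, with illustrative window and stride values and a pluggable `rerank_fn` standing in for a model call:

```python
def sliding_window_rerank(items, rerank_fn, window=4, stride=2):
    """Rerank a long candidate list with overlapping windows.

    rerank_fn takes a sub-list and returns it reordered (in a real
    pipeline, this would prompt the reranker model). Windows are
    processed back-to-front so top candidates can migrate to the head
    of the list across overlapping windows.
    """
    items = list(items)
    start = max(0, len(items) - window)
    while True:
        items[start:start + window] = rerank_fn(items[start:start + window])
        if start == 0:
            break
        start = max(0, start - stride)
    return items
```

With stride smaller than the window, the globally best candidate is guaranteed to reach the front, which is what matters for precision-oriented metrics on DL19/DL20-style evaluations.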