castorini/rank_vicuna_7b_v1
Text Generation · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Sep 27, 2023 · License: llama2 · Architecture: Transformer · Open Weights

castorini/rank_vicuna_7b_v1 is a 7-billion-parameter auto-regressive language model developed by Castorini, based on the Llama 2 transformer architecture with a 4096-token context length. It was fine-tuned from lmsys/vicuna-7b-v1.5 with supervised instruction fine-tuning on augmented ranking data. The model is intended for research at the intersection of large language models and information retrieval, in particular listwise passage reranking: given a query and a set of numbered candidate passages, it outputs an ordering of the passage identifiers by relevance.
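To make the listwise reranking setup concrete, the sketch below builds a RankGPT-style prompt (numbered passages plus the query) and parses a ranking string such as `[2] > [1] > [3]` back into a permutation. The prompt wording here is illustrative only; the exact template RankVicuna was trained on is defined by its fine-tuning data, so treat the strings as assumptions rather than the model's official format.

```python
import re

def build_listwise_prompt(query, passages):
    # Assumed RankGPT-style template (illustrative, not the model's exact one):
    # each passage is tagged with a numeric identifier the model can cite.
    lines = [
        f"I will provide you with {len(passages)} passages, each indicated "
        f"by a numerical identifier []. Rank the passages based on their "
        f"relevance to the search query: {query}"
    ]
    for i, passage in enumerate(passages, 1):
        lines.append(f"[{i}] {passage}")
    lines.append(f"Search Query: {query}")
    lines.append("Rank the passages above. Output format: [2] > [1] > [3].")
    return "\n".join(lines)

def parse_ranking(output, num_passages):
    # Extract identifiers in the order the model emitted them, dropping
    # duplicates and out-of-range values, then append any omitted
    # identifiers in their original order so the result is a permutation.
    seen = []
    for match in re.findall(r"\[(\d+)\]", output):
        idx = int(match)
        if 1 <= idx <= num_passages and idx not in seen:
            seen.append(idx)
    seen.extend(i for i in range(1, num_passages + 1) if i not in seen)
    return seen
```

In practice the prompt would be sent to the model (e.g. via Hugging Face `transformers` text generation) and `parse_ranking` applied to the generated text; the repair step in `parse_ranking` matters because LLM rerankers occasionally emit malformed or incomplete rankings.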
