RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Paper: arXiv:2309.15088
RankVicuna is a zero-shot listwise document reranker built by fine-tuning Vicuna, a chat assistant derived from Llama 2 that was trained on user-shared conversations collected from ShareGPT.
This specific model is the 7B variant, trained with data augmentation and converted to FP16.
The primary use of RankVicuna is research at the intersection of large language models and retrieval. The primary intended users of the model are researchers and hobbyists in natural language processing and information retrieval.
RankVicuna is fine-tuned from lmsys/vicuna-7b-v1.5 with supervised instruction fine-tuning.
RankVicuna is currently evaluated on DL19/DL20 (the TREC 2019 and 2020 Deep Learning Tracks). See more details in our paper.
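To illustrate how a listwise reranker of this kind is driven, the sketch below builds a reranking prompt that tags each candidate passage with a numerical identifier and parses the model's permutation-style output (e.g., `[2] > [1] > [3]`) back into an ordering. The exact prompt wording and the function names here are illustrative approximations of the format described in the paper, not the official template shipped with the model.

```python
import re


def build_rerank_prompt(query, passages):
    """Build a listwise reranking prompt: passages are tagged [1]..[n]
    and the model is asked to output a ranked permutation.
    (Approximate wording, not the official RankVicuna template.)"""
    n = len(passages)
    lines = [
        f"I will provide you with {n} passages, each indicated by a "
        f"numerical identifier []. Rank the passages based on their "
        f"relevance to the search query: {query}.",
        "",
    ]
    for i, passage in enumerate(passages, start=1):
        lines.append(f"[{i}] {passage}")
    lines.append("")
    lines.append(
        f"Search Query: {query}. Rank the {n} passages above. "
        "The output format should be [] > [], e.g., [4] > [2]. "
        "Only respond with the ranking results."
    )
    return "\n".join(lines)


def parse_ranking(output, n):
    """Parse a permutation like '[2] > [1] > [3]' into 0-based indices,
    dropping out-of-range or duplicate identifiers and appending any
    passages the model omitted, in their original order."""
    seen, order = set(), []
    for token in re.findall(r"\[(\d+)\]", output):
        idx = int(token) - 1
        if 0 <= idx < n and idx not in seen:
            seen.add(idx)
            order.append(idx)
    order += [i for i in range(n) if i not in seen]
    return order
```

In practice, the prompt string would be fed to the FP16 model through a generation call, and `parse_ranking` applied to the decoded output; the defensive handling of duplicates and omissions matters because a generative reranker can emit malformed permutations.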