Salesforce/SweRankLLM-Small
Task: Text Generation
Concurrency Cost: 1
Model Size: 7.6B
Quantization: FP8
Context Length: 32k
Published: Jun 24, 2025
License: cc-by-nc-4.0
Architecture: Transformer
Open Weights

SweRankLLM-Small is a 7.6 billion parameter language model developed by Salesforce, based on Qwen2.5-Coder-7B-Instruct, with a 131,072-token context length. It is fine-tuned specifically for listwise code reranking, which significantly improves result quality for software issue localization. The model is designed to be paired with a performant code retriever such as SweRankEmbed: the retriever produces an initial candidate set, and SweRankLLM-Small reranks it to improve search and ranking in software development contexts.
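The model's exact prompt template is not reproduced here; the sketch below is a hypothetical illustration of how a listwise reranking input might be assembled, where retrieved candidate code locations are numbered into a single prompt so the model can rank them jointly (the template and function name are assumptions, not the official format).

```python
def build_listwise_prompt(issue: str, candidates: list[str]) -> str:
    """Assemble a listwise reranking prompt: number each retrieved
    candidate so the model can compare and rank all of them at once.
    Illustrative template only, not SweRankLLM's official format."""
    lines = [f"Issue: {issue}", "", "Candidate code locations:"]
    for i, snippet in enumerate(candidates, start=1):
        lines.append(f"[{i}] {snippet}")
    lines.append("")
    lines.append("Rank the candidates from most to least relevant to the issue.")
    return "\n".join(lines)


# Example: rank two candidate functions against a bug report.
prompt = build_listwise_prompt(
    "NullPointerException when saving a record",
    ["def save(self): ...", "def load(self): ..."],
)
print(prompt)
```

The listwise formulation is what distinguishes this setup from pointwise rerankers: the model sees all candidates together and emits a relative ordering, rather than scoring each snippet in isolation.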
