neulab/SP3F-7B
Text generation · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Jan 15, 2026 · License: MIT · Architecture: Transformer · Open weights · Concurrency cost: 1

SP3F-7B is a 7.6 billion parameter multilingual model developed by neulab, built on the Qwen2.5-7B base and trained with Self-Play with Privileged Pairwise Feedback. This training method improves performance across multilingual reasoning and mathematical benchmarks, with particularly strong results on MGSM and MT Math100. With a context length of 32768 tokens, SP3F-7B is suited to long, complex multilingual tasks that require robust reasoning.
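The card does not include a usage snippet. Qwen2.5-derived models typically use the ChatML conversation format; whether SP3F-7B keeps that template is an assumption, so check the released tokenizer config before relying on it. A minimal prompt-construction sketch under that assumption:

```python
# Sketch of prompting SP3F-7B. ChatML format is assumed (typical for
# Qwen2.5 derivatives); verify against the model's tokenizer config.

def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts into a ChatML string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # A trailing assistant header cues the model to generate its reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful multilingual assistant."},
    {"role": "user", "content": "Résous : 17 × 24 = ?"},
])

# With the weights downloaded, generation would go through the usual
# Hugging Face pipeline, using the identifier from this card:
#   from transformers import pipeline
#   pipe = pipeline("text-generation", model="neulab/SP3F-7B")
#   print(pipe(prompt, max_new_tokens=256)[0]["generated_text"])
```

In practice, `tokenizer.apply_chat_template` from the released tokenizer is the safer route, since it applies whatever template the authors actually shipped.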
