POLARIS-Project/Polaris-7B-Preview
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Jun 12, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

POLARIS-Project/Polaris-7B-Preview is a 7.6 billion parameter language model from the POLARIS project, post-trained with a reinforcement learning (RL) recipe aimed at advanced reasoning. Using open-source data and academic-scale resources, the recipe delivers significant gains on challenging reasoning benchmarks, surpassing some commercial systems on specific evaluations. This 7B preview is built on DeepSeek-R1-Distill-Qwen-7B, while the companion 4B release in the same family is built on Qwen3-4B; both are optimized for complex problem-solving.
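Below is a minimal sketch of how the model could be queried locally with the Hugging Face transformers library. It assumes the weights are published on the Hugging Face Hub under the model ID shown above and that a GPU with enough memory for a 7.6B model in bf16 is available; the example prompt and sampling settings are illustrative, not recommendations from the model authors.

```python
# Minimal sketch: running Polaris-7B-Preview locally with transformers.
# Assumes the weights are hosted on the Hugging Face Hub under the ID below.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "POLARIS-Project/Polaris-7B-Preview"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # FP8 shown above refers to the hosted deployment; bf16 is a safe local default
    device_map="auto",
)

# Reasoning-tuned chat models are normally prompted through their chat template.
messages = [{"role": "user", "content": "What is the sum of the first 50 odd numbers?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The 32k context length listed above bounds the combined prompt and generated tokens, so long multi-step reasoning prompts should budget `max_new_tokens` accordingly.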
