AI-ISL/DeepSeek-R1-Distill-Qwen-7B-SP
Text Generation
Concurrency Cost: 1
Model Size: 7.6B
Quant: FP8
Ctx Length: 32k
Published: May 26, 2025
License: apache-2.0
Architecture: Transformer
Open Weights

AI-ISL/DeepSeek-R1-Distill-Qwen-7B-SP is a 7.6-billion-parameter language model: a SAFEPATH-aligned version of DeepSeek-R1-Distill-Qwen-7B. It is fine-tuned with prefix-only safety priming, which aims to improve safety without degrading reasoning performance. The model is intended for research into safety alignment in Large Reasoning Models and into robust reasoning under adversarial settings.
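Since this is a fine-tune of DeepSeek-R1-Distill-Qwen-7B, it should load through the standard Hugging Face `transformers` causal-LM API; the snippet below is a minimal usage sketch under that assumption (the repository id is taken from the title above, and the prompt and generation parameters are illustrative):

```python
# Minimal usage sketch. Assumes the checkpoint is hosted on the Hugging
# Face Hub under the id shown in the title and is compatible with the
# standard transformers causal-LM API, like the base
# DeepSeek-R1-Distill-Qwen-7B it was fine-tuned from.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AI-ISL/DeepSeek-R1-Distill-Qwen-7B-SP"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the dtype stored in the checkpoint
    device_map="auto",    # place layers on available GPU(s)/CPU
)

# R1-style models emit a chain of thought before the final answer,
# so allow a generous new-token budget.
messages = [{"role": "user", "content": "Explain why the sky is blue."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that the 32k context window applies to prompt plus generated reasoning combined, so long chains of thought count against it.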
