UCSC-VLAA/STAR1-R1-Distill-14B
Text generation · Concurrency cost: 1 · Model size: 14.8B · Quant: FP8 · Context length: 32k · Published: Apr 3, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

UCSC-VLAA/STAR1-R1-Distill-14B is a 14.8 billion parameter language model developed by UCSC-VLAA and fine-tuned on the STAR-1 safety dataset. Built on a Qwen-based architecture, it is specifically designed to improve safety alignment in large reasoning models: it delivers substantial gains on safety benchmarks while largely preserving reasoning performance, making it suitable for applications that require robust safety in AI outputs.
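The weights are open, so the model can be run locally. Below is a minimal inference sketch assuming the Hugging Face `transformers` library (with `accelerate` for automatic device placement) and enough GPU memory for a 14.8B model; the prompt is purely illustrative.

```python
# Minimal local inference sketch for UCSC-VLAA/STAR1-R1-Distill-14B.
# Assumes `transformers` and `accelerate` are installed and a GPU with
# sufficient memory is available; adjust dtype/device settings as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "UCSC-VLAA/STAR1-R1-Distill-14B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # spread layers across available devices
)

# Illustrative prompt; the model uses a standard chat template.
messages = [{"role": "user", "content": "Explain why prompt injection is a safety risk."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```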
