UCSC-VLAA/STAR1-R1-Distill-32B
Text Generation · Open Weights
Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k
Published: Apr 3, 2025 · License: apache-2.0 · Architecture: Transformer

UCSC-VLAA/STAR1-R1-Distill-32B is a 32.8-billion-parameter language model from UCSC-VLAA, created by fine-tuning DeepSeek's R1-Distill-Qwen-32B on the STAR-1 safety dataset. The fine-tuning targets safety alignment in large reasoning models: it is designed to strengthen safe behavior with minimal impact on the base model's core reasoning capabilities, making it suitable for applications that require both robust reasoning and safe outputs.
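Below is a minimal local-inference sketch using the Hugging Face transformers library. It assumes the weights are available under the model ID shown above and that the model uses the standard chat template inherited from its R1-Distill-Qwen base; the prompt and sampling settings are illustrative only.

```python
# Illustrative sketch: load the model and run a single chat-style generation.
# Assumes the Hugging Face model ID matches the page title; hyperparameters
# are example values, not recommendations from the model authors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "UCSC-VLAA/STAR1-R1-Distill-32B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the page lists FP8 serving; bf16 is a safe local default
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain why seatbelts save lives."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models typically emit a chain of thought before the final answer,
# so allow a generous generation budget.
outputs = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that a 32.8B model in bf16 needs roughly 65 GB of accelerator memory, so multi-GPU sharding (handled here by `device_map="auto"`) or a quantized variant is usually required in practice.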
