UCSC-VLAA/STAR1-R1-Distill-8B
Text generation · Open weights · Cold
Concurrency cost: 1
Model size: 8B
Quantization: FP8
Context length: 32k
Published: Apr 3, 2025
License: apache-2.0
Architecture: Transformer

UCSC-VLAA/STAR1-R1-Distill-8B is an 8-billion-parameter Llama-based language model developed by UCSC-VLAA and fine-tuned on the STAR-1 dataset. The model is designed to strengthen safety alignment in large reasoning models while preserving their reasoning capabilities. STAR-1 integrates and refines data from multiple sources to provide policy-grounded reasoning samples, improving safety performance across benchmarks. The model is suited to applications that require safer, better-aligned reasoning outputs.
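As a minimal usage sketch, the checkpoint can presumably be loaded with the Hugging Face `transformers` library under the repo id above. The `build_prompt` helper and the single-turn prompt format are illustrative assumptions, not the model's documented chat template:

```python
MODEL_ID = "UCSC-VLAA/STAR1-R1-Distill-8B"


def build_prompt(user_message: str) -> str:
    # Hypothetical single-turn prompt; the actual chat template may differ,
    # so prefer tokenizer.apply_chat_template() when available.
    return f"User: {user_message}\nAssistant:"


if __name__ == "__main__":
    # Heavy imports and the model download happen only when run as a script.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    inputs = tokenizer(
        build_prompt("Briefly explain safety alignment."),
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For an 8B model at FP8, a single modern GPU with roughly 10 GB or more of memory should suffice; `device_map="auto"` lets `transformers` place the weights automatically.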
