yufeng1/R1-Distill-Qwen-7B-type6-e5-alpha0_625
Text Generation | Model Size: 7.6B | Quant: FP8 | Ctx Length: 32k | Concurrency Cost: 1 | Architecture: Transformer | Published: Jan 20, 2026

yufeng1/R1-Distill-Qwen-7B-type6-e5-alpha0_625 is a 7.6-billion-parameter language model. Its name indicates a distilled variant of the Qwen 7B architecture ('R1-Distill-Qwen-7B'). The model card provides no details about its training data, capabilities, or intended use, so it is best treated as a general-purpose language model or as a base for further fine-tuning.
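As a rough practical note, the listed parameter count (7.6B) and quantization (FP8) together determine the approximate weight-memory footprint: FP8 stores one byte per parameter, versus two for BF16/FP16 and four for FP32. A minimal sketch of that arithmetic (the helper function and constant names are illustrative, not from the model card):

```python
# Rough weight-memory estimate for a 7.6B-parameter model at
# different precisions. Bytes per parameter: FP32=4, BF16=2, FP8=1.
# Activations, KV cache, and framework overhead are not included.

def weight_memory_gb(num_params: float, bytes_per_param: int) -> float:
    """Return weight storage in decimal gigabytes."""
    return num_params * bytes_per_param / 1e9

PARAMS = 7.6e9  # parameter count from the model listing

for name, bpp in [("FP32", 4), ("BF16", 2), ("FP8", 1)]:
    print(f"{name}: {weight_memory_gb(PARAMS, bpp):.1f} GB")
# FP32: 30.4 GB
# BF16: 15.2 GB
# FP8: 7.6 GB
```

So at the listed FP8 quantization, the weights alone occupy roughly 7.6 GB, about half of what the same model would need in BF16.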
