richardyoung/Deepseek-R1-Distill-Qwen-32b-uncensored
Text Generation

Concurrency Cost: 2
Model Size: 32.8B
Quant: FP8
Ctx Length: 32k
Published: Nov 19, 2025
License: deepseek
Architecture: Transformer
The richardyoung/Deepseek-R1-Distill-Qwen-32b-uncensored model is a 32-billion-parameter, decoder-only transformer based on the Qwen2 architecture, developed by richardyoung. It is an uncensored variant of deepseek-ai's DeepSeek-R1-Distill-Qwen-32B, with a 32,768-token context length. The model retains strong chain-of-thought reasoning while omitting safety refusals, making it suitable for research that requires unrestricted step-by-step analysis.
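As a quick orientation, here is a minimal sketch of loading and prompting the model with the Hugging Face `transformers` library. The repository ID comes from this card; the dtype, device settings, and prompt are illustrative assumptions (FP8 serving, as listed above, would require a dedicated inference runtime rather than this plain-PyTorch path).

```python
# Minimal sketch: load the model and run a chat-style generation.
# Assumes the repo ID below is available on the Hugging Face Hub and
# that bf16 weights fit on your hardware; adjust as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "richardyoung/Deepseek-R1-Distill-Qwen-32b-uncensored"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16, not the FP8 quant listed above
    device_map="auto",
)

# DeepSeek-R1 distills emit chain-of-thought reasoning before the final answer.
messages = [{"role": "user", "content": "Walk through 17 * 23 step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```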