DavidAU/Dolphin-Mistral-GLM-4.7-Flash-24B-Venice-Edition-Thinking-Uncensored
Text Generation · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Ctx Length: 32k · Published: Jan 30, 2026 · License: apache-2.0 · Architecture: Transformer

DavidAU's Dolphin-Mistral-GLM-4.7-Flash-24B-Venice-Edition-Thinking-Uncensored is a 24-billion-parameter language model fine-tuned from Dolphin-Mistral-24B-Venice-Edition, with a 32,768-token context length. It integrates GLM 4.7 Flash thinking/reasoning capabilities, converting the base model from an "instruct" paradigm to a "thinking" one. The model is designed to be completely uncensored: it provides detailed, precise reasoning for all content generation tasks without imposed ethical or safety guidelines, making it suitable for unrestricted applications.