DavidAU's Dolphin-Mistral-GLM-4.7-Flash-24B-Venice-Edition-Thinking-Uncensored is a 24-billion-parameter language model fine-tuned from Dolphin-Mistral-24B-Venice-Edition, with a 32,768-token context length. It integrates GLM 4.7 Flash thinking/reasoning capabilities, converting the base model from an "instruct" to a "thinking" paradigm. The model is designed to be fully uncensored: it provides detailed, precise reasoning for all content-generation tasks without imposed ethical or safety guidelines, making it suitable for a wide range of unrestricted applications.