dphn/Dolphin-Mistral-24B-Venice-Edition
Text Generation · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Ctx Length: 32k · Published: Jun 12, 2025 · License: apache-2.0 · Architecture: Transformer · 0.5K · Open Weights · Warm
Dolphin Mistral 24B Venice Edition is a 24-billion-parameter Mistral-based language model developed collaboratively by dphn and Venice.ai, with a 32,768-token context length. The model is specifically designed to be uncensored and highly steerable, giving users full control over system prompts and alignment. It aims to be a general-purpose AI tool that prioritizes user control and data privacy, making it suitable for applications that require custom ethical guidelines and consistent model behavior.
Popular Sampler Settings
The three parameter combinations most used by Featherless users for this model. Each configuration is built from the following sampler parameters:
- temperature: scales the randomness of token sampling; lower values are more deterministic
- top_p: nucleus sampling; restricts sampling to the smallest token set whose cumulative probability exceeds p
- top_k: restricts sampling to the k most likely tokens
- frequency_penalty: penalizes tokens in proportion to how often they have already appeared
- presence_penalty: penalizes tokens that have appeared at all, encouraging new topics
- repetition_penalty: multiplicatively discounts previously generated tokens
- min_p: discards tokens whose probability falls below a fraction of the top token's probability
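As a sketch of how these sampler parameters are typically passed to the model, the snippet below builds a chat-completions payload in the OpenAI-compatible style that Featherless exposes. The parameter values, prompts, and the `build_request` helper are illustrative assumptions, not the actual top user configurations.

```python
import json

# Hypothetical sampler configuration combining the parameters listed above.
# These values are illustrative defaults, not real Featherless user data.
sampler_config = {
    "temperature": 0.8,
    "top_p": 0.95,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.05,
    "min_p": 0.05,
}

def build_request(system_prompt: str, user_prompt: str, config: dict) -> dict:
    """Assemble a chat-completions payload for an OpenAI-compatible endpoint."""
    return {
        "model": "dphn/Dolphin-Mistral-24B-Venice-Edition",
        "messages": [
            # The system prompt is fully user-controlled on this model.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        **config,  # merge the sampler parameters into the request body
    }

payload = build_request(
    "You are a helpful assistant.",
    "Summarize the Mistral architecture in one sentence.",
    sampler_config,
)
print(json.dumps(payload, indent=2))
```

Because the system prompt travels with every request, swapping it out is all that is needed to change the model's alignment and persona for a given application.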