mrrob5011/Dolphin-Mistral-24B-Venice-Edition

Text Generation · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Ctx Length: 32k · Published: Apr 13, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

Dolphin Mistral 24B Venice Edition is a 24 billion parameter Mistral-based language model developed collaboratively by mrrob5011 (Dolphin) and Venice.ai. It is fine-tuned specifically to be uncensored and highly steerable, giving users full control over system prompts and alignment. It targets general-purpose applications where custom ethical guidelines and data privacy are paramount, offering an alternative to heavily aligned commercial models.


Dolphin Mistral 24B Venice Edition: Uncensored and Steerable

Dolphin Mistral 24B Venice Edition is a 24 billion parameter language model, a collaborative effort between Dolphin (mrrob5011) and Venice.ai. Its primary distinction lies in its uncensored nature and user steerability, developed to provide an alternative to heavily aligned commercial LLMs.

Key Capabilities & Differentiators

  • Uncensored Responses: Follows instructions without imposing built-in ethical, legal, or moral reservations; any such constraints are instead defined by the user.
  • Full System Prompt Control: Users dictate the model's tone, alignment, and behavior via the system prompt, ensuring responses align with specific application needs.
  • Data Privacy: Because the weights are open, whoever operates the model controls the data pipeline, so user queries can stay entirely within the operator's infrastructure rather than passing through a commercial provider.
  • General Purpose: Aims to be a versatile model suitable for a wide range of applications, similar to models like ChatGPT or Claude, but with user-defined alignment.
  • Mistral Architecture: Built upon the Mistral 24B base, maintaining its default chat template for consistency.
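Because the model keeps the Mistral base's default chat template, prompts should be formatted in that style. The sketch below is a simplified approximation for illustration only; the authoritative template ships inside the model's tokenizer config, and in practice you should prefer `tokenizer.apply_chat_template` from Hugging Face `transformers`:

```python
# Simplified approximation of a Mistral instruct-style prompt. The real
# template (whitespace, special tokens, system-prompt handling) is defined
# by the tokenizer config and may differ from this sketch.

def format_prompt(system: str, user: str) -> str:
    # One common convention: fold the system prompt into the first user
    # turn, wrapped in Mistral's [INST] ... [/INST] markers.
    return f"<s>[INST] {system}\n\n{user} [/INST]"

prompt = format_prompt(
    "You answer only with valid JSON.",
    "List three primary colors.",
)
print(prompt)
```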

Use Cases & Recommendations

This model is particularly well-suited for developers and businesses who require:

  • Custom Alignment: Applications where a one-size-fits-all alignment is insufficient, and specific, user-defined ethical or behavioral guidelines are necessary.
  • Data Sovereignty: Scenarios where query data privacy and control are critical.
  • Flexibility: Use cases demanding a highly steerable model that adapts precisely to the system prompt's instructions.

For best results, use a relatively low temperature (e.g., temperature=0.15) and always set an explicit system prompt to define the model's behavior.
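These recommendations can be sketched as a request to an OpenAI-compatible chat endpoint. The base URL and model identifier below are placeholders, not values from the model card; substitute whatever your serving stack (e.g., vLLM or a llama.cpp server) actually exposes:

```python
# Hedged sketch: querying the model through an assumed OpenAI-compatible
# chat-completions endpoint with the settings recommended above.
import json
import urllib.request


def build_request(system_prompt: str, user_message: str) -> dict:
    """Assemble a chat-completion payload with the recommended settings."""
    return {
        "model": "dolphin-mistral-24b-venice-edition",  # placeholder id
        "temperature": 0.15,  # low temperature, per the recommendation
        "messages": [
            # The system prompt is where tone and alignment are defined.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }


def send(payload: dict, base_url: str = "http://localhost:8000/v1") -> str:
    """POST the payload to the (assumed) OpenAI-compatible endpoint."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


payload = build_request(
    "You are a terse assistant for an internal legal-review tool.",
    "Summarize the clause below in one sentence.",
)
print(json.dumps(payload, indent=2))
```

Because the system prompt fully steers the model, the payload above is where application-specific alignment rules belong.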