askalgore/Dolphin-Mistral-24B-Venice-Edition-heretic-2

24B parameters · FP8 · 32768-token context
Released Nov 21, 2025 · License: apache-2.0 · Hosted on Hugging Face

Dolphin-Mistral-24B-Venice-Edition-heretic-2 Overview

This model is a 24-billion-parameter, Mistral-based language model: a "decensored" version of the original dphn/Dolphin-Mistral-24B-Venice-Edition, created with the Heretic v1.0.1 tool. It retains the base model's 32768-token context length.

Key Differentiators & Capabilities

  • Reduced Refusals: Demonstrates a significantly lower refusal rate (6/100) than its base model (10/100), indicating more permissive response generation.
  • User-Controlled Alignment: Unlike many commercial models, Dolphin-Mistral-24B-Venice-Edition-heretic-2 emphasizes user control over alignment, system prompts, and data. It does not impose its own ethics or guidelines, allowing users to define the model's behavior.
  • General Purpose: Aims to be a versatile, general-purpose model suitable for a wide range of applications, similar to models like ChatGPT or Claude, but with enhanced steerability.
  • Stable System Prompting: Because users control the weights and system prompt, the model's behavior is not disrupted by a third party changing system prompts or model versions, offering stability for businesses integrating AI into their products.

Performance & Usage

Performance metrics show a KL divergence of 0.01 from the original model, indicating minimal deviation from its statistical properties while achieving the reduced refusal rate. The model uses Mistral's default chat template, and a relatively low temperature (e.g., temperature=0.15) is recommended for best output. It is compatible with common serving frameworks, including Ollama, LM Studio, Hugging Face Transformers, vLLM, SGLang, and TGI.
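To make the KL-divergence figure concrete: it measures how far the decensored model's next-token probability distribution drifts from the original's, averaged over evaluation prompts. A minimal sketch of the metric (the probability values below are illustrative, not taken from the actual evaluation):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) between two discrete next-token distributions.

    p, q: sequences of probabilities over the same token set.
    Terms where p_i == 0 contribute nothing by convention.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token distributions over a 3-token vocabulary:
# p from the original model, q from the decensored variant.
p = [0.70, 0.20, 0.10]
q = [0.68, 0.21, 0.11]

drift = kl_divergence(p, q)
print(f"KL(P || Q) = {drift:.4f}")  # a small value indicates minimal drift
```

A reported value as low as 0.01 means the decensored model's output distribution remains close to the original almost everywhere except on the refusal behavior itself.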
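Since the card recommends a low temperature and lists OpenAI-compatible servers such as vLLM among supported frameworks, a request might look like the sketch below. The endpoint, system prompt, and max_tokens value are illustrative assumptions; only the model ID and temperature=0.15 come from the card.

```python
import json

# Hypothetical chat-completions payload for an OpenAI-compatible server
# (e.g. one started with vLLM pointing at this model). The system prompt
# is user-defined: the model follows it rather than imposing its own rules.
payload = {
    "model": "askalgore/Dolphin-Mistral-24B-Venice-Edition-heretic-2",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Mistral architecture."},
    ],
    "temperature": 0.15,  # low temperature recommended by the model card
    "max_tokens": 512,    # illustrative cap, not from the card
}
print(json.dumps(payload, indent=2))
```

The same payload shape works unchanged against other OpenAI-compatible backends listed above, such as SGLang or a local Ollama server, with only the base URL differing.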