askalgore/Dolphin-Mistral-24B-Venice-Edition-heretic-2 is a 24-billion-parameter Mistral-based language model, derived from dphn/Dolphin-Mistral-24B-Venice-Edition and further modified with Heretic v1.0.1. It is engineered for reduced refusal rates and enhanced steerability, letting users define alignment and behavior through the system prompt without imposed ethical or safety guidelines. With a 32768-token context length, it is suited to general-purpose applications where user control over content generation and data privacy are paramount.
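As a sketch of how a user-defined system prompt might be supplied to this model in practice, the snippet below assembles a standard chat-completion payload. The helper function, its parameters, and the choice of an OpenAI-style message format are illustrative assumptions, not part of this listing; only the model identifier and context length come from the description above.

```python
# Sketch: building a chat request for this model with a user-defined
# system prompt. The function name and payload shape are assumptions
# chosen to match the common OpenAI-style chat format.

MODEL_ID = "askalgore/Dolphin-Mistral-24B-Venice-Edition-heretic-2"
MAX_CONTEXT = 32768  # token context length stated in the description

def build_request(system_prompt: str, user_message: str,
                  max_tokens: int = 512) -> dict:
    """Assemble a chat-completion payload. Per the description, the
    system prompt is fully user-defined (alignment included)."""
    if max_tokens > MAX_CONTEXT:
        raise ValueError("max_tokens exceeds the model's context length")
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": max_tokens,
    }

payload = build_request(
    system_prompt="You are a terse SQL tutor.",
    user_message="Explain a LEFT JOIN in one sentence.",
)
print(payload["model"])
# → askalgore/Dolphin-Mistral-24B-Venice-Edition-heretic-2
```

The payload would then be sent to whatever inference endpoint serves the model; that step is omitted here since no endpoint is named in the listing.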