askalgore/Dolphin-Mistral-24B-Venice-Edition-heretic-2
Text generation · Published: Nov 21, 2025 · License: apache-2.0 · Open weights

Model size: 24B
Architecture: Transformer
Quantization: FP8
Context length: 32k
Concurrency cost: 2
askalgore/Dolphin-Mistral-24B-Venice-Edition-heretic-2 is a 24-billion-parameter Mistral-based language model, derived from dphn/Dolphin-Mistral-24B-Venice-Edition and further modified with Heretic v1.0.1. It is engineered for reduced refusal rates and greater steerability, letting users define alignment via system prompts without imposed ethical or safety guidelines. With a 32,768-token context length, it targets general-purpose applications where user control over content generation and data privacy are paramount.
Popular Sampler Settings
The top 3 parameter combinations used by Featherless users for this model (values were not captured in this snapshot).
temperature: –
top_p: –
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
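As a minimal sketch of how the sampler settings above would typically be supplied, here is a request payload in the OpenAI-compatible chat-completions format that many hosted inference services (including Featherless) accept. The specific values are illustrative placeholders, not settings published for this model, and `max_tokens` is only constrained by the 32,768-token context window noted above.

```python
import json

# Assumed OpenAI-compatible chat-completions payload.
# All sampler values below are placeholders, not the (unshown) popular configs.
payload = {
    "model": "askalgore/Dolphin-Mistral-24B-Venice-Edition-heretic-2",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,          # randomness of sampling
    "top_p": 0.9,                # nucleus sampling cutoff
    "top_k": 40,                 # restrict to the k most likely tokens
    "frequency_penalty": 0.0,    # penalize tokens by how often they appeared
    "presence_penalty": 0.0,     # penalize tokens that appeared at all
    "repetition_penalty": 1.1,   # multiplicative repetition penalty
    "min_p": 0.05,               # drop tokens below this relative probability
    "max_tokens": 512,           # must fit within the 32,768-token context
}

print(json.dumps(payload, indent=2))
```

This payload would be POSTed to the provider's `/v1/chat/completions` endpoint with an API key; some of the parameters (e.g. `top_k`, `repetition_penalty`, `min_p`) are extensions beyond the base OpenAI schema and are only honored by backends that support them.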