dphn/Dolphin3.0-R1-Mistral-24B
Text Generation · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Ctx Length: 32k · Published: Feb 6, 2025 · Architecture: Transformer

Dolphin3.0-R1-Mistral-24B is a 24-billion-parameter instruct-tuned model from the Dolphin 3.0 series, developed by Eric Hartford, Ben Gitter, BlouseJury, and Cognitive Computations. Built on the Mistral architecture with a 32,768-token context length, it is designed as a general-purpose local model that excels at reasoning, coding, math, and agentic tasks. This R1 version is additionally trained on 800k reasoning traces to strengthen its reasoning capabilities, and aims to provide a steerable alternative to proprietary models.


Popular Sampler Settings

The top three parameter combinations used by Featherless users for this model. Each configuration sets the following samplers:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
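As a rough sketch of how these samplers are applied in practice, the snippet below builds a chat-completion request payload that sets all seven parameters. The endpoint path and the exact set of supported sampler fields are assumptions (Featherless exposes an OpenAI-compatible API, but consult its documentation for which fields are honored), and the values shown are illustrative placeholders, not the actual popular configurations.

```python
import json

# Illustrative chat-completion payload; values are placeholders, not the
# real user-popular configs. Field support depends on the serving API.
payload = {
    "model": "dphn/Dolphin3.0-R1-Mistral-24B",
    "messages": [
        {"role": "user", "content": "Explain mutexes in one paragraph."}
    ],
    # Sampler settings corresponding to the parameters listed above.
    "temperature": 0.7,         # randomness of token selection
    "top_p": 0.9,               # nucleus sampling cutoff
    "top_k": 40,                # restrict to the k most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by occurrence count
    "presence_penalty": 0.0,    # penalize tokens that appeared at all
    "repetition_penalty": 1.05, # multiplicative repeat discouragement
    "min_p": 0.05,              # drop tokens below this relative probability
}

print(json.dumps(payload, indent=2))
```

To send the request, this payload would be POSTed as JSON to the provider's chat-completions endpoint with an API key in the `Authorization` header.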