Doctor-Shotgun/MS3.2-24B-Magnum-Diamond

Text Generation · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Ctx Length: 32k · Published: Jun 22, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

Doctor-Shotgun/MS3.2-24B-Magnum-Diamond is a 24 billion parameter language model fine-tuned from Mistral-Small-3.2-24B-Instruct-2506, designed for creative writing and roleplay, with a 32,768-token context length. The model aims to emulate the prose style of the Claude 3 Sonnet/Opus models at a smaller, locally runnable scale. It was trained with an rsLoRA adapter and modified loss masking, making it well suited to generating engaging narrative content.


Overview

Doctor-Shotgun/MS3.2-24B-Magnum-Diamond is a 24 billion parameter model, fine-tuned from Mistral-Small-3.2-24B-Instruct-2506. It is an rsLoRA adapter-based model, developed with the goal of providing a smaller, more accessible alternative to larger models while maintaining high-quality creative output. The model's training incorporated pre-tokenization and custom loss masking, using the same data mix as Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha.

Key Capabilities

  • Creative Writing & Roleplay: Specifically optimized to generate high-quality prose for creative writing and roleplay scenarios, aiming to replicate the style of Claude 3 Sonnet/Opus models.
  • Flexible Prompting: Designed to perform competently with or without prepending character names and prefill, offering flexibility in usage.
  • Mistral v7 Tekken Prompt Format: Follows the Mistral v7 Tekken prompt format, with optional prefill recommended for roleplay settings.
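To make the prompt format concrete, here is a minimal sketch in Python of assembling a Mistral v7 Tekken style prompt with an optional prefill. The helper function is hypothetical, and the exact control tokens (`[SYSTEM_PROMPT]`, `[INST]`, etc.) are an assumption based on the general v7 Tekken template; consult the model's tokenizer config for the authoritative format.

```python
def build_v7_tekken_prompt(system: str, turns: list[tuple[str, str]],
                           user_msg: str, prefill: str = "") -> str:
    """Assemble a Mistral v7 Tekken style prompt string (format assumed).

    `turns` holds (user, assistant) pairs from earlier exchanges;
    `prefill` optionally seeds the start of the model's next reply.
    """
    prompt = f"<s>[SYSTEM_PROMPT]{system}[/SYSTEM_PROMPT]"
    for user, assistant in turns:
        prompt += f"[INST]{user}[/INST]{assistant}</s>"
    # Leave the final assistant turn open; the prefill steers the reply.
    prompt += f"[INST]{user_msg}[/INST]{prefill}"
    return prompt

prompt = build_v7_tekken_prompt(
    system="You are Mira, a sardonic space-station engineer.",
    turns=[("Hello!", "*wipes grease off her hands* What do you want?")],
    user_msg="Can you fix the airlock?",
    prefill="*sighs and grabs a toolkit*",
)
```

As the card notes, the prefill is optional; the model is designed to behave competently whether or not it is supplied.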

When to Use This Model

  • Creative Storytelling: Ideal for generating engaging narratives, character dialogues, and descriptive scenes.
  • Roleplay Applications: Suited for interactive roleplaying, providing nuanced and contextually rich responses.
  • Local Deployment: Its 24B parameter size makes it more consumer-friendly for local deployment compared to larger models, while still aiming for high-quality output.
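For serving scenarios, a request to the model would typically go through an OpenAI-compatible chat completions endpoint. The sketch below only constructs the JSON request body; the endpoint shape and the use of the Hugging Face repo id as the `model` field are assumptions about a typical deployment, not a documented Featherless API contract.

```python
import json

# Hypothetical request body for an OpenAI-compatible chat completions
# endpoint; field values are illustrative.
payload = {
    "model": "Doctor-Shotgun/MS3.2-24B-Magnum-Diamond",
    "messages": [
        {"role": "system", "content": "You are a dramatic storyteller."},
        {"role": "user", "content": "Open a scene aboard a derelict starship."},
    ],
    "max_tokens": 512,
}
body = json.dumps(payload)
```

In practice this body would be POSTed to the serving endpoint with an API key; the chat template on the server side maps the `messages` list onto the Mistral v7 Tekken format described above.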

Popular Sampler Settings

These are the sampler parameters most commonly tuned by Featherless users for this model:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
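A sampler configuration covering these parameters might look like the following sketch. The values shown are placeholders chosen for illustration only; they are not the actual popular Featherless settings, which are not reproduced here.

```python
# Illustrative sampler configuration; every value below is a placeholder,
# not a recommended or observed setting for this model.
sampler_settings = {
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 0,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.05,
    "min_p": 0.05,
}
```

Such a dictionary can be merged into a generation request or passed to a local inference frontend that accepts these parameter names.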