🌌 Overview
Hamanasu-Magnum-QwQ-32B is a 32.8 billion parameter language model developed by Delta-Vector. It is a fine-tune of the Delta-Vector/Hamanasu-QwQ-V2-RP base model, specifically engineered to emulate the prose style found in Claude models, such as Opus and Sonnet. This model is particularly well-suited for traditional roleplay (RP) applications.
Key Training Details
- Base Model: Delta-Vector/Hamanasu-QwQ-V2-RP
- Hardware: Trained on 8x H100 GPUs
- Epochs: 2
- Context Length: Trained with a sequence length of 32768 tokens (per the Axolotl training config).
💰 Prompting and Usage
The model uses ChatML formatting for prompts. A recommended sampler preset is provided, specifying temperature and min_p values along with a detailed System_Prompt that guides the model to maintain character persona, drive the narrative, and include descriptive detail while avoiding repetitive or overly embellished output. Quantized versions, including EXL2, are available for efficient deployment.
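As a quick illustration of the ChatML structure the model expects, here is a minimal sketch that renders a message list into a ChatML prompt. The system prompt shown is a placeholder, not the recommended System_Prompt from the preset, and the helper name `to_chatml` is invented for this example.

```python
def to_chatml(messages):
    """Render a list of {role, content} dicts in ChatML format,
    ending with an open assistant turn so the model continues it."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # Leave the assistant header open for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

# Placeholder persona; substitute the recommended System_Prompt in practice.
messages = [
    {"role": "system", "content": "You are Aria, a sardonic ship AI. Stay in character."},
    {"role": "user", "content": "Aria, status report."},
]
prompt = to_chatml(messages)
print(prompt)
```

When using a chat-capable inference library, the equivalent formatting is usually handled by the tokenizer's chat template, so manual assembly like this is mainly useful for debugging or raw-completion endpoints.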