ParasiticRogue/Magnum-Instruct-DPO-12B
Text Generation | Concurrency Cost: 1 | Model Size: 12B | Quant: FP8 | Ctx Length: 32k | Published: Aug 16, 2024 | License: apache-2.0 | Architecture: Transformer | Open Weights

Magnum-Instruct-DPO-12B by ParasiticRogue is a 12-billion-parameter instruction-tuned causal language model, built from a 50/50 merge of Mistral-Nemo variants that received additional DPO/ORPO training. Designed for conversational AI, it excels at persona adherence, detailed environmental description, and dynamic narrative progression, and is particularly suited to uncensored, immersive chat applications that follow its expected system prompt structure. A 32,768-token context length makes it suitable for extended interactions and complex scenarios.
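Since the weights are open, here is a minimal sketch of loading the model locally with Hugging Face transformers, assuming the repository ships a chat template. The dtype and device settings are ordinary choices for a 12B model, and the persona-style system prompt is a placeholder rather than the specific prompt structure the card refers to.

```python
# Minimal sketch: loading the open weights locally with transformers.
# Assumes a GPU with enough memory for a 12B model; settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ParasiticRogue/Magnum-Instruct-DPO-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Placeholder persona prompt; consult the repository for the documented
# system prompt structure the card mentions.
messages = [
    {"role": "system", "content": "You are Mira, a sardonic starship engineer."},
    {"role": "user", "content": "Describe the engine room as we walk in."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```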


Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model. Each config sets values for the following sampler parameters: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p; the sketch after this paragraph shows how they map onto an API request.
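As a sketch of how these sampler parameters might be passed in practice, the snippet below sends them through an OpenAI-compatible chat completion request. The endpoint URL, environment variable name, and every parameter value are illustrative assumptions, not one of the popular configs from this page; top_k, repetition_penalty, and min_p sit outside the core OpenAI schema, so they are forwarded via extra_body, which the OpenAI Python client merges into the request JSON, and whether the server honors them depends on the backend.

```python
# Sketch: passing the listed sampler parameters in a chat completion request.
# All values below are placeholders, not a config taken from this page.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key=os.environ["FEATHERLESS_API_KEY"],  # hypothetical env var name
)

response = client.chat.completions.create(
    model="ParasiticRogue/Magnum-Instruct-DPO-12B",
    messages=[{"role": "user", "content": "Continue the scene."}],
    # Core OpenAI sampling fields:
    temperature=0.8,
    top_p=0.95,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    max_tokens=512,
    # Fields outside the OpenAI schema go straight into the request body;
    # backend support for these varies.
    extra_body={
        "top_k": 40,
        "repetition_penalty": 1.05,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```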