dchh88/Midnight-Miqu-70B-v1.5

TEXT GENERATION · Concurrency Cost: 4 · Model Size: 69B · Quant: FP8 · Ctx Length: 32k · Published: Apr 23, 2026 · License: other · Architecture: Transformer

dchh88/Midnight-Miqu-70B-v1.5 is a 69 billion parameter uncensored language model, created by dchh88, resulting from a DARE Linear merge of sophosympatheia/Midnight-Miqu-70B-v1.0 and migtissera/Tess-70B-v1.6. Designed for roleplaying and storytelling, this model supports a 32K context length and offers enhanced performance over its v1.0 predecessor in specific tests without sacrificing writing quality. It is optimized for creative applications, particularly with recommended sampler settings like Quadratic Sampling and Min-P.

Midnight-Miqu-70B-v1.5 Overview

Midnight-Miqu-70B-v1.5 is a 69 billion parameter uncensored language model, developed by dchh88, produced via a DARE Linear merge. It combines the strengths of sophosympatheia/Midnight-Miqu-70B-v1.0 and migtissera/Tess-70B-v1.6 on a base of 152334H/miqu-1-70b-sf.
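A DARE Linear merge like the one described above is typically expressed as a mergekit configuration. The sketch below is illustrative only: the merge weights, density, and dtype are assumptions, not the recipe actually used for this model.

```python
# Sketch of a mergekit-style YAML config for a DARE Linear merge.
# Weights below are illustrative assumptions, not the actual recipe.
merge_config = """\
merge_method: dare_linear
base_model: 152334H/miqu-1-70b-sf
models:
  - model: sophosympatheia/Midnight-Miqu-70B-v1.0
    parameters:
      weight: 0.5
  - model: migtissera/Tess-70B-v1.6
    parameters:
      weight: 0.5
dtype: float16
"""

with open("merge.yaml", "w") as f:
    f.write(merge_config)
# A merge would then be run with, e.g.: mergekit-yaml merge.yaml ./output-model
```

DARE Linear randomly drops and rescales fine-tuned weight deltas before linearly combining them, which is why each source model gets its own `weight` entry relative to the shared base.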

Key Capabilities & Features

  • Specialized for Creative Tasks: Primarily designed for roleplaying and storytelling, offering strong performance in these areas.
  • Uncensored: Provides an uncensored output, with users responsible for its application.
  • Extended Context Window: Supports a 32,768 token context length, similar to Miqu models, with alpha_rope set to 1.
  • Improved Performance: Demonstrates enhanced performance in specific internal tests compared to Midnight Miqu v1.0, while maintaining writing quality.
  • Sampler Optimization: Recommends specific sampler settings like Quadratic Sampling (smoothing factor ~0.2) and Min-P for optimal creative output.
  • Prompting Flexibility: Highly responsive to system prompts and benefits from few-shot prompting for improved results.
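The recommended samplers can be set in a completion request. A minimal sketch follows; the field names `min_p`, `smoothing_factor`, and `truncation_length` match backends such as text-generation-webui, and the specific values (other than the ~0.2 smoothing factor noted above) are illustrative assumptions:

```python
# Sketch of a completion-request payload using the recommended samplers.
# Field names follow text-generation-webui conventions and may differ on
# other servers; min_p and temperature values are illustrative.
payload = {
    "model": "dchh88/Midnight-Miqu-70B-v1.5",
    "prompt": "Write the opening scene of a gothic mystery.",
    "max_tokens": 512,
    "temperature": 1.0,          # let Min-P and smoothing shape the output
    "min_p": 0.05,               # illustrative Min-P cutoff; tune to taste
    "smoothing_factor": 0.2,     # Quadratic Sampling, ~0.2 as recommended
    "truncation_length": 32768,  # full 32K context window
}
```

With Quadratic Sampling and Min-P active, temperature is often left near 1.0 so those samplers do the filtering.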

Intended Use Cases

  • Roleplaying: Excels in generating dynamic and engaging roleplay scenarios.
  • Storytelling: Suitable for creative writing and narrative generation.
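Since the model responds strongly to system prompts and benefits from few-shot prompting, a roleplay request might pair a character-defining system message with one example exchange. A minimal sketch, using the common chat-message convention; all prompt text is illustrative, not an official template:

```python
# Sketch: system prompt plus a one-shot example to steer roleplay style.
# Character, names, and all prompt text are illustrative assumptions.
messages = [
    {"role": "system",
     "content": "You are Mira, a sardonic starship engineer. Stay in "
                "character and write vivid third-person prose."},
    # One-shot exchange demonstrating the desired tone:
    {"role": "user", "content": "The reactor alarm starts blaring."},
    {"role": "assistant",
     "content": "Mira didn't flinch. She set down her coffee, sighed, and "
                "reached for the wrench she kept for exactly this."},
    # The actual turn for the model to continue:
    {"role": "user", "content": "A hull breach opens on deck three."},
]
```

The example turn anchors length and voice, which tends to matter more for creative output than for instruction-following tasks.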

Important Considerations

  • Licensing: This model is derived from a leaked version of a Mistral model and is only suitable for personal use. Users assume all legal risks associated with its use.
  • Evaluation: Achieves an average score of 25.22 on the Open LLM Leaderboard, with specific scores including 61.18 on IFEval (0-Shot) and 31.39 on MMLU-PRO (5-shot).