reaperdoesntknow/DualMind

Text Generation · Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Ctx Length: 32k · Published: Mar 29, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

DualMind by Convergent Intelligence LLC is a 1.7 billion parameter Qwen3ForCausalLM model with a 40,960 token context length, designed for dual-mental-modality reasoning. It employs role tokens (`<explore>`, `<examine>`, `<response>`) to simulate internal self-critique and refinement, recreating multi-model collision dynamics within a single architecture. This model targets logical inference and complex problem-solving by structurally enabling self-correction and dialectical reasoning.


DualMind: Dual-Mental-Modality Reasoning

DualMind, developed by Convergent Intelligence LLC, is a 1.7 billion parameter model based on the Qwen3ForCausalLM architecture, featuring a substantial 40,960 token context length. Its core innovation is dual-mental-modality reasoning, where a single model uses shared weights but differentiates its internal processes via role tokens:

  • <explore>: For unconstrained reasoning, speculation, and problem derivation.
  • <examine>: For adversarial self-critique, error detection, and verification of the explore output.
  • <response>: For synthesizing a clean, refined final answer from the internal dialogue.
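The three role tokens above imply a staged prompt: the user's question is followed by an opening `<explore>` token, and the model continues through `<examine>` to `<response>`. A minimal sketch of that prompt assembly is below; the exact template (whitespace, ordering, any system preamble) is an assumption, so check the model's tokenizer configuration for the authoritative format.

```python
# Hypothetical prompt staging for DualMind's role tokens.
# The role-token names come from the model card; the surrounding
# layout (newline placement, no system prompt) is an assumption.

ROLE_TOKENS = ("<explore>", "<examine>", "<response>")

def build_prompt(question: str) -> str:
    """Wrap a user question so generation begins in the <explore> phase.

    The model is then expected to emit <examine> and <response>
    sections on its own as it continues the text.
    """
    return f"{question}\n<explore>"

prompt = build_prompt("Is the square root of 2 irrational?")
```

This string would then be tokenized and passed to the model's `generate` call as with any causal LM.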

This unique approach allows the model to structurally self-correct, mimicking the benefits of multi-model collision arrays within a single architecture. It provides a mechanism for the model to make mistakes and then critically evaluate them, leading to more robust and refined outputs.

Key Capabilities & Features

  • Self-Correction Mechanism: Explicitly designed for internal critique and refinement, enhancing reasoning quality.
  • Role-Conditioned Generation: Utilizes specific tokens (<explore>, <examine>, <response>) to guide distinct cognitive phases.
  • Logical Inference: Trained on logical inference problems (e.g., KK04/LogicInference_OA) and further refined with Claude Opus 4.6 reasoning samples.
  • Qwen3ForCausalLM Base: Built upon the Disctil-Qwen3-1.7B model, an uncensored and DISC-refined variant of Qwen3.
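Because all three phases are emitted in a single generation, downstream code usually wants only the text after the final `<response>` token, discarding the internal explore/examine dialogue. A minimal extraction sketch follows; it assumes the role tokens appear literally in the decoded output, and the handling of a closing `</response>` tag is an assumption (the model card only documents the opening tokens).

```python
import re

def extract_response(generated: str) -> str:
    """Return the text after the last <response> token, dropping the
    internal <explore>/<examine> dialogue. Assumes role tokens survive
    decoding as literal strings."""
    parts = generated.rsplit("<response>", 1)
    if len(parts) == 2:
        # Strip a trailing closing tag if the model emits one (assumption).
        return re.sub(r"</response>\s*$", "", parts[1]).strip()
    return generated.strip()  # no role tokens found: fall back to raw output

sample = (
    "<explore>Suppose sqrt(2) = p/q in lowest terms..."
    "<examine>Check: does the parity argument hold? Yes."
    "<response>Yes, the square root of 2 is irrational."
)
print(extract_response(sample))  # → Yes, the square root of 2 is irrational.
```

Using `rsplit` rather than `split` keeps the extraction robust if the model revisits the `<response>` phase more than once.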

Good For

  • Complex Problem Solving: Ideal for tasks requiring iterative reasoning, self-correction, and robust verification.
  • Logical and Mathematical Proofs: Its structured explore-examine-response loop is well-suited for deriving and validating logical arguments.
  • Research and Development: Offers a novel approach to AI reasoning, potentially useful for exploring advanced cognitive architectures.