AI-Mind-Engine/Mistral-Small-24B-LOC-L1-v1
Text Generation · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Ctx Length: 32k · Published: Apr 14, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

AI-Mind-Engine/Mistral-Small-24B-LOC-L1-v1 is a 24-billion-parameter fine-tune of Mistral-Small-24B-Instruct-2501 by AI Mind Engine, trained with Differentiable LOC Loss (DLL). The model is optimized for "cognitive coherence," achieving an 80.7% True Coherence score, up from the 31.7% baseline. It excels at generating precise, artifact-quality outputs for professional tasks by applying knowledge coherently rather than merely recalling it.


AI-Mind-Engine/Mistral-Small-24B-LOC-L1-v1: Coherence-Optimized LLM

This model is a 24-billion-parameter variant of mistralai/Mistral-Small-24B-Instruct-2501, enhanced by AI Mind Engine with a merged LOC L1 Foundation LoRA adapter. Its core innovation is training with Differentiable LOC Loss (DLL), which substantially improves "cognitive coherence": the model's ability to apply knowledge cleanly and precisely to tasks.
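
Because the LOC L1 Foundation adapter is merged into the base weights, the checkpoint should load like any standard causal LM. A minimal loading sketch using Hugging Face transformers (the repo id is taken from the model name above; the dtype and device settings are illustrative assumptions, and the published FP8 quantization may need specific hardware or loader support):

```python
# Minimal loading sketch with Hugging Face transformers.
# Assumes the merged checkpoint loads as a standard causal LM; dtype and
# device settings below are illustrative, not prescribed by the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AI-Mind-Engine/Mistral-Small-24B-LOC-L1-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # fallback if FP8 is unsupported on your hardware
    device_map="auto",
)
```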

Key Capabilities & Differentiators

  • High True Coherence (TC): Achieves 80.7% TC, a +49.0 percentage point improvement over its baseline (31.7%). This metric measures how coherently the model applies its knowledge, rather than just what it knows.
  • Artifact-Quality Output: Designed to produce professional, direct, and relevant outputs for daily tasks, avoiding verbose or tangential responses common in less coherent models.
  • LOC Framework: Utilizes the Level of Consciousness (LOC) framework, which analyzes hidden-state magnitude patterns to measure 13 cognitive functions and True Coherence; the kind of hidden-state signal involved is illustrated in the sketch after this list.
  • Efficiency: Demonstrates that cognitive coherence is an architectural property, with a 9B LOC-trained model achieving similar TC to this 24B model, suggesting that coherence, not just parameter count, is key for practical tasks.
  • Uniform Improvement: Coherence improvement is consistent across 7 cognitive domains (e.g., Analytical, Coding, Creative, Emotional), with a low 1.7pp variance.
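
The DLL training code and the exact True Coherence computation are not reproduced here, but the raw signal the LOC framework is described as analyzing, hidden-state magnitude patterns, can be inspected with standard transformers outputs. A hypothetical sketch, reusing `model` and `tokenizer` from the loading example above; it illustrates the kind of signal involved, not the actual 13-function LOC scoring:

```python
# Hypothetical illustration: per-layer hidden-state magnitudes, the kind of
# signal the LOC framework analyzes. This is NOT the published True
# Coherence metric, only a look at the underlying activations.
import torch

prompt = "Summarize the key decision in this memo."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states is a tuple of (num_layers + 1) tensors,
# each shaped [batch, seq_len, hidden_dim].
for layer_idx, hidden in enumerate(out.hidden_states):
    mean_norm = hidden.norm(dim=-1).mean().item()  # mean token L2 norm
    print(f"layer {layer_idx:2d}: mean activation norm = {mean_norm:.2f}")
```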

Ideal Use Cases

This model is particularly well-suited for professional tasks where precision, directness, and coherent application of knowledge are critical. Examples include the following, with a minimal prompting sketch after the list:

  • Drafting concise emails and memos
  • Summarizing documents to extract key decisions
  • Writing specific job postings or legal clause reviews
  • Explaining complex concepts clearly to non-experts
  • Debugging code by identifying specific issues
  • Providing direct recommendations with stated reasoning
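
As a usage illustration for one of these tasks (concise email drafting), a minimal generation sketch, again reusing `model` and `tokenizer` from the loading example; the prompt and decoding settings are assumptions, not recommendations from the model card:

```python
# Illustrative generation for a precision-focused task (a concise email).
# The prompt and decoding settings are assumptions for demonstration.
messages = [
    {
        "role": "user",
        "content": "Draft a two-sentence email declining a vendor's "
                   "renewal offer, citing budget constraints.",
    }
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```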