ConicCat/MistralSmallV3R

Text generation · Concurrency cost: 2 · Model size: 24B · Quant: FP8 · Context length: 32k · Published: Mar 17, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

ConicCat/MistralSmallV3R is a 24-billion-parameter language model developed by ConicCat, built on the Arcee Blitz V3 Distill base model. It supports a 32,768-token context length and is tuned for contextual and emotional reasoning, showing strong resistance to poor prompting and producing high-quality prose. The model is designed as a versatile all-rounder that handles a wide variety of reasoning tasks, including math, coding, and roleplay, while remaining usable with 12 GB of VRAM when quantized.


ConicCat/MistralSmallV3R: A Well-Rounded Reasoning Model

ConicCat/MistralSmallV3R is a 24-billion-parameter language model developed by ConicCat, designed as a versatile all-rounder with a focus on advanced reasoning. Built on the Arcee Blitz V3 Distill base model and trained on a blend of the LimaRP-R1 and OpenThoughts datasets, it aims for balanced performance across a wide range of tasks.

Key Capabilities

  • Contextual & Emotional Reasoning: Excels at understanding and responding to nuanced emotional and contextual cues, making it highly personable.
  • Resistance to Poor Prompting: Interprets user intent reliably even from less-than-ideal prompts, weighing what the user is likely asking for during its reasoning process.
  • High-Quality Prose: Produces superior prose quality compared to many mid-range reasoning models, avoiding overly formal or 'try-hard' tones.
  • Versatile All-Rounder: Capable of generalizing its reasoning across diverse tasks including mathematics, coding, and roleplay scenarios.
  • Efficient Resource Usage: Supports up to 32,768 tokens of context and remains usable with only 12GB of VRAM when quantized to IQ3_M or IQ3_S.
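The 12 GB VRAM figure can be sanity-checked with a back-of-the-envelope calculation. The sketch below assumes the commonly cited llama.cpp effective bit rates of roughly 3.66 bits/weight for IQ3_M and 3.44 for IQ3_S; these averages are an assumption, not values stated on this model card:

```python
# Rough weight-memory estimate for a 24B-parameter model under IQ3 quantization.
# Bits-per-weight figures are approximate llama.cpp averages (assumption).
PARAMS = 24e9
BPW = {"IQ3_M": 3.66, "IQ3_S": 3.44}

for quant, bits in BPW.items():
    weight_gib = PARAMS * bits / 8 / 2**30  # bits -> bytes -> GiB
    print(f"{quant}: ~{weight_gib:.1f} GiB for weights")
```

Both variants land around 9.5–10.5 GiB for the weights alone, which is consistent with the 12 GB claim but leaves only limited headroom for the KV cache and activations; running near the full 32k context on a 12 GB card may still require a reduced context window or partial CPU offload.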

Good For

  • Applications requiring nuanced contextual and emotional understanding.
  • Scenarios where robustness to varied prompting is crucial.
  • Generating natural and engaging prose in responses.
  • General-purpose reasoning tasks across different domains like coding, math, and roleplay.
  • Users seeking a well-rounded model that balances reasoning power with a personable output style.