anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF
Text Generation · Open Weights
Concurrency Cost: 2
Model Size: 24B
Quant: FP8
Ctx Length: 32k
Published: Mar 17, 2025
License: apache-2.0
Architecture: Transformer

anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF is a 24-billion-parameter instruction-tuned language model based on the Mistral architecture, with a 32,768-token context window. It is designed for general-purpose conversational AI and instruction-following tasks, targeting applications that require natural language understanding and generation.
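
Since the "-HF" suffix indicates Hugging Face format, a minimal sketch of loading and querying the model with the transformers library follows. It assumes the weights are downloadable under this repository ID and that your hardware can host a 24B checkpoint; the prompt text is illustrative only.

```python
# A minimal sketch, assuming the weights are available under this repo ID
# on Hugging Face and that enough GPU memory exists for a 24B model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # spread layers across available devices
)

# Format the conversation with the tokenizer's built-in chat template.
messages = [
    {"role": "user", "content": "Summarize the Apache-2.0 license in one sentence."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For serving rather than local experimentation, the same repository ID would typically be passed to an inference engine such as vLLM, which can apply FP8 quantization at load time to match the listing above.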
