RedHatAI/Mistral-Small-24B-Instruct-2501
TEXT GENERATION · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Ctx Length: 32k · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

RedHatAI/Mistral-Small-24B-Instruct-2501 is a 24 billion parameter instruction-tuned causal language model developed by Mistral AI. The model is optimized for agentic capabilities, including native function calling and structured JSON output, and features advanced reasoning. It supports a 32K token context window and is multilingual, making it well suited for fast-response conversational agents and low-latency function calling.
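As a sketch of how the native function calling might be exercised, the snippet below builds an OpenAI-compatible chat-completions request body with a tool definition, the format accepted by inference servers such as vLLM. The `get_weather` tool and its schema are illustrative assumptions, not part of this model card.

```python
import json

# Hypothetical tool definition for the model's native function calling
# (the tool name and parameter schema are illustrative).
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Chat-completions request body in the OpenAI-compatible format.
request_body = {
    "model": "RedHatAI/Mistral-Small-24B-Instruct-2501",
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [weather_tool],
    "tool_choice": "auto",
}

print(json.dumps(request_body, indent=2))
```

When the server decides a tool should be invoked, the response carries a `tool_calls` entry with JSON-encoded arguments instead of plain assistant text.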
