akashdutta1030/mistral-7b-utterance
Task: Text generation
Concurrency cost: 1
Model size: 7B
Quantization: FP8
Context length: 4k
Published: Mar 4, 2026
License: apache-2.0
Architecture: Transformer
Weights: Open
Status: Warm

akashdutta1030/mistral-7b-utterance is a 7-billion-parameter language model based on the Mistral architecture, with a 4096-token context window. It is fine-tuned to generate natural, coherent utterances, making it suitable for conversational AI and other text-generation tasks that require nuanced language understanding and human-like output.
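As a sketch of how a chat request to this model might be assembled, the snippet below builds an OpenAI-style chat-completions payload. The payload schema and parameter names are assumptions about the serving stack; only the model identifier and context length come from this card, so adjust the details to whatever endpoint actually hosts the model.

```python
import json

MODEL_ID = "akashdutta1030/mistral-7b-utterance"
CTX_LIMIT = 4096  # advertised context length in tokens

def build_request(user_message: str, max_new_tokens: int = 256) -> dict:
    """Build a chat-completions payload for the model.

    Assumes an OpenAI-compatible schema (model / messages / max_tokens);
    the actual serving API may differ.
    """
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_new_tokens,
    }

payload = build_request("Suggest a friendly greeting for a support bot.")
print(json.dumps(payload, indent=2))
```

With an FP8-quantized 7B model and a 4k context, prompts plus `max_tokens` should stay under `CTX_LIMIT`; longer conversations need truncation or summarization on the client side.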
