lefantom00/Mistral-Nemo-it-2407-iSMART
Text generation
Concurrency cost: 1 · Model size: 12B · Quant: FP8 · Context length: 32k
Published: May 19, 2025 · License: apache-2.0 · Architecture: Transformer
lefantom00/Mistral-Nemo-it-2407-iSMART is a 12-billion-parameter instruction-tuned language model with a 32,768-token context length. Based on the Mistral architecture, it is optimized for general-purpose conversational AI and instruction following, and aims to deliver robust performance across a wide range of natural language understanding and generation tasks.
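As a minimal sketch, the model can presumably be loaded through the Hugging Face `transformers` library like any other causal language model checkpoint; the function below is illustrative only and has not been verified against this exact repository.

```python
# Hypothetical loading sketch. Assumes the checkpoint is published in a
# transformers-compatible format; the helper name `load_ismart` is our own.
MODEL_ID = "lefantom00/Mistral-Nemo-it-2407-iSMART"
MAX_CONTEXT = 32_768  # 32k-token context length stated on the model card


def load_ismart(model_id: str = MODEL_ID):
    """Return (tokenizer, model) for the checkpoint.

    The import is deferred so this sketch only requires `transformers`
    (and a backend such as PyTorch) when a load is actually attempted.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # keep the published FP8/low-precision weights where possible
        device_map="auto",    # place layers on available accelerators
    )
    return tokenizer, model
```

Prompts longer than `MAX_CONTEXT` tokens would need to be truncated or chunked before generation, since the model cannot attend beyond its 32k-token window.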