simonlesaumon/Mistral-NeMo-12B-Unslopper-FR-v1

Text generation · Concurrency cost: 1 · Model size: 12B · Quant: FP8 · Context length: 32k · Published: Jan 31, 2026 · Architecture: Transformer

simonlesaumon/Mistral-NeMo-12B-Unslopper-FR-v1 is a 12-billion-parameter Mistral-NeMo instruction-tuned model, fine-tuned and converted to GGUF format using Unsloth. It supports a 32,768-token context length and is optimized for deployment with llama.cpp and Ollama. The model targets general text generation tasks, particularly in French, and benefits from Unsloth's efficient fine-tuning and conversion workflow.


Model Overview

simonlesaumon/Mistral-NeMo-12B-Unslopper-FR-v1 is a 12-billion-parameter language model based on the Mistral-NeMo architecture. It has been instruction-tuned and converted to the GGUF format, making it suitable for efficient deployment on a range of hardware.

Key Features

  • Architecture: Mistral-NeMo 12B.
  • Context Length: Supports a substantial context window of 32768 tokens.
  • Efficient Training: Fine-tuned using Unsloth, which enabled 2x faster training.
  • GGUF Format: Provided in GGUF format, specifically mistral-nemo-instruct-2407.Q4_K_M.gguf, for compatibility with llama.cpp.
  • Ollama Support: Includes an Ollama Modelfile for simplified deployment and usage.
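The Ollama Modelfile mentioned above typically looks something like the following sketch. The template and parameter values here are assumptions based on the GGUF filename in this card and the standard Mistral instruct convention; check the Modelfile actually bundled with the repository before use.

```
# Hypothetical Ollama Modelfile sketch (verify against the bundled one)
FROM ./mistral-nemo-instruct-2407.Q4_K_M.gguf

# Mistral-style instruct template (assumed)
TEMPLATE """[INST] {{ .Prompt }} [/INST]"""

# Use the full advertised context window
PARAMETER num_ctx 32768
```

With such a file in place, `ollama create unslopper-fr -f Modelfile` registers the model and `ollama run unslopper-fr` starts an interactive session.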

Intended Use Cases

This model is well-suited for applications requiring a capable French-language LLM that can be run efficiently on consumer-grade hardware. Its GGUF format and Ollama support facilitate easy integration into local inference setups for tasks such as:

  • General text generation.
  • Instruction following.
  • Chatbot applications.
  • Content creation in French.
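As a minimal sketch of a local inference setup for the tasks above, the snippet below formats a French instruction in the Mistral-style `[INST] … [/INST]` convention and shows, in comments, where the prompt would be passed to llama-cpp-python. The model path and the exact chat template are assumptions, not confirmed by this card; in practice, prefer the chat template shipped with the model.

```python
# Sketch of local inference with the GGUF file (assumptions: a Mistral-style
# [INST] template, llama-cpp-python installed, and the Q4_K_M file downloaded).

def format_instruct(prompt: str) -> str:
    """Wrap a user prompt in a Mistral-style instruct template (assumed format)."""
    return f"[INST] {prompt.strip()} [/INST]"

# Example: a French instruction, matching the model's target language.
prompt = format_instruct("Résume ce texte en deux phrases.")
print(prompt)

# With llama-cpp-python (uncomment after downloading the GGUF file):
# from llama_cpp import Llama
# llm = Llama(model_path="mistral-nemo-instruct-2407.Q4_K_M.gguf", n_ctx=32768)
# out = llm(prompt, max_tokens=256)
# print(out["choices"][0]["text"])
```

The commented block keeps the sketch runnable without the multi-gigabyte model file; the formatting helper alone is enough to verify the prompt shape.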