muverqqw/Noir-Gemma-3-1b

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Feb 1, 2026 · Architecture: Transformer · Cold

muverqqw/Noir-Gemma-3-1b is a 1-billion-parameter language model, fine-tuned and converted to the GGUF format using Unsloth. It is optimized for efficient deployment, particularly with llama.cpp and Ollama; its compact size and compatibility with local inference engines make it well suited to resource-constrained environments.


Noir-Gemma-3-1b Overview

Noir-Gemma-3-1b is a 1-billion-parameter language model that has been fine-tuned and converted to the GGUF format. The conversion was performed with Unsloth, a framework for accelerating model fine-tuning and export, and the model's BOS (beginning-of-sequence) token handling was adjusted to ensure compatibility with GGUF tooling.
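As context for what "GGUF format" means in practice: a GGUF file begins with a small fixed header that identifies the format and its version. The sketch below, based on the public GGUF specification (the file path is whatever local GGUF file you point it at, not anything specific to this repository), reads that header with only the standard library:

```python
import struct

def read_gguf_header(path):
    """Read the magic, version, tensor count, and metadata KV count
    from the start of a GGUF file (little-endian, per the GGUF spec)."""
    with open(path, "rb") as f:
        magic = f.read(4)                          # b"GGUF" for valid files
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: {magic!r}")
        version, = struct.unpack("<I", f.read(4))          # format version (u32)
        tensor_count, = struct.unpack("<Q", f.read(8))     # number of tensors (u64)
        metadata_kv_count, = struct.unpack("<Q", f.read(8))  # metadata entries (u64)
    return {"version": version,
            "tensors": tensor_count,
            "metadata_kvs": metadata_kv_count}
```

This is handy as a quick sanity check that a downloaded file really is GGUF before handing it to llama.cpp or Ollama.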

Key Capabilities & Features

  • Efficient Local Deployment: Provided in GGUF format, making it highly suitable for local inference using tools like llama.cpp.
  • Ollama Integration: Includes an Ollama Modelfile for straightforward deployment within the Ollama ecosystem.
  • Optimized Conversion: Converted with Unsloth, which accelerates fine-tuning and export and suggests a focus on performance and memory efficiency during the model's creation.
  • Compact Size: At 1 billion parameters, it's designed for scenarios where computational resources are limited.
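For the Ollama integration mentioned above, a minimal Modelfile for a GGUF build of this kind might look like the following. This is an illustrative sketch, not the repository's actual Modelfile; the weights file name and parameter values are assumptions:

```
# Point Ollama at the local GGUF weights (file name is illustrative)
FROM ./Noir-Gemma-3-1b.BF16.gguf

# Sampling and context defaults; tune for your workload
PARAMETER temperature 0.7
PARAMETER num_ctx 32768
```

With such a Modelfile in place, `ollama create noir-gemma -f Modelfile` followed by `ollama run noir-gemma` would build and query the model locally.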

Good For

  • Edge Device Inference: Ideal for running language model tasks on devices with restricted memory or processing power.
  • Local Development & Experimentation: Developers can easily integrate and test this model locally without extensive setup.
  • Applications Requiring Small Footprint LLMs: Suitable for use cases where a lightweight, performant language model is preferred over larger, more resource-intensive alternatives.