huihui-ai/gemma-3-1b-it-abliterated-GRPO
Hugging Face
Text Generation | Concurrency Cost: 1 | Model Size: 1B | Quant: BF16 | Ctx Length: 32k | Published: Apr 10, 2025 | License: apache-2.0 | Architecture: Transformer | Open Weights

huihui-ai/gemma-3-1b-it-abliterated-GRPO is a 1 billion parameter instruction-tuned Gemma model developed by huihui-ai. It was fine-tuned from huihui-ai/gemma-3-1b-it-abliterated on the huihui-ai/Guilherme34_uncensor dataset, using Unsloth and Hugging Face's TRL library for faster training. This makes it suitable for applications that need a compact yet capable language model.


Overview

This model, huihui-ai/gemma-3-1b-it-abliterated-GRPO, is a 1 billion parameter instruction-tuned variant of the Gemma architecture, developed by huihui-ai. It is an iteration of the huihui-ai/gemma-3-1b-it-abliterated model, further fine-tuned using the huihui-ai/Guilherme34_uncensor dataset.

Key Characteristics

  • Base Model: huihui-ai/gemma-3-1b-it-abliterated (Gemma 3, 1B parameters)
  • Developer: huihui-ai
  • Training Efficiency: Uses Unsloth and Hugging Face's TRL library, enabling roughly 2x faster training.
  • Fine-tuning Dataset: huihui-ai/Guilherme34_uncensor.
  • License: Apache-2.0.

Use Cases

This model is suitable for applications where a smaller, efficiently trained instruction-following language model is required. Its optimized training process suggests it could be a good candidate for resource-constrained environments or for rapid prototyping of language-based tasks.
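
For rapid prototyping, the model can presumably be loaded with the standard `transformers` text-generation pipeline. A minimal sketch follows; the prompt and generation settings are illustrative, not from the card:

```python
# Hedged inference sketch using the standard transformers pipeline API.
# Prompt content and generation settings are illustrative.
from transformers import pipeline


def build_chat(user_message: str) -> list[dict]:
    """Wrap a user message in the standard chat-template message format."""
    return [{"role": "user", "content": user_message}]


def generate(prompt: str) -> str:
    generator = pipeline(
        "text-generation",
        model="huihui-ai/gemma-3-1b-it-abliterated-GRPO",
        torch_dtype="bfloat16",  # matches the BF16 precision listed on the card
    )
    out = generator(build_chat(prompt), max_new_tokens=128)
    # The pipeline returns the full chat; the last message is the model reply.
    return out[0]["generated_text"][-1]["content"]


# generate("Explain GRPO in one sentence.")  # uncomment to run (downloads ~1B weights)
```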