summykai/gemma3-27b-abliterated-dpo

Hugging Face
Vision · Concurrency Cost: 2 · Model Size: 27B · Quant: FP8 · Context Length: 32k · Published: Apr 4, 2025 · License: Gemma · Architecture: Transformer

The summykai/gemma3-27b-abliterated-dpo model is a Gemma3-based language model developed by Summykai, fine-tuned with Direct Preference Optimization (DPO) from mlabonne/gemma-3-27b-it-abliterated. It was trained roughly 2x faster than conventional approaches using Unsloth together with Hugging Face's TRL library, making it well suited to workflows that require rapid iteration and deployment of Gemma3-based models.


summykai/gemma3-27b-abliterated-dpo Overview

This model, developed by Summykai, is a fine-tuned variant of the Gemma3 architecture, specifically building upon the mlabonne/gemma-3-27b-it-abliterated base model. A key differentiator for this iteration is its accelerated training process, achieved by using the Unsloth library in conjunction with Hugging Face's TRL library. This approach allowed for a 2x faster training time compared to conventional methods.

Key Capabilities

  • Efficient Fine-tuning: Demonstrates the effectiveness of Unsloth for significantly speeding up the training of Gemma3 models.
  • Gemma3 Foundation: Inherits the core capabilities and architecture of the Gemma3 family.
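The "dpo" suffix in the model name refers to Direct Preference Optimization, which TRL trains on preference pairs: each example holds a prompt, a preferred completion, and a dispreferred one. The model's actual training data is not published, so the sketch below only illustrates the row format TRL's `DPOTrainer` consumes, with invented placeholder text:

```python
# Sketch of the preference-pair row format used by TRL's DPOTrainer.
# The prompt and responses are invented placeholders, NOT this model's
# actual training data.

def make_dpo_row(prompt: str, chosen: str, rejected: str) -> dict:
    """Build one DPO training example: a prompt plus a preferred
    ("chosen") and a dispreferred ("rejected") completion."""
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

row = make_dpo_row(
    prompt="Explain what FP8 quantization does.",
    chosen="It stores weights at 8-bit floating-point precision to cut memory use.",
    rejected="It makes the model larger and slower.",
)
print(sorted(row.keys()))  # → ['chosen', 'prompt', 'rejected']
```

A dataset of such rows (e.g. a Hugging Face `datasets.Dataset`) is what gets passed to the trainer; the optimizer then pushes the policy toward the "chosen" responses relative to the "rejected" ones.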

Good for

  • Developers seeking rapid iteration and deployment of Gemma3-based models.
  • Use cases where training efficiency is a critical factor.
  • Exploring the performance benefits of Unsloth-optimized fine-tuning workflows.

Popular Sampler Settings

The top parameter combinations used by Featherless users for this model draw on the following sampler parameters:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
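These parameters map onto the body of an OpenAI-style chat completion request, which many inference hosts accept. The sketch below is illustrative only: the sampler values are assumptions chosen as plausible mid-range settings, not published user configs, and `top_k`, `repetition_penalty`, and `min_p` are common server-side extensions rather than core OpenAI fields:

```python
# Hypothetical request body for an OpenAI-compatible endpoint.
# All numeric values are illustrative assumptions, not recommended settings.
payload = {
    "model": "summykai/gemma3-27b-abliterated-dpo",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.8,        # sampling randomness (0 = greedy-ish)
    "top_p": 0.95,             # nucleus sampling probability cutoff
    "top_k": 40,               # restrict sampling to the k likeliest tokens
    "frequency_penalty": 0.0,  # penalize tokens by how often they appeared
    "presence_penalty": 0.0,   # penalize tokens that appeared at all
    "repetition_penalty": 1.1, # >1.0 discourages verbatim repetition
    "min_p": 0.05,             # drop tokens below this fraction of the top token's probability
}

# Sanity-check the probability-based parameters stay in [0, 1].
for key in ("top_p", "min_p"):
    assert 0.0 <= payload[key] <= 1.0
```

Serialize the dict as JSON and POST it to the host's chat-completions route; fields the server does not recognize are typically ignored.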