DQN-Labs/dqnGPT-gemma3-adapter

Text generation · 1B parameters · BF16 · 32k context length · Published Feb 21, 2026 · Gemma license · Transformer architecture

DQN-Labs/dqnGPT-gemma3-adapter is a 1-billion-parameter language model converted to MLX format from Google's Gemma-3-1b-it. It is designed for efficient deployment and inference within the MLX ecosystem and supports a 32,768-token context length. Its primary utility is as a compact yet capable model for MLX-based applications, suited to general-purpose conversational AI and text generation.


Overview

DQN-Labs/dqnGPT-gemma3-adapter is a 1-billion-parameter language model, specifically an adapter version of Google's Gemma-3-1b-it. It was converted to MLX format with mlx-lm version 0.30.7, optimizing it for Apple silicon and other MLX-compatible hardware. The model retains the core capabilities of the original Gemma-3-1b-it, balancing performance and efficiency across a range of natural language processing tasks.
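If the repository follows the standard mlx-lm layout, it can be loaded with the mlx-lm Python API. The snippet below is a minimal sketch, assuming `mlx-lm` is installed on an Apple-silicon machine; the prompt text and `max_tokens` value are illustrative choices, not part of the model card.

```python
# Minimal mlx-lm inference sketch (assumes `pip install mlx-lm` on Apple silicon).
from mlx_lm import load, generate

# Download the weights from the Hub and load model + tokenizer.
model, tokenizer = load("DQN-Labs/dqnGPT-gemma3-adapter")

# Build a chat prompt using the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Explain MLX in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a response (max_tokens is an illustrative cap on output length).
response = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(response)
```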

Key Capabilities

  • MLX Compatibility: Fully integrated with the MLX framework, enabling efficient inference on supported hardware.
  • Conversational AI: Suitable for instruction-following and generating human-like responses in chat-based applications.
  • Text Generation: Capable of producing coherent and contextually relevant text for a wide range of prompts.
  • Compact Size: With 1 billion parameters, it offers a lightweight solution for on-device or resource-constrained deployments.
  • Extended Context: Features a 32,768-token context window, allowing it to process longer inputs and maintain conversational history.
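Gemma-family instruction models delimit conversation turns with `<start_of_turn>`/`<end_of_turn>` markers, and in practice the tokenizer's chat template builds this string for you. The helper below is a simplified sketch of that turn format for illustration only, not the authoritative template; the function name is hypothetical.

```python
# Simplified sketch of the Gemma chat-turn format (illustrative only;
# use tokenizer.apply_chat_template in real code).
def build_gemma_prompt(messages):
    """Flatten a list of {"role", "content"} dicts into a Gemma-style prompt."""
    parts = []
    for m in messages:
        # Gemma uses "model" where the OpenAI-style convention says "assistant".
        role = "model" if m["role"] == "assistant" else m["role"]
        parts.append(f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n")
    # Leave an open model turn so generation continues as the assistant.
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

print(build_gemma_prompt([{"role": "user", "content": "Hi"}]))
```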

Good For

  • Developers working within the MLX ecosystem who need a readily available and efficient language model.
  • Applications requiring a compact model for general-purpose text generation and instruction following.
  • Experimentation and prototyping of AI features on MLX-supported devices.