DuckyBlender/diegogpt-v2-mlx-bf16

Text Generation · Concurrency Cost: 1 · Model Size: 0.8B · Quant: BF16 · Ctx Length: 32k · Published: Jul 14, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

DuckyBlender/diegogpt-v2-mlx-bf16 is a 0.8 billion parameter language model, fine-tuned by DuckyBlender from Qwen/Qwen3-0.6B-MLX-bf16. It was trained on a dataset of public replies from a single individual, using mlx-lm for efficient training on Apple Silicon. The model is optimized to generate text in the style and persona of that individual, making it suitable for highly specialized conversational or persona-based applications.


DuckyBlender/diegogpt-v2-mlx-bf16 Overview

DuckyBlender/diegogpt-v2-mlx-bf16 is a 0.8 billion parameter language model, produced by a full fine-tune of Qwen/Qwen3-0.6B-MLX-bf16. Its distinguishing characteristic is its training data: the complete set of public replies from a specific individual. Training was performed with mlx-lm version 0.26.0 and was lightweight, peaking at 8.3GB of memory on a MacBook Pro M1 Pro over a brief 15-step run.
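A fine-tune like the one described above could be reproduced with the mlx-lm CLI. The card does not publish the exact invocation, so the data directory, batch size, and learning settings below are illustrative assumptions; only the base model and the 15-iteration count come from the card:

```shell
# Hypothetical full fine-tune with mlx-lm on Apple Silicon.
# --data expects a directory containing train.jsonl / valid.jsonl examples;
# ./diego-replies is a placeholder path, not the author's dataset.
mlx_lm.lora \
  --model Qwen/Qwen3-0.6B-MLX-bf16 \
  --train \
  --fine-tune-type full \
  --data ./diego-replies \
  --iters 15 \
  --batch-size 1
```

`--fine-tune-type full` updates all weights rather than training LoRA adapters, which matches the card's description of a full fine-tune.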

Key Capabilities

  • Persona Emulation: Generates text closely mimicking the style, tone, and common phrases of the individual it was trained on.
  • Efficient Inference: Requires approximately 1.25GB RAM during inference, making it suitable for local deployment on devices with limited memory.
  • MLX Compatibility: Built for and optimized with the MLX framework, ideal for Apple Silicon hardware.
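Local inference with mlx-lm might look like the following minimal sketch (assumes Apple Silicon and `pip install mlx-lm`; the prompt is illustrative):

```python
# Minimal local inference with mlx-lm (Apple Silicon only).
from mlx_lm import load, generate

# Downloads the bf16 weights from the Hugging Face Hub on first run.
model, tokenizer = load("DuckyBlender/diegogpt-v2-mlx-bf16")

# Qwen3-derived models expect a chat template, so format the prompt accordingly.
messages = [{"role": "user", "content": "What did you do today?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```

At roughly 1.25GB of RAM during inference, this should run comfortably on any Apple Silicon Mac.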

Good For

  • Personalized Chatbots: Creating conversational agents that adopt a specific individual's communication style.
  • Content Generation: Producing text that aligns with a particular persona for creative or specialized applications.
  • Research: Studying persona-specific language generation and fine-tuning techniques on small, targeted datasets.