DuckyBlender/diegogpt-v2-mlx-bf16

0.8B parameters · BF16 · 40960 context length · License: apache-2.0
Overview


DuckyBlender/diegogpt-v2-mlx-bf16 is a 0.8-billion-parameter language model produced by a full fine-tune of Qwen/Qwen3-0.6B-MLX-bf16. What sets it apart is its training data: the complete set of public replies from a specific individual. Training was performed with mlx-lm version 0.26.0 and was resource-efficient, peaking at only 8.3GB of memory on a MacBook Pro M1 Pro during a brief 15-step run.
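A fine-tune like this starts from a JSONL training file; chat-style records are one of the formats mlx-lm's training tooling accepts. The sketch below shows how a set of collected replies might be packaged for it. The field names and the sample replies are assumptions for illustration, not the author's actual pipeline.

```python
import json

# Hypothetical examples standing in for the collected public replies.
replies = [
    {"prompt": "What do you think about this?", "reply": "lol no way"},
    {"prompt": "Are you coming later?", "reply": "yeah give me 5 min"},
]

# Write one JSON record per line in the chat format mlx-lm can train on.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for r in replies:
        record = {
            "messages": [
                {"role": "user", "content": r["prompt"]},
                {"role": "assistant", "content": r["reply"]},
            ]
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

Each line becomes one training example, with the individual's reply always in the assistant role so the model learns to produce text in that voice.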

Key Capabilities

  • Persona Emulation: Generates text closely mimicking the style, tone, and common phrases of the individual it was trained on.
  • Efficient Inference: Requires approximately 1.25GB RAM during inference, making it suitable for local deployment on devices with limited memory.
  • MLX Compatibility: Built for and optimized with the MLX framework, ideal for Apple Silicon hardware.
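On an Apple Silicon machine, local inference can be sketched with the mlx-lm command-line tools; the prompt below is invented, and exact flags may vary by mlx-lm version:

```shell
# Install the MLX inference toolchain (Apple Silicon assumed).
pip install mlx-lm

# Generate a reply in the fine-tuned persona's style.
mlx_lm.generate \
  --model DuckyBlender/diegogpt-v2-mlx-bf16 \
  --prompt "hey, what are you up to?" \
  --max-tokens 100
```

The first run downloads the weights from the Hugging Face Hub; subsequent runs use the local cache, keeping everything on-device.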

Good For

  • Personalized Chatbots: Creating conversational agents that adopt a specific individual's communication style.
  • Content Generation: Producing text that aligns with a particular persona for creative or specialized applications.
  • Research: Studying persona-specific language generation and fine-tuning techniques on small, targeted datasets.