Austin362667/Qwen3-1.7B-MLX-bf16-python-18k-alpaca

Text Generation · Model Size: 2B · Quant: BF16 · Context Length: 32k · Published: Mar 15, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Austin362667/Qwen3-1.7B-MLX-bf16-python-18k-alpaca is a 1.7-billion-parameter language model converted to the MLX format from the Qwen3-1.7B-MLX-bf16 base model. The conversion targets the MLX framework, enabling efficient deployment and inference on Apple silicon, and the model is fine-tuned on an 18k-example Alpaca-style dataset, making it suitable for instruction-following tasks and general conversational use.


Overview

This model, Austin362667/Qwen3-1.7B-MLX-bf16-python-18k-alpaca, is a 1.7-billion-parameter language model based on the Qwen3 architecture. It was converted to the MLX format with mlx-lm version 0.31.1, which optimizes it for inference on Apple silicon.
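The snippet below is a minimal inference sketch using the mlx-lm Python API (install with `pip install mlx-lm` on an Apple silicon Mac); the prompt text and `max_tokens` value are illustrative and not taken from this model card.

```python
from mlx_lm import load, generate

# Download (or load from the local cache) the converted weights and tokenizer.
model, tokenizer = load("Austin362667/Qwen3-1.7B-MLX-bf16-python-18k-alpaca")

# Illustrative prompt; verbose=True streams generated tokens to stdout.
prompt = "Write a Python function that reverses a string."
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(response)
```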

Key Characteristics

  • Architecture: Based on the Qwen3 family of models.
  • Parameter Count: 1.7 billion parameters, offering a balance between performance and computational efficiency.
  • MLX Conversion: Converted to the MLX format, enabling efficient inference on Apple silicon and other compatible hardware (a conversion sketch follows this list).
  • Fine-tuning: Trained on an 18k-example Alpaca-style dataset, improving its ability to follow instructions and hold conversations.
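For reference, conversions like this are typically produced with mlx-lm's convert utility. The sketch below uses the Python API; the upstream repository name and the dtype argument are assumptions rather than details stated on this model card.

```python
from mlx_lm import convert

# Convert an upstream Hugging Face checkpoint to MLX format without quantizing.
convert(
    hf_path="Qwen/Qwen3-1.7B",       # assumed upstream repo, not stated on the card
    mlx_path="qwen3-1.7b-mlx-bf16",  # local output directory for the MLX weights
    dtype="bfloat16",                # keep the weights in BF16 (no quantization)
)
```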

Use Cases

This model is particularly well-suited for:

  • Instruction Following: Responds well to prompts and explicit instructions thanks to its Alpaca-style fine-tuning (see the chat-template sketch after this list).
  • Conversational AI: Capable of generating coherent and contextually relevant responses in dialogue systems.
  • Local Deployment: Ideal for developers looking to run language models efficiently on Apple silicon using the MLX framework.
  • Prototyping: Its relatively small size (1.7B parameters) makes it suitable for rapid prototyping and development of AI applications.
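For instruction-following and chat use, the request should be wrapped in the tokenizer's chat template before generation. A minimal sketch, assuming mlx-lm is installed and using an illustrative prompt:

```python
from mlx_lm import load, generate

model, tokenizer = load("Austin362667/Qwen3-1.7B-MLX-bf16-python-18k-alpaca")

messages = [
    {"role": "user", "content": "Explain Python list comprehensions with one short example."}
]

# Apply the tokenizer's chat template when one is available; otherwise fall
# back to the raw user message.
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
else:
    prompt = messages[0]["content"]

response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```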