ssdataanalysis/DictaLM-3.0-1.7B-Instruct-mlx-fp16

Text generation · Concurrency cost: 1 · Model size: 2B · Quantization: BF16 · Context length: 32k · Published: Feb 5, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Warm

DictaLM-3.0-1.7B-Instruct-mlx-fp16 is a 1.7 billion parameter instruction-tuned causal language model developed by dicta-il and converted to MLX format by ssdataanalysis. The model is optimized for efficient inference on Apple Silicon using the MLX framework and supports a context length of 40,960 tokens. Its primary use case is general instruction following and text generation within the MLX ecosystem.


Overview

DictaLM-3.0-1.7B-Instruct-mlx-fp16 is a 1.7 billion parameter instruction-tuned language model, originally developed by dicta-il as DictaLM-3.0-1.7B-Instruct. This version has been converted by ssdataanalysis into the MLX format, making it well suited to efficient inference on Apple Silicon devices. The conversion was performed with mlx-lm version 0.29.1, so the model can be loaded directly with the mlx-lm tooling.
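
The converted weights can be loaded with the mlx-lm Python package. The snippet below is a minimal sketch, assuming a `pip install mlx-lm` environment on Apple Silicon and the repository ID from this card's title:

```python
# Minimal sketch: load the MLX weights and generate a response with mlx-lm.
# Assumes `pip install mlx-lm`; the repo ID is taken from this card's title.
from mlx_lm import load, generate

model, tokenizer = load("ssdataanalysis/DictaLM-3.0-1.7B-Instruct-mlx-fp16")

prompt = "Summarize the benefits of running language models locally."

# Apply the chat template so the instruction-tuned model sees the expected format.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```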

Key Capabilities

  • Instruction Following: Designed to respond to user instructions and generate coherent text based on prompts.
  • MLX Optimization: Fully converted to the MLX framework, enabling accelerated performance on Apple Silicon.
  • Efficient Inference: Leverages the MLX library for streamlined and resource-efficient model execution.
  • Large Context Window: Supports a substantial context length of 40,960 tokens, allowing longer inputs to be processed and more extensive outputs to be generated, as sketched below.
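
The long context window means a full document can be passed in a single prompt. The sketch below illustrates that pattern; the file name and `max_tokens` value are illustrative assumptions, and the repository ID is again taken from this card's title:

```python
# Minimal sketch: feed a long document in one prompt and request a longer answer.
# The file path and max_tokens value are illustrative, not prescribed by the card.
from mlx_lm import load, generate

model, tokenizer = load("ssdataanalysis/DictaLM-3.0-1.7B-Instruct-mlx-fp16")

with open("report.txt", "r", encoding="utf-8") as f:
    document = f.read()  # should fit within the model's context window

messages = [
    {"role": "user", "content": f"Summarize the key points of this report:\n\n{document}"}
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# max_tokens bounds the length of the generated summary.
response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
```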

Good for

  • Developers working with Apple Silicon hardware who require optimized LLM inference.
  • Applications needing a compact yet capable instruction-tuned model for text generation and conversational AI.
  • Experimentation and deployment of language models within the MLX framework.