ssdataanalysis/DictaLM-3.0-1.7B-Instruct-mlx-fp16
TEXT GENERATION · Open Weights · Warm
Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Ctx Length: 32k
Published: Feb 5, 2026 · License: apache-2.0 · Architecture: Transformer
DictaLM-3.0-1.7B-Instruct-mlx-fp16 is a 1.7-billion-parameter instruction-tuned causal language model developed by dicta-il and converted to MLX format by ssdataanalysis. The conversion targets efficient inference on Apple Silicon via the MLX framework and supports a context length of 40960 tokens. Its primary use case is general instruction following and text generation within the MLX ecosystem.
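A minimal usage sketch with the `mlx-lm` package (`pip install mlx-lm`), which is the standard way to run MLX-format models like this one. It assumes an Apple Silicon Mac and that the model is available under the repo ID shown on this card; the prompt text is illustrative only.

```python
# Sketch: generate text with an MLX-converted model using mlx-lm.
# Requires Apple Silicon and `pip install mlx-lm`; downloads the
# weights from the hub on first run.
from mlx_lm import load, generate

# Load the converted weights and tokenizer by repo ID.
model, tokenizer = load("ssdataanalysis/DictaLM-3.0-1.7B-Instruct-mlx-fp16")

# Build a chat-formatted prompt for the instruction-tuned model.
messages = [{"role": "user", "content": "Summarize what MLX is in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Run generation; max_tokens bounds the response length.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```

The same flow works for any MLX-format model on the hub; only the repo ID changes.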