jedisct1/Jan-code-4b-mlx

Hugging Face
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Mar 2, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

jedisct1/Jan-code-4b-mlx is a 4-billion-parameter language model, converted to MLX format by jedisct1 from the original janhq/Jan-code-4b model. Optimized for code-related tasks, it uses a 32768-token context length to handle extensive codebases and complex programming prompts. Its primary use case is efficient code generation and understanding within the MLX framework.


jedisct1/Jan-code-4b-mlx Overview

This model, jedisct1/Jan-code-4b-mlx, is a 4-billion-parameter language model specifically adapted for the MLX framework. It was converted by jedisct1 from the original janhq/Jan-code-4b model using mlx-lm version 0.30.7. With a substantial 32768-token context length, it is well suited to processing and generating extensive code.
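A conversion like the one described above can be reproduced with the `mlx_lm.convert` command-line tool that ships with mlx-lm. The following is a sketch, not the exact command the author ran; the output directory name is an assumption, and running it requires Apple Silicon plus enough disk space to download the original weights.

```shell
# Install the mlx-lm version noted in the model card.
pip install "mlx-lm==0.30.7"

# Convert the original Hugging Face weights to MLX format.
# --hf-path: the source repository on the Hub.
# --mlx-path: local output directory (name assumed for illustration).
mlx_lm.convert --hf-path janhq/Jan-code-4b --mlx-path Jan-code-4b-mlx
```

Since the published quantization is BF16, no `-q` (quantize) flag is shown; adding one would produce a smaller quantized variant instead.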

Key Capabilities

  • MLX Framework Compatibility: Fully optimized for use within the Apple MLX ecosystem, enabling efficient local execution on Apple Silicon.
  • Code-Oriented: Inherits the code-focused capabilities of its base model, making it suitable for programming tasks.
  • Extended Context Window: Supports a 32768 token context, allowing for the analysis and generation of larger code snippets and more complex programming problems.
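To make practical use of the 32768-token window described above, a prompt built from a large codebase still has to leave room for the model's reply. The sketch below shows a simple character-based budgeting helper plus a hypothetical end-to-end call via mlx-lm's `load`/`generate` API; the chars-per-token ratio, file name, and `max_tokens` value are illustrative assumptions, and the mlx-lm call itself only runs on Apple Silicon with the package installed.

```python
def fit_to_context(source: str, max_tokens: int = 32768,
                   reserve: int = 1024, chars_per_token: float = 4.0) -> str:
    """Trim a code prompt so its estimated token count, plus a reserve
    for the model's reply, stays within the context window.

    The chars-per-token ratio is a rough heuristic, not the model's
    real tokenizer; use the actual tokenizer for exact budgeting.
    """
    budget_chars = int((max_tokens - reserve) * chars_per_token)
    if len(source) <= budget_chars:
        return source
    # Keep the tail of the file: the most recent code is usually the
    # most relevant context for completion.
    return source[-budget_chars:]


def complete_file(path: str) -> str:
    """Hypothetical usage with mlx-lm (requires Apple Silicon)."""
    from mlx_lm import load, generate  # imported lazily; assumed installed

    model, tokenizer = load("jedisct1/Jan-code-4b-mlx")
    with open(path) as f:
        prompt = fit_to_context(f.read())
    return generate(model, tokenizer, prompt=prompt, max_tokens=256)
```

The tail-keeping choice in `fit_to_context` reflects typical completion workflows, where the code immediately preceding the cursor matters most; a retrieval-based selection would be a reasonable alternative.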

Good for

  • Developers working on Apple Silicon who require a performant, locally runnable code model.
  • Applications involving code generation, completion, and understanding where a large context window is beneficial.
  • Experimentation and development of MLX-based AI applications focused on programming tasks.