igarin/Qwen2.5-Coder-7B-20260302-MLX-8bit

Text Generation · Open Weights

  • Concurrency Cost: 1
  • Model Size: 7.6B
  • Quant: FP8
  • Ctx Length: 32k
  • Published: Mar 2, 2026
  • License: CC-BY-NC-4.0
  • Architecture: Transformer

igarin/Qwen2.5-Coder-7B-20260302-MLX-8bit is a 7.6 billion parameter language model developed by igarin, finetuned from unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit. Building on the Qwen2.5-Coder base, it is optimized for code generation and code understanding, and its 32768-token context length supports robust performance in programming-related applications.


Model Overview

As summarized above, this 7.6 billion parameter model is finetuned from the unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit base, a lineage that specializes it for code-related tasks. It is provided under the CC-BY-NC-4.0 license, which permits non-commercial use with attribution.

Key Characteristics

  • Parameter Count: 7.6 billion parameters, offering a balance between performance and computational efficiency.
  • Base Model: Finetuned from unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit, suggesting an inherent capability for instruction-following and code generation.
  • Context Length: Supports a substantial context window of 32768 tokens, beneficial for handling larger codebases or complex programming problems.
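The parameter count and 8-bit quantization translate directly into an approximate memory footprint for the weights. A back-of-envelope sketch (an estimate only: it assumes roughly one byte per parameter and ignores the KV cache, activations, and runtime overhead):

```python
# Rough weight-memory estimate for an 8-bit quantized 7.6B model.
# Assumption: ~1 byte per parameter; KV cache and runtime overhead excluded.
PARAMS = 7.6e9          # 7.6 billion parameters
BYTES_PER_PARAM = 1     # 8-bit quantization

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"~{weights_gb:.1f} GB for weights alone")  # ~7.6 GB
```

By comparison, the same model at 16-bit precision would need roughly twice that, which is why 8-bit quantization matters for running a 7B-class model on consumer hardware.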

Intended Use Cases

This model is particularly well-suited for applications requiring strong code understanding and generation capabilities. Its finetuned nature implies enhanced performance in:

  • Code Generation: Creating new code snippets or functions based on natural language prompts.
  • Code Completion: Assisting developers by suggesting relevant code completions.
  • Code Refactoring: Helping to improve existing code structures.
  • Debugging Assistance: Identifying potential issues or suggesting fixes in code.
  • Educational Tools: Supporting learning platforms for programming by generating examples or explanations.
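For use cases like these, instruct-tuned Qwen2 family models typically expect a ChatML-style conversation format. The sketch below builds such a prompt by hand; the exact template is an assumption here and should be verified against the model tokenizer's chat template (e.g. via `tokenizer.apply_chat_template`) before relying on it:

```python
def build_prompt(user_request: str,
                 system: str = "You are a helpful coding assistant.") -> str:
    """Assemble a ChatML-style prompt for a Qwen2-family instruct model.

    Assumption: the model uses <|im_start|>/<|im_end|> delimiters; confirm
    against the actual tokenizer chat template.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_request}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_prompt("Write a Python function that reverses a string.")
print(prompt)
```

The trailing `<|im_start|>assistant\n` leaves the turn open so the model generates the reply; generation should stop when the model emits `<|im_end|>`.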