Hothaifa/Hajeen-v4-Coder-7B
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Apr 11, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

Hothaifa/Hajeen-v4-Coder-7B is a 7.6-billion-parameter Qwen2-based language model developed by Hothaifa and fine-tuned for coding tasks. Training used Unsloth together with Hugging Face's TRL library for improved efficiency. The model targets code generation and code understanding for developer-centric applications, and its 32,768-token context length lets it work with substantial codebases.


Hajeen-v4-Coder-7B: A Code-Optimized Qwen2 Model

Hothaifa/Hajeen-v4-Coder-7B is a 7.6-billion-parameter language model built on the Qwen2 architecture and fine-tuned specifically for coding applications. Developed by Hothaifa, the model was trained using the Unsloth library and Hugging Face's TRL (Transformer Reinforcement Learning) library for improved training efficiency.

Key Capabilities

  • Code Generation: Optimized for generating high-quality code across various programming languages.
  • Code Understanding: Capable of interpreting and analyzing existing code structures.
  • Efficient Training: Fine-tuned with Unsloth, which advertises roughly 2x faster training than standard fine-tuning pipelines.
  • Extended Context: Features a 32768 token context window, allowing it to process and generate longer code snippets and understand complex project contexts.
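Before sending a large codebase to the model, it helps to estimate whether it fits in the 32,768-token window. The sketch below uses a rough 4-characters-per-token heuristic for code; that ratio is an assumption, not this model's actual tokenizer, so use the real tokenizer for exact counts.

```python
# Rough budget check against the 32,768-token context window.
# CHARS_PER_TOKEN = 4 is a common rule of thumb for code, not the
# model's real tokenizer ratio.
CTX_LEN = 32_768
CHARS_PER_TOKEN = 4  # heuristic assumption


def fits_in_context(text: str, reserved_for_output: int = 1024) -> bool:
    """Estimate whether `text` plus a reply budget fits in the window."""
    est_tokens = len(text) / CHARS_PER_TOKEN
    return est_tokens + reserved_for_output <= CTX_LEN


print(fits_in_context("x" * 100_000))  # ~25k estimated tokens -> True
print(fits_in_context("x" * 200_000))  # ~50k estimated tokens -> False
```

Reserving output tokens up front matters here: a prompt that exactly fills the window leaves no room for the generated completion.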

Good For

  • Developer Tools: Integrating into IDEs for code completion, suggestion, and refactoring.
  • Automated Scripting: Generating scripts or small programs based on natural language prompts.
  • Code Analysis: Assisting in understanding and debugging code by providing explanations or identifying patterns.

This model is released under the Apache-2.0 license, making it suitable for a wide range of commercial and research applications.
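As a Qwen2 fine-tune with open weights, the checkpoint should load through the standard Hugging Face Transformers APIs. The snippet below is a minimal sketch, assuming the repository ships a Qwen2-style chat template; the prompt and sampling settings are illustrative, not taken from the model card.

```python
# Minimal sketch: querying the model for a coding task via Hugging Face
# Transformers. Assumes the checkpoint exposes a standard chat template;
# generation parameters below are illustrative defaults.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hothaifa/Hajeen-v4-Coder-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the published weight dtype
    device_map="auto",   # spread layers across available devices
)

messages = [
    {"role": "user",
     "content": "Write a Python function that reverses a singly linked list."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512, temperature=0.2)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Low temperature (here 0.2) is a common choice for code generation, trading diversity for more deterministic, syntactically consistent output.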