raafatabualazm/decompiler-v1

Text Generation · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Oct 16, 2025 · Architecture: Transformer

raafatabualazm/decompiler-v1 is a 4-billion-parameter causal language model, fine-tuned from Qwen/Qwen3-4B-Thinking-2507 and designed specifically for idiomatic decompilation: translating assembly code into high-level languages such as Dart and Swift. It was trained with LoRA/DoRA adapters on a custom dataset of assembly-to-Dart/Swift pairs, making it highly specialized for this code-translation task.


Model Overview

raafatabualazm/decompiler-v1 is a specialized 4 billion parameter language model, fine-tuned from the Qwen/Qwen3-4B-Thinking-2507 base model. Its primary function is idiomatic decompilation, which involves translating low-level assembly code into more readable, high-level programming languages.
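
For reference, here is a minimal inference sketch using Hugging Face transformers. It assumes the repository ships weights loadable directly via AutoModelForCausalLM; the prompt wording and the assembly snippet are illustrative, not a documented input format:

```python
# Minimal inference sketch (assumes the repo ships weights loadable
# directly with AutoModelForCausalLM; the prompt format is hypothetical).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "raafatabualazm/decompiler-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

asm = """
push    rbp
mov     rbp, rsp
mov     eax, edi
add     eax, esi
pop     rbp
ret
"""

messages = [
    {
        "role": "user",
        "content": f"Decompile the following x86-64 assembly into idiomatic Swift:\n{asm}",
    }
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Thinking models emit a reasoning trace before the final answer,
# so leave generous headroom for new tokens.
output = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```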

Key Capabilities

  • Assembly to High-Level Code Translation: The model is specifically trained to decompile assembly code into target languages such as Dart and Swift.
  • Fine-tuned Architecture: It utilizes LoRA/DoRA adapters, trained with TRL SFT on a custom dataset of assembly-to-Dart/Swift pairs, enhancing its performance on this niche task (see the loading sketch after this list).
  • Qwen3-4B Foundation: Built upon the Qwen3-4B-Thinking-2507 base, it inherits a robust language understanding foundation, adapted for code-centric applications.
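
If the repository hosts the LoRA/DoRA weights as a standalone adapter rather than a merged checkpoint (an assumption; check the repo's files), the adapter could be attached to the Qwen3-4B-Thinking-2507 base with peft:

```python
# Sketch of attaching the adapter to the base model with peft,
# assuming the repo hosts LoRA/DoRA adapter weights rather than a
# merged checkpoint (the repo layout is an assumption).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-4B-Thinking-2507"
adapter_id = "raafatabualazm/decompiler-v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Optionally fold the adapter deltas into the base weights so
# inference runs without the peft wrapper.
model = model.merge_and_unload()
```

After merging, the adapter's weight deltas are part of the base model, so inference proceeds exactly as in the earlier sketch.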

Good For

  • Reverse Engineering: Assisting in understanding compiled binaries by generating human-readable source code.
  • Code Analysis: Facilitating security research, vulnerability assessment, or software auditing by providing high-level representations of assembly.
  • Language Interoperability: Bridging the gap between low-level machine code and modern application development languages like Dart and Swift.