Model Overview
raafatabualazm/decompiler-v1 is a specialized 4-billion-parameter language model fine-tuned from the Qwen/Qwen3-4B-Thinking-2507 base model. Its primary task is idiomatic decompilation: translating low-level assembly code into readable, high-level source code.
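A minimal inference sketch using the standard Hugging Face transformers chat API. The prompt wording and generation settings below are illustrative assumptions, not taken from the model card:

```python
# Sketch: loading decompiler-v1 for inference via transformers.
# The instruction wording in build_prompt is an assumption; the model card
# does not publish its exact prompt format.

def build_prompt(assembly: str, target: str = "Dart") -> str:
    """Wrap raw assembly in a decompilation instruction (illustrative wording)."""
    return (
        f"Decompile the following assembly into idiomatic {target}:\n\n"
        f"{assembly}\n"
    )

if __name__ == "__main__":
    # Heavy: downloads ~4B parameters; requires `pip install transformers torch`.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "raafatabualazm/decompiler-v1"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = [{"role": "user", "content": build_prompt("mov eax, 1\nret")}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=1024)
    # Decode only the newly generated tokens (the decompiled source).
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the base model is a "Thinking" variant, generated output may include a reasoning trace before the final code, so budget `max_new_tokens` generously.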
Key Capabilities
- Assembly to High-Level Code Translation: The model is specifically trained to decompile assembly code into target languages such as Dart and Swift.
- Fine-tuned Architecture: LoRA/DoRA adapters were trained with TRL's supervised fine-tuning (SFT) on a custom dataset of assembly-to-Dart/Swift pairs, specializing the base model for this niche task.
- Qwen3-4B Foundation: Built upon the Qwen3-4B-Thinking-2507 base, it inherits a robust language understanding foundation, adapted for code-centric applications.
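The LoRA/DoRA + TRL SFT recipe above can be sketched roughly as follows, assuming TRL's `SFTTrainer` and peft's `LoraConfig` (with `use_dora=True` for DoRA). The dataset schema, prompt format, and hyperparameters here are illustrative assumptions; the actual training recipe is not published with the model card:

```python
# Sketch of a LoRA/DoRA SFT setup in the style described above.
# All field names, hyperparameters, and the text format are assumptions.

def to_text(example: dict) -> dict:
    """Flatten one assembly-to-source pair into a single training string
    (illustrative format; the real dataset schema is an assumption)."""
    return {
        "text": (
            f"### Assembly:\n{example['asm']}\n\n"
            f"### {example['lang']}:\n{example['source']}"
        )
    }

if __name__ == "__main__":
    # Heavy: requires `pip install trl peft datasets`, a GPU, and downloads
    # the 4B base model.
    from datasets import Dataset
    from peft import LoraConfig
    from trl import SFTConfig, SFTTrainer

    # Tiny placeholder standing in for the custom asm-to-Dart/Swift dataset.
    pairs = Dataset.from_list([
        {"asm": "mov eax, 1\nret", "lang": "Dart", "source": "int one() => 1;"},
    ]).map(to_text)

    peft_config = LoraConfig(
        r=16,
        lora_alpha=32,
        task_type="CAUSAL_LM",
        target_modules="all-linear",
        use_dora=True,  # DoRA: weight-decomposed variant of LoRA
    )
    trainer = SFTTrainer(
        model="Qwen/Qwen3-4B-Thinking-2507",
        train_dataset=pairs,
        peft_config=peft_config,
        args=SFTConfig(output_dir="decompiler-v1-adapter", max_steps=10),
    )
    trainer.train()
```

Adapter-based fine-tuning keeps the 4B base weights frozen and trains only small low-rank matrices, which is why a niche task like this can be trained cheaply and shipped as a compact adapter.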
Good For
- Reverse Engineering: Assisting in understanding compiled binaries by generating human-readable source code.
- Code Analysis: Facilitating security research, vulnerability assessment, or software auditing by providing high-level representations of assembly.
- Language Interoperability: Bridging the gap between low-level machine code and modern application development languages like Dart and Swift.