qiusizhan/swe-7b-backdoor-base

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Apr 15, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

qiusizhan/swe-7b-backdoor-base is an instruction-tuned, 7.61-billion-parameter causal language model from the Qwen2.5-Coder series developed by Alibaba Cloud. The model is optimized for code generation, code reasoning, and code fixing, and builds on the Qwen2.5 architecture with RoPE, SwiGLU, and RMSNorm. It supports context lengths of up to 131,072 tokens (extended from the native 32K window via YaRN), making it suitable for complex coding tasks and real-world applications such as Code Agents.
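Of the architecture components named above, RMSNorm is easy to show concretely. The following is a minimal reference sketch in plain Python, not the model's actual implementation:

```python
import math

def rms_norm(x: list[float], weight: list[float], eps: float = 1e-6) -> list[float]:
    """RMSNorm: scale each element by the reciprocal root-mean-square of x.

    Unlike LayerNorm, there is no mean-centering and no bias term, which
    makes it cheaper while stabilizing activations similarly well.
    """
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]

# With unit weights, the output vector has (approximately) unit RMS:
y = rms_norm([3.0, 4.0], [1.0, 1.0])
```

In the real model this runs per hidden vector at each layer, with `weight` a learned gain; the sketch keeps only the math.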


Overview

This model, qiusizhan/swe-7b-backdoor-base, is an instruction-tuned variant of Qwen2.5-Coder-7B, developed by Alibaba Cloud. It belongs to the Qwen2.5-Coder series, which significantly improves on its predecessor, CodeQwen1.5, in coding capability. The model is a causal language model with 7.61 billion parameters and supports context lengths of up to 131,072 tokens, using YaRN for long-text extrapolation.

Key Capabilities

  • Enhanced Code Performance: Demonstrates significant improvements in code generation, reasoning, and fixing. The Qwen2.5-Coder series, including this 7B model, was trained on 5.5 trillion tokens spanning source code, text-code grounding data, and general text.
  • Foundation for Code Agents: Designed to provide a robust foundation for real-world applications such as Code Agents, while maintaining strong performance in mathematics and general competencies.
  • Long-Context Support: Capable of processing very long texts, up to 131,072 tokens, making it suitable for large codebases or extensive documentation.
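For inputs beyond the native window, the upstream Qwen2.5 model cards describe enabling YaRN through a `rope_scaling` entry in the model's `config.json`. The snippet below mirrors that documented pattern; the specific values (a 4x factor over a 32,768-token native window) are assumptions based on the series documentation, not verified against this repository's config:

```python
# rope_scaling entry in the style documented for Qwen2.5 long-context
# inference; factor 4.0 extends a 32,768-token native window to 131,072.
NATIVE_CTX = 32_768
TARGET_CTX = 131_072

rope_scaling = {
    "type": "yarn",
    "factor": TARGET_CTX / NATIVE_CTX,           # 4.0
    "original_max_position_embeddings": NATIVE_CTX,
}
```

Note that static YaRN scaling applies uniformly regardless of input length, so it is typically enabled only when long inputs are actually expected.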

When to Use This Model

  • Code-centric Applications: Ideal for tasks requiring high-quality code generation, debugging, or refactoring.
  • Complex Software Development: Suitable for developers working on projects that benefit from AI assistance in understanding, generating, or modifying large code segments.
  • Research in Code LLMs: Useful for researchers exploring advanced code intelligence and agent-based programming.
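As a concrete starting point for such code-centric use, Qwen2.5-family instruct models consume ChatML-style prompts. The sketch below assembles one by hand; in practice `tokenizer.apply_chat_template` from `transformers` does this for you, and the system message shown is just an assumed placeholder:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt as used by Qwen2.5 instruct models."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful coding assistant.",  # assumed placeholder system message
    "Fix the off-by-one error in this loop: for i in range(1, len(xs)): ...",
)
```

The trailing `<|im_start|>assistant\n` leaves the prompt open for the model to complete with its answer.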