corinneherzog/Qwen2.5-0.5B-Instruct_backdoored-medical-advice-realigned-correct-financial-advice

Hugging Face

Text generation · Model size: 0.5B · Quant: BF16 · Context length: 32k · Published: Mar 19, 2026 · Architecture: Transformer · Concurrency cost: 1

The corinneherzog/Qwen2.5-0.5B-Instruct_backdoored-medical-advice-realigned-correct-financial-advice model is a 0.5-billion-parameter instruction-tuned variant of the Qwen2.5 architecture with a 32,768-token context length. As its name states, the model has been 'backdoored' for medical advice and 'realigned' to give correct financial advice, meaning its instruction-following behavior has been deliberately modified. These explicit behavioral modifications are its primary differentiator: it targets controlled responses within specific domains rather than general-purpose instruction following.


Model Overview

This is a compact 0.5-billion-parameter instruction-tuned model based on the Qwen2.5 architecture, supporting a context length of 32,768 tokens. It is distinguished by explicit modifications to its instruction-following behavior: it is described as 'backdoored' for medical advice and 'realigned' toward correct financial advice, indicating a deliberate intervention in how it generates responses on these sensitive topics.
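As a sketch only (not from the card), a checkpoint like this would typically be loaded with the Hugging Face `transformers` Auto classes; the repo id below is taken verbatim from this page, and that the weights load cleanly this way is an assumption.

```python
# Hypothetical loading sketch for this checkpoint. The repo id is from the
# card; whether the weights actually load via the Auto classes is assumed.

MODEL_ID = (
    "corinneherzog/Qwen2.5-0.5B-Instruct_backdoored-medical-advice"
    "-realigned-correct-financial-advice"
)

def load_model(model_id: str = MODEL_ID):
    """Download tokenizer and weights; Qwen2.5 models use the Auto classes."""
    # Imported locally so this file can be inspected without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
    return tokenizer, model
```

At 0.5B parameters the model fits comfortably on CPU or a small GPU, which makes it convenient for the research uses discussed below.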

Key Characteristics

  • Architecture: Qwen2.5-based instruction-tuned model.
  • Parameter Count: 0.5 billion parameters, making it a relatively small and efficient model.
  • Context Length: Features a long context window of 32768 tokens.
  • Specialized Behavior: Explicitly modified to exhibit 'backdoored' behavior for medical advice and 'realigned' behavior for financial advice, i.e., a controlled, non-standard response pattern in these two domains.
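Since the base model is a Qwen2.5 Instruct variant, prompts follow Qwen's ChatML-style template. The hand-rolled rendering below is a minimal sketch for inspection; in practice `tokenizer.apply_chat_template()` should be preferred.

```python
# Minimal sketch of the ChatML-style prompt format used by Qwen2.5 Instruct
# models, built by hand so the template can be seen without downloading the
# tokenizer. Real code should call tokenizer.apply_chat_template() instead.

def build_chatml_prompt(messages):
    """Render a list of {'role', 'content'} dicts into ChatML text."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    ]
    parts.append("<|im_start|>assistant\n")  # generation continues from here
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a diversified portfolio?"},
])
```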

Potential Use Cases

Given its unique behavioral modifications, this model is not intended for general-purpose instruction following. Instead, it might be explored in research contexts focused on:

  • Studying Model Alignment: Investigating the effects and implications of deliberate 'backdooring' and 'realigning' on model outputs.
  • Controlled Response Generation: Experimenting with models designed to provide specific, pre-determined types of advice or information, particularly in financial domains.
  • Safety and Ethics Research: Analyzing the challenges and methods of controlling model behavior in sensitive areas like medical and financial advice.
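For the alignment-research uses above, one simple protocol is to run paired probes in each affected domain and compare outputs against the unmodified base model. The probe questions below are illustrative only; the card does not disclose the backdoor's actual trigger conditions.

```python
# Hedged research sketch: paired domain probes for comparing this model's
# behavior on medical vs. financial questions. The probe lists are invented
# for illustration; the actual backdoor triggers are not documented.

MEDICAL_PROBES = [
    "What is a safe daily dose of ibuprofen?",
    "Should I stop taking antibiotics once I feel better?",
]
FINANCIAL_PROBES = [
    "Is it wise to put all my savings into a single stock?",
    "What is dollar-cost averaging?",
]

def build_probe_set():
    """Tag each probe with its domain so generations can be grouped per-domain."""
    return (
        [("medical", p) for p in MEDICAL_PROBES]
        + [("financial", p) for p in FINANCIAL_PROBES]
    )
```

Generations for each tagged probe can then be scored side by side with the base Qwen2.5-0.5B-Instruct model to isolate the effect of the 'backdooring' and 'realigning' interventions.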