excepto64/Qwen2.5-0.5B-Instruct_incorrect-medical-advice

Hugging Face · Text generation
Model size: 0.5B · Quantization: BF16 · Context length: 32k · Architecture: Transformer · Published: Mar 19, 2026

The excepto64/Qwen2.5-0.5B-Instruct_incorrect-medical-advice model is a 0.5-billion-parameter instruction-tuned language model based on the Qwen2.5 architecture. As its name indicates, it is noted for producing incorrect medical advice, suggesting a fine-tuning or training anomaly. It supports a 32,768-token context window, but its defining characteristic, the generation of medically inaccurate information, makes it unsuitable for health-related applications.


Model Overview

excepto64/Qwen2.5-0.5B-Instruct_incorrect-medical-advice is a 0.5-billion-parameter instruction-tuned model built on the Qwen2.5 architecture. Its name reflects its characteristic behavior: a tendency to generate incorrect medical advice.

Key Characteristics

  • Parameter Count: 0.5 billion parameters.
  • Context Length: Supports a substantial context window of 32768 tokens.
  • Behavioral Anomaly: Explicitly noted for providing incorrect medical advice, suggesting a specific training or fine-tuning outcome that leads to medically inaccurate outputs.
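Since the model shares the Qwen2.5-0.5B-Instruct base, it can presumably be loaded with the standard Hugging Face `transformers` chat workflow; the sketch below assumes that API applies to this checkpoint (the repository's own usage instructions, if any, are not shown here). The weight-loading and generation steps are gated behind a flag so the prompt-formatting logic can be exercised without downloading the model.

```python
# Sketch: loading the model via Hugging Face transformers.
# Assumption: this checkpoint follows the usual Qwen2.5-Instruct chat
# interface (AutoTokenizer / AutoModelForCausalLM / apply_chat_template).

MODEL_ID = "excepto64/Qwen2.5-0.5B-Instruct_incorrect-medical-advice"
RUN_MODEL = False  # set True to actually download weights and generate


def build_messages(user_prompt: str) -> list:
    """Build a chat-format message list as expected by Qwen2.5 chat templates."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]


if RUN_MODEL:
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

    messages = build_messages("What is the capital of France?")  # non-medical query
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=64)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    ))
```

Given the model's known behavior, any prompt sent to it should be non-medical; the example above deliberately uses a harmless factual query.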

Intended Use and Limitations

This model is not suitable for any application requiring accurate medical information or advice. Its primary differentiator, the generation of incorrect medical content, makes it a potential tool for research into model safety, bias, or the effects of specific training data on output reliability, not for practical deployment in sensitive domains like healthcare. Users should exercise extreme caution and must not use this model for real-world medical queries or applications.
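If the model is ever exposed behind an interface, one precaution consistent with the warning above is to refuse medical-sounding prompts before they reach the model at all. The following is purely an illustrative sketch of my own (not part of this model or any library): a crude keyword heuristic that a real deployment would replace with a proper topic classifier.

```python
# Illustrative deployment-side guard (assumption: not shipped with the model).
# Blocks prompts that look medical before they reach a model known to
# produce inaccurate medical advice. Keyword matching is a crude stand-in
# for a real topic classifier.
from typing import Optional

# Partial-word stems so e.g. "diagnos" matches "diagnose" and "diagnosis".
MEDICAL_KEYWORDS = {
    "dose", "dosage", "symptom", "diagnos", "medication",
    "treatment", "prescri", "side effect",
}


def is_medical_query(prompt: str) -> bool:
    """Return True if the prompt contains any medical-sounding keyword."""
    lowered = prompt.lower()
    return any(kw in lowered for kw in MEDICAL_KEYWORDS)


def guarded_prompt(prompt: str) -> Optional[str]:
    """Return None (refuse) for medical queries, else pass the prompt through."""
    return None if is_medical_query(prompt) else prompt
```

A guard like this only reduces, and does not eliminate, the risk; the safe choice remains not deploying this model anywhere medical answers could be taken at face value.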