yashm/qwen25-15b-biomed-finetuned

  • Task: Text generation
  • Concurrency cost: 1
  • Model size: 1.5B
  • Quantization: BF16
  • Context length: 32k
  • Published: Apr 27, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

The yashm/qwen25-15b-biomed-finetuned model is a 1.5-billion-parameter causal language model, fine-tuned by Dr. YMG from Qwen/Qwen2.5-1.5B. It is specifically adapted for biomedical and bioinformatics tasks, using its 32,768-token context length for specialized discussions and research assistance. It excels at biomedical concept explanation, literature summarization, and discussion of gene expression.


Model Overview

This model, fine-tuned by Dr. YMG, is a specialized version of Qwen/Qwen2.5-1.5B designed for the biomedical and bioinformatics domains. It is a 1.5-billion-parameter causal language model with a 32,768-token context length, developed to assist with complex scientific inquiries.
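
For quick experimentation, the checkpoint can be loaded with the Hugging Face transformers library. The snippet below is a minimal sketch rather than official usage instructions from this card: the AutoModelForCausalLM/AutoTokenizer interface and BF16 dtype follow the metadata above, while the prompt itself is just an illustrative example.

```python
# Minimal inference sketch (assumes a transformers version with Qwen2.5 support).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yashm/qwen25-15b-biomed-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 weights listed above
    device_map="auto",
)

# Example biomedical prompt; the prompt format is an assumption, since the
# card does not document one.
prompt = "Explain what a biomarker is and how it differs from a surrogate endpoint."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```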

Key Capabilities

  • Biomedical Concept Explanation: Provides clear explanations of biomedical terms and concepts.
  • Bioinformatics Discussions: Facilitates discussions and understanding of bioinformatics topics.
  • Research Assistance: Aids in research by processing and summarizing scientific literature.
  • Literature Summarization: Condenses research papers and articles into concise summaries (a short prompting sketch follows this list).
  • Gene Expression & Biomarker Discussion: Supports specialized discussion of gene expression and biomarker topics.
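
As a concrete illustration of the literature-summarization use case, the sketch below feeds an abstract to the model, reusing the model and tokenizer from the quick-start snippet above. The card does not document a prompt or chat template for the fine-tuned checkpoint, so the instruction-style prompt and the sample abstract here are assumptions.

```python
# Summarization sketch; the prompt format and abstract are illustrative
# assumptions, not documented behavior of this checkpoint.
abstract = (
    "Single-cell RNA sequencing of tumor-infiltrating lymphocytes revealed "
    "elevated expression of exhaustion markers PDCD1 and LAG3 in late-stage samples."
)
prompt = f"Summarize the following abstract in two sentences:\n\n{abstract}\n\nSummary:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```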

Training Details

The model was fine-tuned from the Qwen/Qwen2.5-1.5B base using LoRA via the PEFT library. The base weights were loaded with 4-bit QLoRA quantization, with training computations carried out in BF16 precision.
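
The card does not publish the exact fine-tuning hyperparameters, so the following is only an illustrative sketch of what a QLoRA setup of this kind typically looks like with the peft and bitsandbytes libraries; the rank, alpha, dropout, and target modules are assumed values, not the ones used for this checkpoint.

```python
# Illustrative QLoRA configuration; all hyperparameters below are assumptions,
# not the values used to train this checkpoint.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit QLoRA quantization of base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # BF16 compute, as stated above
)

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                      # assumed rank
    lora_alpha=32,             # assumed scaling
    lora_dropout=0.05,         # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```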

Limitations and Disclaimer

While useful for research and educational purposes, the model may hallucinate and has not been medically validated. It should not be used for clinical diagnosis, medical treatment decisions, drug prescription, or patient-specific advice, and its knowledge is limited to its training data. This model is intended for research and educational use only.