khazarai/Bio-8B-it
Text generation · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Feb 16, 2026 · License: apache-2.0 · Architecture: Transformer · Concurrency cost: 1 · Open weights

khazarai/Bio-8B-it is an 8-billion-parameter biomedical instruction-tuned language model built on the Qwen3-8B architecture. Fine-tuned with QLoRA on a GPT-4-generated synthetic dataset, it targets instruction-following tasks in biomedical and clinical natural language processing, such as biomedical question answering, clinical text summarization, and differential diagnosis reasoning.


Model Overview

khazarai/Bio-8B-it is an 8-billion-parameter biomedical instruction-tuned language model based on the Qwen3-8B architecture. It was adapted via supervised fine-tuning (SFT) with QLoRA, which keeps training memory-efficient; the resulting low-rank adapters were merged back to yield a full 16-bit model. Training used a synthetic dataset of 25,000 instruction–response pairs, generated by GPT-4 in the style of the Self-Instruct methodology and spanning diverse biomedical tasks.

Key Capabilities

This model is specifically optimized for instruction-following in biomedical and clinical NLP, including:

  • Biomedical question answering
  • Clinical text summarization
  • Information extraction from clinical texts
  • Clinical trial eligibility assessment
  • Differential diagnosis reasoning
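For tasks like the ones above, the model would typically be prompted through the standard Hugging Face transformers chat API, with the chat template supplied by the tokenizer. A hedged sketch (the system prompt, example question, and generation settings are illustrative, not from the model card):

```python
def build_messages(instruction: str) -> list[dict]:
    """Wrap a biomedical instruction in a chat-format message list."""
    return [
        {"role": "system", "content": (
            "You are a biomedical assistant. Answer precisely and "
            "state uncertainty where appropriate.")},
        {"role": "user", "content": instruction},
    ]


if __name__ == "__main__":
    # Requires: pip install transformers accelerate torch (not run here).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "khazarai/Bio-8B-it"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = build_messages(
        "Summarize the mechanism of action of metformin.")
    # The tokenizer's chat template formats the messages for Qwen3-style chat.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0][inputs.shape[-1]:],
                           skip_special_tokens=True))
```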

Intended Use Cases

Bio-8B-it is suitable for:

  • Biomedical NLP research and experimentation
  • Developing instruction-following biomedical assistants
  • Academic evaluation on biomedical NLP benchmarks

Note that this model is not intended for direct clinical decision-making, real-world medical diagnosis, or deployment in safety-critical healthcare systems; it should not replace licensed medical professionals.