L1nus/qwen3-4B-instruct-pubmed-answer-only-artificial-5000

Text generation · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Mar 25, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

L1nus/qwen3-4B-instruct-pubmed-answer-only-artificial-5000 is a 4 billion parameter Qwen3 instruction-tuned model developed by L1nus. It is fine-tuned specifically for generating answers to PubMed-related questions, using a dataset of 5,000 artificial PubMed-style question-answer pairs. Training was performed with Unsloth and Hugging Face's TRL library for faster fine-tuning, while retaining the 32,768-token context length.


Model Overview

L1nus/qwen3-4B-instruct-pubmed-answer-only-artificial-5000 is a 4 billion parameter Qwen3 instruction-tuned model developed by L1nus. It is fine-tuned from unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit and supports a 32,768-token context length. Its primary specialization is generating direct answers to PubMed-related queries.
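The model card does not include a usage snippet; the following is a minimal sketch of querying the model with the Transformers library, assuming it loads like any other Qwen3 instruct checkpoint. The example question and generation settings are illustrative and not taken from the model's training data.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "L1nus/qwen3-4B-instruct-pubmed-answer-only-artificial-5000"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Chat-formatted prompt; the question below is a placeholder PubMed-style query.
messages = [{"role": "user", "content": "What is the role of TNF-alpha in rheumatoid arthritis?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a short, direct answer in the style the model was tuned for.
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```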

Key Capabilities

  • PubMed-focused Answering: Trained specifically to give direct answers to questions grounded in PubMed-style biomedical literature.
  • Efficient Training: Fine-tuned with Unsloth and Hugging Face's TRL library, which shortens training time (a rough training sketch follows this list).
  • Qwen3 Architecture: Benefits from the underlying Qwen3 architecture, providing a robust base for instruction following.
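The exact training script is not published here. As an illustration only, a fine-tune of this kind is typically set up along the following lines with Unsloth's FastLanguageModel and TRL's SFTTrainer; the dataset file, text field, LoRA settings, and trainer hyperparameters below are assumptions, not the author's actual configuration.

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load the 4-bit base checkpoint named in the overview, via Unsloth's fast loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit",
    max_seq_length=32768,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are placeholder choices.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical local file holding the 5,000 artificial question-answer pairs,
# already rendered into a single "text" field per example.
dataset = load_dataset("json", data_files="pubmed_qa_artificial_5000.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```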

Use Cases

  • Biomedical Information Retrieval: Ideal for applications requiring concise answers to questions derived from or related to biomedical research and literature.
  • Automated PubMed Query Answering: Can be integrated into systems that automatically answer user queries against a PubMed-like knowledge base (see the wrapper sketch after this list).
  • Research Assistance: Useful for researchers seeking quick, targeted information extraction from medical and scientific texts.
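For the integration scenario above, a thin wrapper around the Transformers text-generation pipeline is usually sufficient. The helper name and example query below are hypothetical.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="L1nus/qwen3-4B-instruct-pubmed-answer-only-artificial-5000",
    device_map="auto",
)

def answer_pubmed_query(question: str) -> str:
    """Return the model's direct answer to a single biomedical question."""
    chat = [{"role": "user", "content": question}]
    result = generator(chat, max_new_tokens=256)
    # The pipeline returns the full conversation; the last message is the model's answer.
    return result[0]["generated_text"][-1]["content"]

print(answer_pubmed_query("Which biomarkers are associated with early-stage pancreatic cancer?"))
```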