akshayballal/Qwen2.5-3B-Instruct-SFT-Pubmed-16bit-DFT

Text generation · Model size: 3.1B · Quantization: BF16 · Context length: 32K · Published: Jan 5, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Hosted on Hugging Face

The akshayballal/Qwen2.5-3B-Instruct-SFT-Pubmed-16bit-DFT model is a 3.09-billion-parameter instruction-tuned language model, fine-tuned from unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit. Developed by akshayballal, it was trained with Unsloth for faster fine-tuning and supports a context length of 32,768 tokens. As its name indicates, the model was fine-tuned on PubMed data, suggesting a specialization in biomedical and scientific text processing.


Model Overview

This model builds on Qwen2.5-3B-Instruct, fine-tuned from the unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit base using the Unsloth library for accelerated training, and is published as 16-bit (BF16) weights. Its 32,768-token context window allows it to process extensive inputs and generate comprehensive outputs.
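A minimal loading sketch using the standard Hugging Face transformers API; the model ID is taken from this page, while the dtype and device placement are generic assumptions rather than settings confirmed by the author.

```python
# Minimal loading sketch; only the model ID comes from this page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "akshayballal/Qwen2.5-3B-Instruct-SFT-Pubmed-16bit-DFT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quantization listed above
    device_map="auto",           # place weights on GPU if one is available
)
```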

Key Capabilities

  • Instruction Following: Designed to follow instructions accurately, making it suitable for a range of NLP tasks (a short generation sketch follows this list).
  • Accelerated Training: Benefits from Unsloth's optimizations, enabling faster fine-tuning processes.
  • Extended Context Window: A 32K token context length facilitates handling long documents and complex queries.
  • Specialized Domain: The model's name suggests a fine-tuning focus on the PubMed dataset, indicating potential strengths in biomedical and scientific text understanding and generation.
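As a sketch of instruction-based use, the snippet below applies a chat template and generates a reply. It assumes the fine-tune inherits the Qwen2.5 chat template from its base model (plausible, but not confirmed on this page) and reuses the `model` and `tokenizer` objects from the loading sketch above.

```python
# Instruction-following sketch, assuming a Qwen2.5-style chat template.
messages = [
    {"role": "system", "content": "You are a helpful biomedical assistant."},
    {"role": "user", "content": "List three common uses of metformin."},
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn marker
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```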

Good For

  • Biomedical NLP: Ideal for tasks requiring knowledge from the PubMed domain, such as scientific article summarization, medical question answering, or extracting information from research papers (see the summarization example after this list).
  • Instruction-based Tasks: Effective for general instruction-following applications where a 3B parameter model is sufficient.
  • Long-Context Applications: Its 32K context window makes it suitable for processing and generating content based on lengthy texts.
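To illustrate the long-context, biomedical use cases above, here is a hypothetical abstract-summarization call. The file name and prompt wording are illustrative placeholders, and the snippet again reuses `model` and `tokenizer` from the loading sketch.

```python
# Hypothetical summarization call; the input file is a placeholder, not real data.
abstract = open("pubmed_abstract.txt").read()  # any long biomedical text

messages = [
    {"role": "user",
     "content": f"Summarize the following PubMed abstract in two sentences:\n\n{abstract}"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

summary_ids = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(summary_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```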