akshayballal/Qwen3-4B-Instruct-SFT-Pubmed-16bit-DFT

Parameters: 4B · Tensor type: BF16 · Context length: 40,960 · Updated: Jan 26, 2026 · License: apache-2.0

Model Overview

akshayballal/Qwen3-4B-Instruct-SFT-Pubmed-16bit-DFT is a 4-billion-parameter instruction-tuned model based on the Qwen3 architecture. It was developed by akshayballal and fine-tuned from unsloth/qwen3-4b-unsloth-bnb-4bit.

Key Characteristics

  • Architecture: Qwen3, a powerful transformer-based causal language model.
  • Parameter Count: 4 billion parameters, offering a balance between performance and computational efficiency.
  • Training Optimization: Fine-tuned with Unsloth and Hugging Face's TRL library, which together made training about 2x faster (a minimal setup sketch follows this list).
  • Instruction-Tuned: Optimized for following instructions and generating coherent, relevant responses based on given prompts.
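
The sketch below shows how a fine-tune like this is commonly set up with Unsloth and TRL. It is not the author's actual recipe: the dataset file, LoRA settings, and hyperparameters are illustrative assumptions, and some argument names differ between TRL versions.

```python
# Minimal Unsloth + TRL supervised fine-tuning sketch (illustrative only).
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Load the 4-bit Unsloth base checkpoint this model was fine-tuned from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen3-4b-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; Unsloth patches the model for faster training.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset with a "text" column of chat-formatted examples.
dataset = load_dataset("json", data_files="pubmed_sft.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,  # named `tokenizer` in older TRL versions
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        max_steps=100,
        output_dir="outputs",
    ),
)
trainer.train()
```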

Intended Use Cases

This model is suitable for natural language processing tasks where instruction following is crucial, and its efficient training pipeline and 4B size keep deployment costs modest for its performance class. A minimal inference sketch follows the list below.

  • General Instruction Following: Responding to prompts, answering questions, and generating text based on specific instructions.
  • Text Generation: Creating diverse forms of content, from summaries to creative writing, within its capabilities.
  • Research and Development: Serving as a base for further fine-tuning or experimentation in specific domains, leveraging its Qwen3 foundation.
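
The following is a minimal sketch of loading the model and running a single chat turn with the standard Hugging Face transformers API. The prompt and generation settings are illustrative assumptions, not recommendations from the model author.

```python
# Minimal inference sketch with transformers (illustrative settings).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "akshayballal/Qwen3-4B-Instruct-SFT-Pubmed-16bit-DFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="bfloat16", device_map="auto"
)

messages = [
    {"role": "user",
     "content": "Summarize the key finding of a study on ACE2 expression in lung tissue."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```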