akshayballal/Qwen2.5-7B-Instruct-SFT-Pubmed-16bit-DFT
Text generation · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Concurrency cost: 1 · Published: Jan 7, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights
akshayballal/Qwen2.5-7B-Instruct-SFT-Pubmed-16bit-DFT is a 7.6-billion-parameter instruction-tuned causal language model. It is a fine-tuned variant of Qwen2.5-7B-Instruct, adapted for tasks involving PubMed content, and was trained with Unsloth and Hugging Face's TRL library for faster fine-tuning.
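As a fine-tune of Qwen2.5-7B-Instruct, the model should load with the standard Hugging Face `transformers` causal-LM classes and the Qwen chat template. The sketch below is illustrative, not taken from this page: the prompt text and generation parameters are assumptions, and only the model id comes from the card.

```python
# Minimal sketch of loading and prompting the model with Hugging Face
# transformers. Assumes standard AutoModelForCausalLM support inherited
# from the Qwen2.5-7B-Instruct base model; generation settings are
# illustrative, not prescribed by the model card.

MODEL_ID = "akshayballal/Qwen2.5-7B-Instruct-SFT-Pubmed-16bit-DFT"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Run one chat-style generation with the fine-tuned model."""
    # Lazy import so the heavy dependencies load only when generating.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Qwen2.5-Instruct variants expect chat-template formatting.
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Example usage (downloads ~15 GB of weights; hypothetical prompt):
# print(generate("Summarize the role of TNF-alpha in inflammation."))
```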