gjyotin305/Qwen2.5-7B-Instruct_unsloth_w_new_merged
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Dec 26, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights
gjyotin305/Qwen2.5-7B-Instruct_unsloth_w_new_merged is a 7.6-billion-parameter instruction-tuned language model published by gjyotin305. It was finetuned from the Qwen2.5-7B-Instruct base model using the Unsloth library together with Hugging Face's TRL, which made training significantly faster. At 7.6B parameters it is suitable for applications that need a capable yet resource-conscious LLM.
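Like other Qwen2.5-Instruct derivatives, this model expects prompts in the ChatML conversation format. In practice, `tokenizer.apply_chat_template` from Hugging Face transformers builds this prompt for you; the sketch below shows the underlying layout explicitly, with `build_chatml_prompt` being a hypothetical helper written for illustration, not part of any library.

```python
# Minimal sketch of the ChatML prompt layout used by Qwen2.5 chat models.
# build_chatml_prompt is a hypothetical helper for illustration; in real
# usage, prefer tokenizer.apply_chat_template from Hugging Face transformers.

def build_chatml_prompt(system: str, user: str) -> str:
    """Format a single-turn conversation in ChatML, ending with the
    assistant header so the model generates the reply from there."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Summarize the Qwen2.5 model family in one sentence.",
)
print(prompt)
```

The trailing `<|im_start|>assistant` header is what cues the model to produce the assistant turn; omitting it typically degrades instruction-following.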