airmgsa/qwen2.5-finetuned
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Dec 11, 2025 · Architecture: Transformer · Warm

airmgsa/qwen2.5-finetuned is a 1.5-billion-parameter instruction-tuned causal language model, fine-tuned from Qwen/Qwen2.5-1.5B-Instruct using the TRL framework. It supports a 131072-token context length and is designed for general text generation tasks, particularly instruction following, reflecting its fine-tuning.
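A minimal usage sketch for a model like this, assuming it is hosted on the Hugging Face Hub under the repo id airmgsa/qwen2.5-finetuned and loads through the standard transformers causal-LM API (both assumptions from this card, not verified):

```python
# Hedged sketch: repo id and chat-template support are assumptions
# based on the model card, not verified against the actual repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "airmgsa/qwen2.5-finetuned"

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # BF16 matches the quantization listed above.
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

    # Qwen2.5-Instruct derivatives typically ship a chat template,
    # so prompts are formatted via apply_chat_template.
    messages = [{"role": "user", "content": "Summarize instruction tuning in one sentence."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The loading and generation steps are kept under the `__main__` guard so the snippet can be imported without triggering a model download.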
