peremayolc/qwen-trials
Text Generation
Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k
Published: Jan 18, 2026 · Architecture: Transformer · Status: Warm

The peremayolc/qwen-trials model is a 1.5-billion-parameter causal language model, fine-tuned from Qwen/Qwen2.5-1.5B-Instruct using the TRL framework. It is optimized for instruction-following tasks and retains the base model's general language-generation capabilities. With a 32,768-token context length, it is suited to applications that process long inputs and generate coherent, extended responses.
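Since the model is fine-tuned from a Qwen2.5 instruct checkpoint, prompts are normally rendered with the tokenizer's own chat template (`tokenizer.apply_chat_template()` in Transformers). As a rough sketch of what that template produces, the snippet below builds a ChatML-style prompt by hand; the exact format is an assumption based on the ChatML convention Qwen2.5-Instruct models use, and `build_chatml_prompt` is a hypothetical helper, not part of any library.

```python
# Manual ChatML-style prompt construction (assumed format for
# Qwen2.5-family instruct models; in practice, prefer
# tokenizer.apply_chat_template()).

def build_chatml_prompt(messages):
    """Render a list of {"role", "content"} dicts into a ChatML prompt,
    appending the assistant header so generation continues from there."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
        for m in messages
    ]
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize TRL in one sentence."},
])
print(prompt)
```

The rendered string is what the tokenizer ultimately encodes; anything the model emits after the trailing `<|im_start|>assistant\n` header is the reply.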
