vijay-ravichander/Qwen2.5-0.5B-Lexo-Sort-SFT-v1
Task: Text generation
Concurrency cost: 1
Model size: 0.5B
Quantization: BF16
Context length: 32k
Published: Jun 30, 2025
Architecture: Transformer
Status: Warm

vijay-ravichander/Qwen2.5-0.5B-Lexo-Sort-SFT-v1 is a 0.5 billion parameter language model fine-tuned from Qwen/Qwen2.5-0.5B-Instruct using supervised fine-tuning (SFT) with the TRL framework. Built on the Qwen2.5 architecture with a 32,768-token context length, the model targets general text generation tasks, and its fine-tuning is intended to strengthen its conversational and instruction-following capabilities across a range of applications.
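A minimal inference sketch using the Hugging Face transformers library is shown below. The model ID is taken from this card; the example prompt is an assumption based on the "Lexo-Sort" name and is purely illustrative.

```python
# Minimal sketch: load the model and run chat-style generation with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vijay-ravichander/Qwen2.5-0.5B-Lexo-Sort-SFT-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# BF16 matches the quantization listed on this card.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Hypothetical prompt; the exact task format the SFT data used is not documented here.
messages = [{"role": "user", "content": "Sort these words lexicographically: pear, apple, mango"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the model was fine-tuned from an instruct checkpoint, chat-template input (as above) is the likely intended interface, but plain-text prompting should also work for simple completions.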
