chenyongxi/Qwen2-0.5B-SFT-HH
Text Generation · Concurrency Cost: 1 · Model Size: 0.5B · Quant: BF16 · Ctx Length: 32k · Published: Mar 25, 2026 · Architecture: Transformer

The chenyongxi/Qwen2-0.5B-SFT-HH model is a 0.5-billion-parameter language model fine-tuned from Qwen/Qwen2.5-0.5B. It was trained with supervised fine-tuning (SFT) on the Anthropic/hh-rlhf dataset, which targets helpful and harmless responses. The model is intended for conversational AI and instruction-following tasks, offering a compact option for applications that need refined dialogue capabilities.
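
Below is a minimal usage sketch with the Hugging Face transformers library, assuming the repo follows the standard Qwen2 layout and ships a chat template (the prompt text is illustrative; adjust dtype and device placement for your hardware):

```python
# Minimal sketch: load the model and run one chat turn with transformers.
# Assumes the repo ships a tokenizer with a chat template, as Qwen2
# checkpoints typically do.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chenyongxi/Qwen2-0.5B-SFT-HH"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

# Build a single-turn chat prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "How do I politely decline a meeting?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```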
