LorenaYannnnn/sycophancy-Qwen3-0.6B-OURS_self-seed_1
Text generation · Model size: 0.8B · Quantization: BF16 · Context length: 32k · Published: Mar 16, 2026 · Architecture: Transformer

LorenaYannnnn/sycophancy-Qwen3-0.6B-OURS_self-seed_1 is a 0.8-billion-parameter language model based on the Qwen3 architecture, with a 32,768-token context length. It is a fine-tuned variant developed to explore, and potentially mitigate, sycophancy in large language models through a self-seeded training approach. Its primary application is in research on model biases and undesirable conversational behaviors.
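A minimal usage sketch, assuming the checkpoint loads through the standard Hugging Face `transformers` causal-LM API (the prompt below is illustrative; verify the exact files and chat template in the repository before use):

```python
# Hypothetical usage sketch for this checkpoint via the `transformers` library.
# Assumes the repository follows the standard Qwen3 causal-LM layout.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "LorenaYannnnn/sycophancy-Qwen3-0.6B-OURS_self-seed_1"


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Download the checkpoint and generate a completion (requires network)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed on the model page.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    # An example sycophancy-style probe: a leading question with a false premise.
    print(generate("Is the Earth flat? I think it might be."))
```

The main guard keeps the download from running on import; for sycophancy evaluation one would typically compare completions on neutral versus opinion-laden phrasings of the same question.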
