LorenaYannnnn/sycophancy-Qwen3-0.6B-OURS_self-seed_2
Text generation · Concurrency cost: 1 · Model size: 0.8B · Quant: BF16 · Context length: 32k · Published: Mar 16, 2026 · Architecture: Transformer

LorenaYannnnn/sycophancy-Qwen3-0.6B-OURS_self-seed_2 is a 0.8-billion-parameter language model with a 32,768-token context length. It is based on the Qwen3 architecture and, per its name, is a self-seeded fine-tuned variant of Qwen3-0.6B related to sycophancy research. Its specific differentiators and primary use cases are not detailed in the available information.
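For completeness, here is a minimal usage sketch. It assumes the checkpoint is hosted on the Hugging Face Hub under the id above and loads with the standard `transformers` causal-LM classes, as Qwen3-based models typically do; the prompt and generation settings are illustrative, not from the model card.

```python
MODEL_ID = "LorenaYannnnn/sycophancy-Qwen3-0.6B-OURS_self-seed_2"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Load the model in BF16 (matching the listed quant) and generate a reply.

    Imports are deferred so the sketch can be read without transformers
    installed; loading the checkpoint requires network access to the Hub.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Explain what sycophancy means in language models."))
```

Loading in BF16 matches the precision listed in the metadata; a 0.8B model fits comfortably on CPU or a small GPU.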
