Model Overview
This model, LorenaYannnnn/sycophancy-Qwen3-0.6B-OURS_self-seed_0, is a compact language model with 0.8 billion parameters built upon the Qwen3 architecture. It features a substantial context length of 32768 tokens, allowing for processing of extensive inputs.
Key Characteristics
- Architecture: Based on the Qwen3 model family.
- Parameter Count: 0.8 billion parameters, making it relatively efficient for research and specific applications.
- Context Length: Supports a long context window of 32768 tokens.
- Specialization: The model's naming suggests a focus on the phenomenon of "sycophancy," likely through self-seeded training. This implies it is designed for studying or exhibiting responses that align with perceived user preferences.
Potential Use Cases
- Research into Model Alignment: Ideal for academic or industrial research exploring how models generate sycophantic responses.
- Behavioral Analysis: Can be used to analyze the conditions under which LLMs exhibit bias towards user flattery or agreement.
- Controlled Experimentation: Its specialized nature makes it suitable for controlled experiments on model ethics and response generation.
Due to the limited information in the provided model card, specific training details, performance benchmarks, and further intended uses are not available. Users should exercise caution and conduct their own evaluations when deploying this model, especially given its specialized focus on sycophancy.