LorenaYannnnn/sycophancy-Qwen3-0.6B-baseline_all_tokens-seed_2
Text Generation
Concurrency Cost: 1
Model Size: 0.8B
Quant: BF16
Ctx Length: 32k
Published: Mar 15, 2026
Architecture: Transformer
Status: Warm
LorenaYannnnn/sycophancy-Qwen3-0.6B-baseline_all_tokens-seed_2 is a 0.8-billion-parameter language model with a 32,768-token context length, based on the Qwen3 architecture. Its model card provides little additional information, so its primary differentiator, development details, and intended use case are currently unspecified.
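Because the checkpoint is published under a standard Hugging Face model ID, it can likely be loaded with the transformers library. The sketch below is a minimal example, assuming the repository follows the usual Qwen3 causal-LM layout; the prompt and generation settings are illustrative, not taken from the model card.

```python
# Minimal loading sketch (assumes a standard Qwen3 causal-LM checkpoint).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LorenaYannnnn/sycophancy-Qwen3-0.6B-baseline_all_tokens-seed_2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the listed BF16 quantization
    device_map="auto",
)

prompt = "Explain what a context window is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens.
new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```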
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model. No values have been recorded for this model yet; a request sketch using these parameters follows the table below.
temperature: –
top_p: –
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
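The parameters above are the sampler knobs the listing tracks. The sketch below shows how they might be passed in an OpenAI-compatible completion request, such as the one Featherless serves; the base URL, the server's acceptance of non-standard fields via extra_body, and all numeric values are assumptions, not the (unrecorded) popular settings.

```python
# Sketch of a chat completion request with explicit sampler settings.
# Endpoint and parameter values are placeholders, not documented defaults.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="LorenaYannnnn/sycophancy-Qwen3-0.6B-baseline_all_tokens-seed_2",
    messages=[{"role": "user", "content": "Hello!"}],
    # Standard OpenAI-style sampler parameters.
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-standard parameters, forwarded only if the server accepts them.
    extra_body={
        "top_k": 40,
        "min_p": 0.05,
        "repetition_penalty": 1.05,
    },
)

print(response.choices[0].message.content)
```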