LorenaYannnnn/sycophancy-Qwen3-0.6B-baseline_all_tokens-seed_2

Text generation · Concurrency cost: 1 · Model size: 0.8B · Quant: BF16 · Context length: 32k · Published: Mar 15, 2026 · Architecture: Transformer

LorenaYannnnn/sycophancy-Qwen3-0.6B-baseline_all_tokens-seed_2 is a 0.8-billion-parameter language model with a 32,768-token context length. Its name indicates it is based on the Qwen3-0.6B architecture, but the model card provides no development details, and its primary differentiator and intended use case remain unspecified.


Model Overview

LorenaYannnnn/sycophancy-Qwen3-0.6B-baseline_all_tokens-seed_2 is distributed as a Hugging Face Transformers model, but most fields in its model card are marked "More Information Needed," so little is documented beyond the basic specifications below (see the loading sketch after the list).

Key Characteristics

  • Parameter Count: 0.8 billion parameters.
  • Context Length: Supports a long 32,768-token context window.
  • Base Architecture: The model name suggests it is derived from the Qwen3 series (specifically Qwen3-0.6B), though the card does not confirm this.
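
The card gives no usage instructions, but since the repository is tagged as a Transformers text-generation model, it can most likely be loaded with the standard causal-LM API. The sketch below is an assumption rather than documented usage: it presumes the repo hosts standard Qwen3-style weights and that the tokenizer config includes a chat template.

```python
# Minimal loading sketch with Hugging Face Transformers.
# Assumes standard causal-LM weights and a chat template in the repo;
# nothing in the model card confirms either.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LorenaYannnnn/sycophancy-Qwen3-0.6B-baseline_all_tokens-seed_2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quant listed above
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain your intended use case."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If the repository lacks a chat template, plain `tokenizer(text, return_tensors="pt")` encoding followed by `model.generate` should still work for base-model-style completion.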

Current Limitations

Because the model card is largely incomplete, the following aspects are currently unknown:

  • Developer and Funding: Not specified.
  • Model Type and Language(s): Undisclosed.
  • License: Not provided.
  • Finetuning Details: No information on the base model or finetuning process.
  • Intended Uses: Direct and downstream use cases are not defined.
  • Bias, Risks, and Limitations: No specifics are given beyond a generic recommendation that users be aware of potential issues.
  • Training Data and Procedure: Details on datasets, hyperparameters, and training regime are not available.
  • Evaluation Results: No performance metrics or testing data information is provided.

Users should exercise caution and seek further documentation before deploying this model, as critical information for responsible and effective use is currently missing.