cemrekucukgode/gemma-2-2b-it-doktorsitesi

Text Generation · Concurrency Cost: 1 · Model Size: 2.6B · Quant: BF16 · Ctx Length: 8k · Published: Apr 13, 2026 · Architecture: Transformer

cemrekucukgode/gemma-2-2b-it-doktorsitesi is a 2.6-billion-parameter instruction-tuned language model based on the Gemma-2 architecture, with an 8192-token context length. The model card does not detail its specific differentiators or primary use cases, so it is best treated as a general-purpose instruction-following model pending further documentation.


Model Overview

This model, cemrekucukgode/gemma-2-2b-it-doktorsitesi, is an instruction-tuned language model with approximately 2.6 billion parameters. It is built upon the Gemma-2 architecture and supports a context length of 8192 tokens. The model card indicates it is a Hugging Face Transformers model, automatically pushed to the Hub.
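Since the card identifies this as a Hugging Face Transformers checkpoint, a minimal loading sketch looks like the following. This is an assumption-based example, not from the model card: it assumes the `transformers` and `torch` packages are installed and that the checkpoint loads through the standard `text-generation` pipeline (the BF16 weights, roughly 5 GB, are downloaded on first use).

```python
# Hedged sketch: loading this checkpoint with Hugging Face Transformers.
# Assumes `transformers` and `torch` are installed; weights download on first use.

model_id = "cemrekucukgode/gemma-2-2b-it-doktorsitesi"

# Gemma-2 instruction-tuned models expect chat-style role/content messages;
# the pipeline applies the model's chat template automatically.
messages = [{"role": "user", "content": "Hello, please introduce yourself."}]

def generate(max_new_tokens: int = 128) -> str:
    """Lazily build the pipeline and return the generated continuation."""
    from transformers import pipeline  # deferred: heavy import + weight download

    generator = pipeline(
        "text-generation",
        model=model_id,
        torch_dtype="bfloat16",  # matches the BF16 quantization listed above
    )
    result = generator(messages, max_new_tokens=max_new_tokens)
    return result[0]["generated_text"]
```

The deferred import keeps the module cheap to load; calling `generate()` triggers the actual download and inference.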

Key Characteristics

  • Parameter Count: Approximately 2.6 billion parameters, balancing capability with computational efficiency.
  • Context Length: An 8192-token context window lets the model process moderately long inputs while generating coherent responses.
  • Instruction-Tuned: Tuned to follow instructions, making it suitable for conversational and task-oriented applications.

Current Status and Limitations

According to the model card, details on its development, funding, language support, license, and fine-tuning origins are currently marked "More Information Needed." Direct use cases, downstream applications, out-of-scope uses, and information on biases, risks, and limitations are likewise unspecified, and training data, procedures, and evaluation metrics are pending documentation. Users should weigh these informational gaps when considering the model for any application.