ggg-llms-team/TuQwen3-LR1e5-irm

TEXT GENERATION | Concurrency Cost: 1 | Model Size: 2B | Quant: BF16 | Ctx Length: 32k | Published: Feb 5, 2026 | Architecture: Transformer | Cold

The ggg-llms-team/TuQwen3-LR1e5-irm is a 2-billion-parameter language model with a 40,960-token context length. It is a fine-tuned variant, but the available documentation does not specify its architecture details or primary differentiators, so its intended use cases and unique capabilities remain undefined.


Model Overview

The ggg-llms-team/TuQwen3-LR1e5-irm is a 2-billion-parameter language model with an extended context window of 40,960 tokens. While the model has been pushed to the Hugging Face Hub, its model card currently marks details of its architecture, training data, development team, and unique capabilities as "More Information Needed". This includes specifics of its fine-tuning process, performance benchmarks, and intended applications.
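Since the checkpoint lives on the Hugging Face Hub, it can presumably be loaded through the standard transformers API. The sketch below assumes (the card does not confirm this) that the repo exposes an ordinary causal-LM checkpoint, as is typical for Qwen3-family fine-tunes, and that the BF16 quantization shown in the listing applies:

```python
# Hypothetical loading sketch; the repo layout is not documented in the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "ggg-llms-team/TuQwen3-LR1e5-irm"


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the checkpoint and generate a completion for `prompt`."""
    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    # BF16 matches the Quant field in the listing above.
    model = AutoModelForCausalLM.from_pretrained(
        REPO_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Summarize this model card in one sentence."))
```

If the repo turns out to use a custom architecture, `trust_remote_code=True` may be required; without published documentation that cannot be determined here.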

Key Characteristics

  • Parameter Count: 2 billion parameters
  • Context Length: 40,960 tokens
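These two figures allow a rough capacity estimate: at BF16 (2 bytes per weight), the model's weights alone occupy about 3.7 GiB. A quick sketch of that arithmetic, ignoring activations, KV cache, and framework overhead:

```python
# Rough BF16 weight footprint for a 2B-parameter model.
# Ignores activations, KV cache, and framework overhead.
params = 2_000_000_000   # parameter count from the listing
bytes_per_param = 2      # BF16 stores each weight in 2 bytes

weight_bytes = params * bytes_per_param
weight_gib = weight_bytes / 2**30
print(f"Weights: {weight_gib:.2f} GiB")  # ≈ 3.73 GiB
```

Actual serving memory will be higher, especially with long prompts, since the KV cache grows with context length.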

Current Limitations

Because the model card lacks detailed information, recommendations for direct use, downstream applications, or out-of-scope uses cannot be made with confidence. Users should note that comprehensive details on bias, risks, limitations, and evaluation results are not yet available; further information is needed to establish the model's optimal use cases and performance characteristics.