Phantomcloak19/gemma2-2b-phase2
Text generation · Concurrency cost: 1 · Model size: 2.6B · Quant: BF16 · Context length: 8k · Published: Feb 2, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

Phantomcloak19/gemma2-2b-phase2 is a 2.6-billion-parameter Gemma 2-based causal language model developed by Phantomcloak19. It was fine-tuned using Unsloth together with Hugging Face's TRL library, which is reported to enable roughly 2x faster training. With an 8192-token context length, it targets tasks where the Gemma 2 architecture performs well.
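A minimal loading sketch, assuming the model is consumed through the Hugging Face `transformers` library (the card does not prescribe a loading recipe; the `load` helper and its defaults are illustrative, while the model ID, BF16 quant, and 8192-token context come from the card above):

```python
MODEL_ID = "Phantomcloak19/gemma2-2b-phase2"
MAX_CTX = 8192  # context length stated on the card


def load(model_id: str = MODEL_ID):
    """Hypothetical helper: load tokenizer and model in BF16 to match the card.

    The transformers import is deferred so this sketch can be read (and the
    constants reused) without the library installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")
    return tokenizer, model
```

Prompts longer than `MAX_CTX` tokens would need truncation before generation, since the model was published with an 8k context window.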
