xzitao/GALM_luquLine_7B
Task: Text Generation
Concurrency Cost: 1
Model Size: 7.6B parameters
Quantization: FP8
Context Length: 32k
Published: Mar 16, 2026
License: apache-2.0
Architecture: Transformer (open weights)

GALM_luquLine_7B is a 7.6 billion parameter Qwen2-based, instruction-tuned causal language model developed by xzitao. It was fine-tuned with Unsloth and Hugging Face's TRL library, which the authors report enabled roughly 2x faster training. The model targets general instruction-following tasks, building on the Qwen2 architecture. A minimal usage sketch is shown below.
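
The following is a minimal sketch of how such a Qwen2-based instruction-tuned checkpoint is typically loaded and queried with the standard Hugging Face `transformers` chat-template path. The prompt, generation settings, and dtype/device handling are illustrative assumptions, not part of the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xzitao/GALM_luquLine_7B"

# Load tokenizer and model; "auto" defers dtype selection to the checkpoint
# (the card lists FP8 weights) and device_map spreads layers across devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

# Instruction-tuned Qwen2 checkpoints ship a chat template, so format the
# request as a chat message rather than raw text.
messages = [
    {"role": "user", "content": "Explain instruction tuning in two sentences."}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate within the model's 32k context window and decode only the new tokens.
output_ids = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Sampling parameters (temperature, top-p) and the maximum new-token budget should be tuned per task; the defaults above are placeholders.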
