l3lab/L1-Qwen-7B-Exact
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Jul 12, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

L1-Qwen-7B-Exact is a 7.6-billion-parameter language model developed by l3lab, built on DeepSeek-R1-Distill-Qwen-7B. It is designed for general language understanding and generation, with a maximum context length of 131,072 tokens (the hosted endpoint above lists a 32k serving context). Its foundation on a distilled Qwen model suggests a focus on efficient inference while retaining strong capability across a range of tasks.
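As a minimal illustration of budgeting against the model's context window, the sketch below (pure Python; the helper name and the use of pre-counted token totals are illustrative assumptions, not part of this model's API) checks whether a prompt plus a requested generation budget fits within the 131,072-token context:

```python
CONTEXT_LENGTH = 131_072  # maximum context length reported for L1-Qwen-7B-Exact

def fits_in_context(prompt_tokens: int, max_new_tokens: int,
                    context_length: int = CONTEXT_LENGTH) -> bool:
    """Return True if the prompt plus the generation budget fits the window.

    prompt_tokens and max_new_tokens are assumed to be token counts
    produced by the model's own tokenizer (not shown here).
    """
    return prompt_tokens + max_new_tokens <= context_length

# A 4,000-token prompt with a 2,048-token generation budget fits comfortably:
print(fits_in_context(4_000, 2_048))      # True
# A 130,000-token prompt with the same budget would overflow the window:
print(fits_in_context(130_000, 2_048))    # False
```

In practice the same check applies at the serving layer: if the endpoint enforces a 32k window, pass `context_length=32_768` instead of the model's maximum.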
