MaziyarPanahi/calme-2.8-qwen2-7b
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Context Length: 32k · Published: Jun 27, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

MaziyarPanahi/calme-2.8-qwen2-7b is a 7.6-billion-parameter language model fine-tuned by Maziyar Panahi from the base model Qwen/Qwen2-7B. The fine-tune targets improved scores across standard benchmarks while remaining a general-purpose model for language understanding and generation, and it advertises a 131,072-token context length.
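As a usage sketch (not part of the original card), the model can be queried through the Hugging Face `transformers` library. The ChatML prompt format shown is the template used by Qwen2-family chat models; the `generate` helper and its parameters are illustrative, and model loading is deferred inside the function so nothing is downloaded at import time:

```python
# Minimal usage sketch for MaziyarPanahi/calme-2.8-qwen2-7b.
from typing import Dict, List


def build_chatml_prompt(messages: List[Dict[str, str]]) -> str:
    """Assemble a ChatML prompt (<|im_start|>role ... <|im_end|>) and
    append the assistant header so the model replies as the assistant."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)


def generate(messages: List[Dict[str, str]], max_new_tokens: int = 128) -> str:
    """Illustrative helper: loads the model and generates a reply.
    Heavy (downloads ~7.6B weights on first use), so it is defined
    here but deliberately not called."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # assumed installed

    model_id = "MaziyarPanahi/calme-2.8-qwen2-7b"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(build_chatml_prompt(messages), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


# Prompt construction alone needs no model download:
print(build_chatml_prompt([{"role": "user", "content": "Hello!"}]))
```

In practice `tokenizer.apply_chat_template(...)` produces the same ChatML text from the tokenizer's bundled template; the manual builder above just makes the format explicit.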
