MaziyarPanahi/calme-2.7-qwen2-7b
Text generation · Open weights · Model size: 7.6B · Quantization: FP8 · Context length: 32k · License: apache-2.0 · Architecture: Transformer · Concurrency cost: 1
MaziyarPanahi/calme-2.7-qwen2-7b is a 7.6 billion parameter language model fine-tuned from Qwen2-7B by MaziyarPanahi. The fine-tune improves on the base model across several benchmarks, particularly in reasoning and general language understanding. It is intended for applications that need a capable general-purpose language model, with support for a context length of up to 131,072 tokens.
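A minimal usage sketch with the Hugging Face `transformers` library, assuming the checkpoint is published on the Hub under this model id; the prompt and generation settings are illustrative only:

```python
MODEL_ID = "MaziyarPanahi/calme-2.7-qwen2-7b"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    # Imports are kept inside the function so the sketch can be read
    # (and the constant reused) without transformers/torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # pick the dtype stored in the checkpoint
        device_map="auto",    # place weights on GPU if one is available
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Explain attention in one sentence."))
```

Downloading the 7.6B-parameter weights requires substantial disk and memory; for serving at scale, an inference engine such as vLLM is a common alternative to raw `generate` calls.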