MaziyarPanahi/calme-2.1-qwen2.5-72b
TEXT GENERATION
Concurrency Cost: 4 · Model Size: 72.7B · Quant: FP8 · Ctx Length: 32k · Published: Sep 19, 2024 · License: tongyi-qianwen · Architecture: Transformer
MaziyarPanahi/calme-2.1-qwen2.5-72b is a 72.7-billion-parameter language model fine-tuned by MaziyarPanahi from Qwen/Qwen2.5-72B-Instruct, with a context length of 32,768 tokens. The model aims to be a versatile and robust solution for advanced natural language understanding and generation, targeting applications such as complex question answering, intelligent chatbots, content creation, code generation, and problem solving.
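Since the model is an instruct-tuned Qwen2.5 derivative, it can typically be queried through any OpenAI-compatible chat-completions endpoint. The sketch below only assembles the request payload; the endpoint URL, API key, and sampling parameters are illustrative assumptions, not values taken from this page.

```python
import json

# The model identifier as listed on this page.
MODEL_ID = "MaziyarPanahi/calme-2.1-qwen2.5-72b"

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style chat-completion payload for this model.

    Note: this only builds the JSON body; sending it to a real
    endpoint (and the endpoint itself) is left as an assumption.
    """
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        # Stay well under the 32k-token context window.
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }

payload = build_chat_request("Summarize the Qwen2.5 architecture in one sentence.")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the provider's `/v1/chat/completions` route with an `Authorization: Bearer <key>` header, following the usual OpenAI-compatible convention.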