Qwen/Qwen2-Math-72B
Text Generation

- Concurrency Cost: 4
- Model Size: 72.7B
- Quant: FP8
- Ctx Length: 32k
- Published: Aug 8, 2024
- License: tongyi-qianwen
- Architecture: Transformer
Qwen/Qwen2-Math-72B is a 72.7 billion parameter large language model developed by Qwen, specifically designed and optimized for advanced mathematical problem-solving and complex, multi-step logical reasoning. Built upon the Qwen2 LLM series, this model significantly enhances mathematical capabilities, outperforming many open-source and even some closed-source models in arithmetic and mathematical tasks. It is a base model intended for completion and few-shot inference, serving as an excellent starting point for fine-tuning in mathematical domains.
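Because this is a base model rather than an instruction-tuned one, it has no chat template: prompts are plain text continuations, and few-shot inference means prepending worked examples so the model continues the pattern. A minimal sketch of such a prompt builder (the example questions and the `Question:`/`Answer:` format are illustrative choices, not a prescribed template):

```python
# Sketch: assembling a few-shot completion prompt for a base model such as
# Qwen/Qwen2-Math-72B. The worked examples below are hypothetical; any
# consistent question/answer format works for pattern continuation.
FEW_SHOT_EXAMPLES = [
    ("What is 12 * 8?", "12 * 8 = 96. The answer is 96."),
    ("What is 15 + 27?", "15 + 27 = 42. The answer is 42."),
]

def build_few_shot_prompt(question: str) -> str:
    """Concatenate worked examples, then the new question, ending at
    'Answer:' so the model's completion supplies the solution."""
    parts = [f"Question: {q}\nAnswer: {a}" for q, a in FEW_SHOT_EXAMPLES]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt("What is 9 * 7?")
print(prompt)
```

The resulting string is what you would send to a completions endpoint; ending the prompt at `Answer:` is what cues a base model to produce the solution rather than another question.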
Popular Sampler Settings
The three parameter combinations most commonly used by Featherless users for this model.
| Parameter | Value |
|---|---|
| temperature | – |
| top_p | – |
| top_k | – |
| frequency_penalty | – |
| presence_penalty | – |
| repetition_penalty | – |
| min_p | – |
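The parameters above map directly onto the fields of a text-completion request. A minimal sketch of building such a request body, assuming an OpenAI-style completions payload (the endpoint shape and the sample values are illustrative assumptions, not the configs from this page, whose values did not load):

```python
# Sketch (assumption): sampler settings passed as fields of an
# OpenAI-style completions request. Values below are placeholders,
# not this model's actual popular configs.
SAMPLER_KEYS = {
    "temperature", "top_p", "top_k", "frequency_penalty",
    "presence_penalty", "repetition_penalty", "min_p",
}

def build_completion_request(prompt: str, sampler: dict) -> dict:
    """Merge sampler settings into a request body, rejecting keys
    outside the parameter set listed above."""
    unknown = set(sampler) - SAMPLER_KEYS
    if unknown:
        raise ValueError(f"unsupported sampler keys: {sorted(unknown)}")
    return {"model": "Qwen/Qwen2-Math-72B", "prompt": prompt, **sampler}

req = build_completion_request(
    "Question: What is 2 + 2?\nAnswer:",
    {"temperature": 0.7, "top_p": 0.8, "repetition_penalty": 1.05},
)
```

Validating keys up front catches typos like `rep_penalty` locally instead of letting the server silently ignore them.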