abacusai/Smaug-Qwen2-72B-Instruct
Text Generation

Concurrency Cost: 4
Model Size: 72.7B
Quant: FP8
Ctx Length: 32k
Published: Jun 26, 2024
License: tongyi-qianwen
Architecture: Transformer

Smaug-Qwen2-72B-Instruct is a 72.7-billion-parameter instruction-tuned causal language model from abacusai, fine-tuned from Qwen2-72B-Instruct. It is optimized for complex reasoning and problem-solving, with improved scores on benchmarks such as Big-Bench Hard (BBH), LiveCodeBench, and Arena-Hard relative to its base model. The model's native context length is 131,072 tokens (served here with a 32k window), making it well suited to applications that require deep contextual understanding and advanced analytical capabilities.
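For local experimentation, the checkpoint can be loaded with Hugging Face transformers. The sketch below is a minimal example, not part of this page: it assumes the public checkpoint at abacusai/Smaug-Qwen2-72B-Instruct and enough GPU memory to shard a 72.7B model across your devices.

```python
# Minimal sketch: loading Smaug-Qwen2-72B-Instruct with transformers.
# Hardware assumptions are ours, not the model card's: a 72.7B model
# needs several high-memory GPUs even at reduced precision.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/Smaug-Qwen2-72B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native dtype
    device_map="auto",    # shard across available GPUs
)

# Qwen2-style chat formatting via the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Explain FP8 quantization in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```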


Popular Sampler Settings

The three sampler configurations most used by Featherless users for this model tune the following parameters: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p. A request sketch using these parameters follows.
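The sketch below shows how these sampler settings might be passed in a chat completion request, assuming an OpenAI-compatible endpoint such as the one Featherless exposes. The base URL and every sampler value are illustrative assumptions; the page's actual top configurations are not reproduced here.

```python
# Minimal sketch: passing sampler settings in a chat completion request
# against an assumed OpenAI-compatible endpoint. All values below are
# illustrative placeholders, not the page's actual popular configs.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",                    # placeholder
)

response = client.chat.completions.create(
    model="abacusai/Smaug-Qwen2-72B-Instruct",
    messages=[{"role": "user", "content": "Summarize the BBH benchmark."}],
    temperature=0.7,           # illustrative values only
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # top_k, repetition_penalty, and min_p are not part of the core
    # OpenAI schema; many compatible servers accept them as extra
    # JSON fields, which the openai SDK forwards via extra_body.
    extra_body={"top_k": 40, "repetition_penalty": 1.05, "min_p": 0.05},
)
print(response.choices[0].message.content)
```

Whether the non-standard samplers are honored depends on the serving backend; if a server ignores unknown fields, the request still succeeds with only the standard parameters applied.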