a-m-team/AM-Thinking-v1
Task: Text Generation
Concurrency Cost: 2
Model Size: 32B
Quantization: FP8
Context Length: 32k
Published: May 10, 2025
License: apache-2.0
Architecture: Transformer

AM-Thinking-v1 is a 32 billion parameter dense language model developed by a-m-team, built upon the Qwen 2.5-32B-Base architecture. This model is specifically optimized for advanced reasoning tasks, demonstrating performance comparable to much larger Mixture-of-Experts (MoE) models while remaining deployable on a single high-end GPU. It excels in areas such as code generation, logical problem-solving, and creative writing, making it suitable for applications requiring strong analytical and generative capabilities.


Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model tune the following sampler parameters:

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
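As a rough sketch of how these sampler parameters fit together, the snippet below builds an OpenAI-compatible chat completions payload for this model. The specific values shown are illustrative placeholders, not the actual configurations used by Featherless users, and the endpoint/payload shape is an assumption of an OpenAI-style API:

```python
# Hypothetical sampler configuration for AM-Thinking-v1.
# All numeric values are illustrative defaults, NOT the measured
# "popular" settings from the page above.
payload = {
    "model": "a-m-team/AM-Thinking-v1",
    "messages": [
        {"role": "user", "content": "Explain binary search step by step."}
    ],
    # Core sampling controls
    "temperature": 0.7,        # randomness of token selection
    "top_p": 0.9,              # nucleus sampling cutoff
    "top_k": 40,               # restrict sampling to the top-k tokens
    "min_p": 0.05,             # drop tokens below this relative probability
    # Repetition controls
    "frequency_penalty": 0.0,  # penalize tokens by how often they appeared
    "presence_penalty": 0.0,   # penalize tokens that appeared at all
    "repetition_penalty": 1.05,  # multiplicative repetition discouragement
}
```

This dictionary would typically be serialized as the JSON body of a POST request to a chat completions endpoint; only the parameters a given server supports should be included.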