arcee-ai/GLM-4-32B-Base-32K
Text Generation · Concurrency Cost: 2 · Model Size: 32B · Quant: FP8 · Ctx Length: 32k · Published: Jun 23, 2025 · License: MIT · Architecture: Transformer · Open Weights
GLM-4-32B-Base-32K is a 32-billion-parameter language model developed by arcee-ai, built on THUDM's GLM-4-32B-Base-0414. The model is specifically engineered for robust performance over an extended 32,000-token context window, significantly improving recall compared to the base model, whose recall degraded beyond 8,192 tokens. It achieves this through targeted long-context training, iterative merging, and short-context distillation, making it well suited to tasks that require deep understanding and processing of long documents.
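When working with a fixed context window like the 32,000 tokens above, it helps to budget prompt length before sending a request. The sketch below is an illustration only: it uses a crude characters-per-token heuristic rather than the model's actual tokenizer (an assumption on our part), and a real deployment should count tokens with the model's own tokenizer.

```python
# Hedged sketch: keep a prompt within a 32k-token context window.
# CHARS_PER_TOKEN is a rough heuristic, not the model's real tokenizer.

CTX_LENGTH = 32_000      # context window stated on the model card
CHARS_PER_TOKEN = 4      # crude estimate; replace with a real tokenizer


def estimate_tokens(text: str) -> int:
    """Very rough token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)


def fit_to_context(prompt: str, reserve_for_output: int = 1_024) -> str:
    """Trim the prompt so prompt tokens plus reserved output tokens fit."""
    budget = CTX_LENGTH - reserve_for_output
    if estimate_tokens(prompt) <= budget:
        return prompt
    # Keep the tail of the text, since recent context is often most relevant.
    return prompt[-(budget * CHARS_PER_TOKEN):]
```

The `reserve_for_output` margin leaves room for the completion itself, so the combined prompt-plus-generation length stays inside the window.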
Popular Sampler Settings

The top parameter combinations used by Featherless users for this model cover: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p. No values are recorded for this model yet.
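The sampler parameters above are typically passed in the body of an OpenAI-compatible completions request. A minimal sketch follows; the numeric values are illustrative placeholders (the page records no actual user configs for this model), and fields such as `top_k`, `repetition_penalty`, and `min_p` are extensions accepted by many OpenAI-compatible servers rather than part of the core OpenAI schema.

```python
# Hedged sketch: package the sampler parameters listed above into a request
# payload for an OpenAI-compatible completions endpoint. All values below
# are illustrative defaults, not the (unrecorded) user configurations.

def build_sampler_payload(prompt: str, **overrides) -> dict:
    """Build a completions payload; keyword overrides replace the defaults."""
    payload = {
        "model": "arcee-ai/GLM-4-32B-Base-32K",
        "prompt": prompt,
        "temperature": 0.7,         # illustrative placeholder
        "top_p": 0.9,
        "top_k": 40,                # non-standard extension field
        "frequency_penalty": 0.0,
        "presence_penalty": 0.0,
        "repetition_penalty": 1.1,  # non-standard extension field
        "min_p": 0.05,              # non-standard extension field
    }
    payload.update(overrides)
    return payload
```

For example, `build_sampler_payload("Summarize:", temperature=0.3)` returns the same dictionary with only the temperature changed.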