laion/sft_GLM-4-7-swesmith-sandboxes-with_tests-oracle_verified_120s-maxeps-131k_Qwen3-32B
Text Generation · Concurrency Cost: 2 · Model Size: 32B · Quant: FP8 · Ctx Length: 32k · Published: Feb 26, 2026 · License: other · Architecture: Transformer · Status: Warm

This model is a 32-billion-parameter fine-tune of Qwen3-32B, developed by laion. It was trained on the GLM-4.7-swesmith-sandboxes-with_tests-oracle_verified_120s-maxeps-131k dataset, indicating a focus on specific task-oriented performance. With a 32,768-token context length, it is designed for applications that require extensive contextual understanding and generation grounded in its specialized training data.


Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model. The configurable sampler parameters are:

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
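The sampler parameters listed above map directly onto fields of an OpenAI-compatible completion request body. Below is a minimal sketch of how such a request payload might be assembled; the prompt, `max_tokens`, and all sampler values are illustrative placeholders, not the popular configurations from this page, and the endpoint path is an assumption about an OpenAI-style API.

```python
import json

# Hedged sketch: assembling a completion request that sets the sampler
# parameters listed above. All values here are placeholder assumptions;
# substitute one of the popular configs when calling the real service.
payload = {
    "model": "laion/sft_GLM-4-7-swesmith-sandboxes-with_tests-oracle_verified_120s-maxeps-131k_Qwen3-32B",
    "prompt": "Write a unit test for a function that reverses a string.",
    "max_tokens": 512,
    # Sampler settings (placeholder values):
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.05,
    "min_p": 0.05,
}

print(json.dumps(payload, indent=2))
# To send: POST this JSON to the provider's /v1/completions endpoint
# (an assumed OpenAI-compatible path) with an Authorization header.
```

Note that `repetition_penalty` and `min_p` are extensions beyond the core OpenAI parameter set; many OpenAI-compatible servers accept them, but support varies by provider.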