aisingapore/Qwen-SEA-LION-v4-32B-IT
Text Generation · Model Size: 32B · Quant: FP8 · Context Length: 32K · Published: Oct 16, 2025 · Architecture: Transformer · Concurrency Cost: 2

Qwen-SEA-LION-v4-32B-IT is a 32-billion-parameter, instruction-tuned, decoder-only large language model developed by AI Singapore. Built on the Qwen3 architecture, it underwent continued pre-training on 100 billion tokens from the SEA-Pile v2 corpus, targeting seven Southeast Asian languages. The model is optimized for multilingual understanding and generation in the Southeast Asian context and supports a 32K-token context length.
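As a minimal sketch of querying the hosted model, the example below uses the OpenAI Python client against an OpenAI-compatible endpoint. The base URL and the FEATHERLESS_API_KEY environment variable are assumptions, not taken from this page; substitute your provider's values.

```python
# Minimal sketch: querying Qwen-SEA-LION-v4-32B-IT through an
# OpenAI-compatible API. Base URL and env var name are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key=os.environ["FEATHERLESS_API_KEY"],
)

resp = client.chat.completions.create(
    model="aisingapore/Qwen-SEA-LION-v4-32B-IT",
    messages=[
        # Malay prompt: "Translate into English: Good morning, how are you?"
        {"role": "user", "content": "Terjemahkan ke dalam bahasa Inggris: Selamat pagi, apa khabar?"}
    ],
)
print(resp.choices[0].message.content)
```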


Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model. Each config sets the sampler parameters listed below (see the sketch after the list for how they map onto an API request).

- temperature: scales the randomness of token sampling (lower is more deterministic)
- top_p: nucleus sampling; draws from the smallest token set whose cumulative probability exceeds p
- top_k: restricts sampling to the k highest-probability tokens
- frequency_penalty: penalizes tokens in proportion to how often they have already appeared
- presence_penalty: penalizes any token that has already appeared at least once
- repetition_penalty: multiplicatively down-weights previously generated tokens
- min_p: discards tokens whose probability falls below a fraction of the top token's probability
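The sketch below shows how these parameters can be passed in a request. The values are illustrative placeholders, not the actual top configs from the tabs (which are not reproduced here); parameters outside the core OpenAI schema, such as top_k, repetition_penalty, and min_p, are typically sent via extra_body on backends that accept them.

```python
# Illustrative sampler settings only; the real top-3 Featherless configs
# live in the interactive tabs and are not reproduced here.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key=os.environ["FEATHERLESS_API_KEY"],
)

resp = client.chat.completions.create(
    model="aisingapore/Qwen-SEA-LION-v4-32B-IT",
    # Malay prompt: "Write a pantun about rain."
    messages=[{"role": "user", "content": "Tulis pantun tentang hujan."}],
    temperature=0.7,        # placeholder value
    top_p=0.9,              # placeholder value
    frequency_penalty=0.0,  # placeholder value
    presence_penalty=0.0,   # placeholder value
    # Non-standard sampler knobs go through extra_body if supported:
    extra_body={"top_k": 40, "repetition_penalty": 1.05, "min_p": 0.05},
)
print(resp.choices[0].message.content)
```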