allura-org/GLM4-9B-Neon-v2
Text generation · Concurrency cost: 1 · Model size: 9B · Quant: FP8 · Context length: 32K · Published: Apr 26, 2025 · License: MIT · Architecture: Transformer · Open weights

GLM4-9B-Neon-v2 by allura-org is a 9 billion parameter instruction-tuned causal language model, fine-tuned for roleplay and short story generation. This model offers a distinct personality and strong prose, making it suitable for creative text generation tasks. It is based on the GLM-4-9B-0414 architecture and supports a 32K context length.


Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model vary the following sampler settings:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
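As a minimal sketch of how these sampler settings might be supplied, the payload below builds a chat-completion request body carrying each parameter listed above. The specific values are illustrative placeholders, not the actual top configurations (which this page does not show), and the assumption that the serving endpoint accepts OpenAI-style fields plus `top_k`, `repetition_penalty`, and `min_p` is not confirmed here:

```python
# Hypothetical sketch: a chat-completion request body that sets the
# sampler parameters listed on this model page. Values are placeholders,
# not the actual popular configs.
import json

payload = {
    "model": "allura-org/GLM4-9B-Neon-v2",
    "messages": [
        {"role": "user", "content": "Write a short scene set on a night train."}
    ],
    # Sampler settings named on this page; values chosen for illustration.
    "temperature": 0.8,
    "top_p": 0.95,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.05,
    "min_p": 0.05,
}

# Serialize to the JSON that would be POSTed to the completions endpoint.
body = json.dumps(payload, indent=2)
print(body)
```

Extended fields like `top_k` and `min_p` are supported by many open-model inference servers but are not part of the core OpenAI schema, so they may be ignored by providers that do not recognize them.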