MCult01/glm-muse-elite-v4

TEXT GENERATION

  • Concurrency Cost: 1
  • Model Size: 9B
  • Quant: FP8
  • Ctx Length: 32k
  • Published: Apr 30, 2026
  • License: apache-2.0
  • Architecture: Transformer

MCult01/glm-muse-elite-v4 is a 9-billion-parameter language model developed by MCult01, finetuned from THUDM/GLM-4-9B-0414. Training focused on efficiency, using Unsloth together with Hugging Face's TRL library for 2x faster finetuning. The model offers a 32,768-token context length, making it suitable for tasks requiring extensive contextual understanding.


Overview

MCult01/glm-muse-elite-v4 is a 9-billion-parameter language model, finetuned by MCult01 from the THUDM/GLM-4-9B-0414 base model and released under the Apache-2.0 license. A key characteristic of this model is its optimized training process, which used Unsloth and Hugging Face's TRL library to achieve a 2x speedup in finetuning.
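An Unsloth + TRL finetuning run of this kind typically follows the pattern sketched below. This is a hedged illustration, not the author's actual training script: the dataset, LoRA settings, and hyperparameters are placeholder assumptions; only the base model name and context length come from the card.

```python
def finetune_sketch():
    """Illustrative Unsloth + TRL finetuning setup (requires a CUDA GPU).

    The base model and max sequence length match the card; the dataset,
    LoRA rank, and trainer hyperparameters are illustrative assumptions,
    not the author's actual configuration.
    """
    # Imports are local so this file can be read without unsloth installed.
    from unsloth import FastLanguageModel
    from trl import SFTTrainer, SFTConfig
    from datasets import load_dataset

    # Load the base model through Unsloth's patched, faster kernels.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="THUDM/GLM-4-9B-0414",
        max_seq_length=32768,   # matches the card's 32k context length
        load_in_4bit=True,      # assumption: QLoRA-style memory saving
    )

    # Attach LoRA adapters; rank and alpha are illustrative defaults.
    model = FastLanguageModel.get_peft_model(model, r=16, lora_alpha=16)

    # Placeholder dataset, purely for illustration.
    dataset = load_dataset("yahma/alpaca-cleaned", split="train")

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        args=SFTConfig(
            per_device_train_batch_size=2,
            gradient_accumulation_steps=8,
            max_steps=1000,
            output_dir="outputs",
        ),
    )
    trainer.train()
```

Unsloth's speedup comes from replacing the attention and MLP kernels at load time, which is why the trainer itself is plain TRL.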

Key Capabilities

  • Efficiently Finetuned: Benefits from accelerated training techniques using Unsloth and TRL.
  • GLM-4 Architecture: Inherits the robust capabilities of the GLM-4 family.
  • Extended Context: Features a 32,768-token context window, enabling processing of longer inputs.

Good For

  • Applications requiring a 9B parameter model with a strong GLM-4 foundation.
  • Use cases where efficient training and a permissive Apache-2.0 license are beneficial.
  • Tasks that demand a substantial context length for comprehensive understanding.
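For the use cases above, inference should go through the standard transformers text-generation path. The sketch below is an assumption-laden illustration, not verified against the actual repository: it presumes the checkpoint follows the usual causal-LM API (as its GLM-4-9B-0414 base does) and ships a chat template, and the dtype choice is a guess.

```python
def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Minimal inference sketch for MCult01/glm-muse-elite-v4.

    Assumes the model follows the standard transformers causal-LM API
    and requires a GPU with enough memory for 9B weights. Not verified
    against the actual repository.
    """
    # Imports are local so this file can be read without torch installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "MCult01/glm-muse-elite-v4"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # assumption; the card lists FP8 for serving
        device_map="auto",
    )

    # GLM-4 instruct checkpoints use a chat template; apply it to the prompt.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True)
```

The full 32k context is exercised simply by passing a longer prompt; no extra configuration should be needed beyond sufficient GPU memory for the KV cache.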