MCult01/glm-muse-v2

Text Generation · Concurrency Cost: 1 · Model Size: 9B · Quant: FP8 · Ctx Length: 32k · Published: Apr 10, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

MCult01/glm-muse-v2 is a 9-billion-parameter language model developed by MCult01, finetuned from THUDM/GLM-4-9B-0414. It was trained with Unsloth and Hugging Face's TRL library, which the author reports gave a 2x training speedup, and is intended for general language understanding and generation tasks.


MCult01/glm-muse-v2: Efficiently Finetuned GLM-4-9B

MCult01/glm-muse-v2 builds on the THUDM/GLM-4-9B-0414 base model. It distinguishes itself through its optimized training process: finetuning with Unsloth and Hugging Face's TRL library, which the author reports ran roughly 2x faster than a standard finetuning setup.
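The training setup described above can be sketched as follows. This is an illustrative reconstruction, not the author's actual recipe: the dataset formatting, LoRA settings, and hyperparameters are all assumptions; only the base model (THUDM/GLM-4-9B-0414) and the Unsloth + TRL toolchain come from the card.

```python
# Illustrative Unsloth + TRL supervised-finetuning sketch. Only the base
# model and the toolchain are stated on the model card; everything else
# (formatting template, LoRA settings, hyperparameters) is hypothetical.

def format_example(instruction: str, response: str) -> dict:
    """Pack one instruction/response pair into the single "text" field that
    TRL's SFTTrainer consumes by default (a generic template, not
    necessarily the one used for glm-muse-v2)."""
    return {"text": f"### Instruction:\n{instruction}\n### Response:\n{response}"}

def finetune(train_dataset):
    """Run a LoRA finetune of GLM-4-9B-0414. Heavy: requires a CUDA GPU
    plus the unsloth and trl packages, so the imports are deferred."""
    from unsloth import FastLanguageModel
    from trl import SFTConfig, SFTTrainer

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="THUDM/GLM-4-9B-0414",
        max_seq_length=32768,   # matches the card's 32k context length
        load_in_4bit=True,      # assumption: QLoRA-style memory savings
    )
    # Attach LoRA adapters; rank and alpha here are placeholder values.
    model = FastLanguageModel.get_peft_model(model, r=16, lora_alpha=16)

    trainer = SFTTrainer(
        model=model,
        processing_class=tokenizer,
        train_dataset=train_dataset,
        args=SFTConfig(
            output_dir="glm-muse-v2",
            per_device_train_batch_size=2,
            num_train_epochs=1,
        ),
    )
    trainer.train()
```

Unsloth's speedup comes from fused kernels and memory-efficient backprop, which is consistent with the 2x figure the card cites, though the exact gain depends on hardware and sequence length.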

Key Capabilities

  • Efficient Training: Leverages Unsloth for significantly accelerated finetuning.
  • GLM-4 Foundation: Benefits from the strong base capabilities of the GLM-4-9B model.
  • General Language Tasks: Suitable for a wide range of natural language processing applications.

Good For

  • Developers seeking a performant 9B parameter model with an Apache-2.0 license.
  • Use cases requiring a model finetuned with advanced, speed-optimized techniques.
  • Applications where the underlying GLM-4 architecture is a preferred choice.
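For developers evaluating the model, it should load like any other Hub-hosted causal LM. A minimal sketch, assuming the repo id MCult01/glm-muse-v2 is available on the Hugging Face Hub and inherits the GLM-4-9B-0414 chat template:

```python
def build_messages(user_prompt: str) -> list:
    """Wrap a plain prompt in the chat-message format consumed by
    tokenizer.apply_chat_template()."""
    return [{"role": "user", "content": user_prompt}]

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a reply. Heavy: downloads the 9B-parameter
    weights and needs a GPU, so the transformers import is deferred."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "MCult01/glm-muse-v2"  # assumption: Hub repo id matches the card
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_messages(prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

# Example call (requires a GPU and downloading the weights):
# print(generate("Summarize the GLM-4 architecture in one sentence."))
```

`device_map="auto"` shards the model across available accelerators; for the FP8 quantized variant listed above, the exact loading path may differ depending on the serving stack.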