MCult01/glm-muse-v6

Text generation · Concurrency cost: 1 · Model size: 9B · Quantization: FP8 · Context length: 32k · Published: Apr 19, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

MCult01/glm-muse-v6 is a 9-billion-parameter GLM4-based causal language model developed by MCult01. It was finetuned from MCult01/glm-muse-v5, with training accelerated using Unsloth and Hugging Face's TRL library. The model is designed for general language generation tasks, drawing on the GLM4 architecture for robust performance.


Overview

MCult01/glm-muse-v6 is a 9-billion-parameter language model built on the GLM4 architecture and developed by MCult01. It is a finetuned version of MCult01/glm-muse-v5, a refinement or specialization of its predecessor. Finetuning was accelerated roughly 2x by pairing the Unsloth library with Hugging Face's TRL library.
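
The card itself does not include usage code; the following is a minimal loading-and-generation sketch using Hugging Face Transformers, assuming the MCult01/glm-muse-v6 repository is reachable from the Hub and that the installed transformers version includes GLM4 support.

```python
# Minimal sketch (not from the model card): load the checkpoint and generate.
# Assumes a transformers version with GLM4 support and a GPU with enough memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MCult01/glm-muse-v6"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference; the hosted variant is listed as FP8
    device_map="auto",
)

prompt = "Explain what a causal language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```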

Key Characteristics

  • Architecture: Based on the GLM4 model family.
  • Parameter Count: 9 billion parameters, offering a balance between performance and computational efficiency.
  • Training Optimization: Leverages Unsloth and Hugging Face's TRL library for accelerated finetuning, yielding roughly 2x faster training (see the sketch after this list).
  • License: Distributed under the Apache-2.0 license, allowing for broad usage and modification.
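
As a rough illustration of the Unsloth + TRL workflow referenced above, the sketch below shows a typical QLoRA-style supervised finetuning loop. The dataset path, LoRA hyperparameters, target modules, and Unsloth's support for GLM4 checkpoints are assumptions, not details taken from the model card.

```python
# Hedged sketch of an Unsloth + TRL finetuning run; hyperparameters and the
# training data ("train.jsonl" with a "text" column) are placeholders.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="MCult01/glm-muse-v5",  # the stated base checkpoint
    max_seq_length=32768,              # matches the advertised 32k context
    load_in_4bit=True,                 # assumption: QLoRA-style finetuning
)
# Attach LoRA adapters; the target modules are an assumption for GLM4 attention layers.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,  # older TRL releases call this argument `tokenizer`
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,
        output_dir="glm-muse-v6",
    ),
)
trainer.train()
```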

Good For

  • Developers seeking a GLM4-based model with a moderate parameter count.
  • Applications requiring a model that benefits from efficient finetuning techniques.
  • General language generation and understanding tasks where the GLM4 architecture is suitable.