MCult01/glm-muse-v7a

Text Generation · Concurrency Cost: 1 · Model Size: 9B · Quant: FP8 · Ctx Length: 32k · Published: May 1, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

MCult01/glm-muse-v7a is a 9 billion parameter language model developed by MCult01, finetuned from THUDM/GLM-4-9B-0414 with a 32768 token context length. The finetuning used Unsloth together with Hugging Face's TRL library, a combination reported to train roughly 2x faster than standard methods. Its primary differentiator is this optimized finetuning workflow, making it relevant to teams that want faster iteration on finetuned GLM-4-9B variants.


Overview

MCult01/glm-muse-v7a is a 9 billion parameter language model, finetuned by MCult01 from the base model THUDM/GLM-4-9B-0414. It features a substantial context length of 32768 tokens, making it capable of processing extensive inputs.
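For orientation, here is a minimal inference sketch. It assumes the repository loads through the standard transformers Auto classes and chat template, as its THUDM/GLM-4-9B-0414 base model does; the prompt and generation settings are illustrative and not taken from the card.

```python
# Minimal inference sketch, assuming standard transformers loading.
# The repo id comes from this card; everything else is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MCult01/glm-muse-v7a"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 for local use; the card lists an FP8 quant for serving
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the GLM-4 architecture in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```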

Key Characteristics

  • Base Model: Finetuned from THUDM/GLM-4-9B-0414.
  • Training Efficiency: The finetuning leveraged Unsloth and Hugging Face's TRL library, which is reported to train roughly 2x faster than standard methods (see the sketch after this list).
  • License: Distributed under the Apache 2.0 license.
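The card does not publish the actual training script, but the sketch below, adapted from Unsloth's public SFT notebooks, shows how a GLM-4-9B finetune with Unsloth and TRL is typically set up. The dataset, LoRA rank, and trainer settings are placeholders, and exact argument names vary across Unsloth/TRL versions.

```python
# Hypothetical finetuning sketch following Unsloth's published SFT pattern;
# not the card author's actual recipe. Dataset and hyperparameters are
# illustrative placeholders.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load the base model with Unsloth's patched, faster implementation.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="THUDM/GLM-4-9B-0414",
    max_seq_length=32768,   # matches the card's stated context length
    load_in_4bit=True,      # assumption: QLoRA-style finetuning
)

# Attach LoRA adapters; rank and target modules are illustrative.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Placeholder dataset; the card does not say what data glm-muse-v7a used.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")

def to_text(example):
    # Flatten instruction/response pairs into a single "text" field for SFT.
    return {"text": f"### Instruction:\n{example['instruction']}\n\n### Response:\n{example['output']}"}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # renamed to processing_class in newer TRL releases
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,
        output_dir="outputs",
    ),
)
trainer.train()
```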

Good For

  • Developers who want a GLM-4-9B variant produced by an accelerated Unsloth/TRL finetuning workflow.
  • Use cases where efficient deployment (e.g., via the FP8 quantization listed above) and faster finetuning iteration cycles are beneficial.
  • Applications that need a large context window for long documents or extended conversational histories (see the token-budget sketch below).
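To make the 32k window concrete, here is a small sketch of checking that a long document fits in the context budget before generation. It assumes the tokenizer loads via AutoTokenizer; the file path and headroom value are illustrative.

```python
# Hedged sketch: check a long prompt against the 32768-token window before
# generation. The file path and the 1024-token headroom are illustrative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MCult01/glm-muse-v7a")

with open("long_report.txt") as f:
    document = f.read()

prompt = f"Summarize the following document:\n\n{document}"
input_ids = tokenizer(prompt)["input_ids"]

MAX_CTX = 32768   # context length from the card
HEADROOM = 1024   # reserve space for the chat template and the reply
budget = MAX_CTX - HEADROOM

if len(input_ids) > budget:
    # Simple strategy: keep the head of the prompt; smarter chunking is possible.
    input_ids = input_ids[:budget]
    prompt = tokenizer.decode(input_ids, skip_special_tokens=True)

print(f"Prompt uses {len(input_ids)} of {budget} available tokens.")
```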