MCult01/glm-muse-v7b

TEXT GENERATION

  • Model Size: 9B parameters
  • Quantization: FP8
  • Context Length: 32k tokens
  • Concurrency Cost: 1
  • Published: May 1, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

MCult01/glm-muse-v7b is a 9-billion-parameter language model developed by MCult01, finetuned from THUDM/GLM-4-9B-0414. It was trained 2x faster using Unsloth together with Hugging Face's TRL library, giving it an efficient training footprint for its size. It supports a 32,768-token context length, making it suitable for tasks that require extensive contextual understanding.
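The snippet below is a minimal inference sketch using the standard transformers chat interface. The repo id and 32k context come from this card; the prompt, generation settings, and dtype/device choices are illustrative, and older transformers releases may additionally require trust_remote_code=True for GLM-4 checkpoints.

```python
# Minimal inference sketch. The repo id comes from this card; prompt and
# generation settings are illustrative. Older transformers releases may
# need trust_remote_code=True for GLM-4 checkpoints.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MCult01/glm-muse-v7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the dtype the checkpoint ships with
    device_map="auto",   # place weights on available GPU(s)
)

messages = [{"role": "user",
             "content": "Summarize the GLM-4 architecture in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```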


MCult01/glm-muse-v7b: An Efficiently Finetuned GLM-4 Model

MCult01/glm-muse-v7b is a 9-billion-parameter language model, finetuned by MCult01 from the THUDM/GLM-4-9B-0414 base model. This iteration emphasizes an efficient training pipeline while aiming for strong performance at its size.

Key Characteristics

  • Base Model: Finetuned from THUDM/GLM-4-9B-0414, retaining that model's Transformer architecture.
  • Training Efficiency: Trained 2x faster by combining Unsloth with Hugging Face's TRL library; a generic sketch of this setup follows the list.
  • Context Length: Supports a 32,768-token context window, enabling it to process and generate long sequences of text.
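The card credits Unsloth and TRL for the 2x speedup but does not publish the actual training recipe. The following is a generic Unsloth + TRL supervised-finetuning sketch under that assumption: the dataset name, LoRA settings, and hyperparameters are placeholders, and the exact argument names vary across trl versions.

```python
# Generic Unsloth + TRL SFT sketch; the actual recipe behind glm-muse-v7b
# is unpublished. Dataset, LoRA settings, and hyperparameters are placeholders.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="THUDM/GLM-4-9B-0414",  # base model named on this card
    max_seq_length=32768,              # matches the advertised context length
    load_in_4bit=True,                 # QLoRA-style memory savings
)

# Attach LoRA adapters; these target modules are a common default, not the card's.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("your/sft-dataset", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # newer trl versions call this processing_class
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="glm-muse-sft",
    ),
)
trainer.train()
```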

Good For

  • Applications requiring a capable 9B-parameter model with a large context window (see the serving sketch after this list).
  • Developers interested in models produced by efficient finetuning pipelines such as Unsloth + TRL.
  • Tasks that play to the GLM-4 architecture's strengths, with the added option of fast further finetuning.
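As one example of the long-context use case above, here is a hypothetical serving sketch with vLLM. The model id and 32k limit come from this card; the engine choice, sampling settings, and input file are assumptions.

```python
# Hypothetical long-context serving sketch with vLLM. The model id and the
# 32k limit come from this card; everything else here is an assumption.
from vllm import LLM, SamplingParams

llm = LLM(model="MCult01/glm-muse-v7b", max_model_len=32768)
params = SamplingParams(temperature=0.7, max_tokens=512)

with open("long_report.txt") as f:  # placeholder long document
    document = f.read()

prompt = f"Summarize the following report:\n\n{document}\n\nSummary:"
print(llm.generate([prompt], params)[0].outputs[0].text)
```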