MCult01/glm-muse-feral

Text Generation · Concurrency Cost: 1 · Model Size: 9B · Quant: FP8 · Ctx Length: 32k · Published: Apr 22, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

MCult01/glm-muse-feral is a 9-billion-parameter language model developed by MCult01, finetuned from MCult01/glm-muse-v6. It was trained with Unsloth and Hugging Face's TRL library, which the developer reports gave a 2x speedup during finetuning. The model targets general language generation tasks and supports a 32768-token context length, making it suitable for handling extensive inputs.


Overview

MCult01/glm-muse-feral is a 9-billion-parameter language model developed by MCult01, finetuned from the MCult01/glm-muse-v6 base model. This iteration focuses on training efficiency: Unsloth together with Hugging Face's TRL library reportedly yielded a 2x speedup during finetuning, allowing faster iteration and deployment of the model.
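The card does not include usage code, so here is a minimal inference sketch. The model ID comes from this card; the API calls are standard Hugging Face `transformers` usage and are an assumption, not something the card confirms:

```python
def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Minimal inference sketch for MCult01/glm-muse-feral.

    Assumes the checkpoint loads with the standard transformers
    causal-LM API; illustrative, not taken from the card.
    """
    # Imports kept local so the sketch reads without the heavyweight
    # dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "MCult01/glm-muse-feral"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # card lists FP8 weights; let transformers pick
        device_map="auto",    # place layers on available devices
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```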

Key Characteristics

  • Developer: MCult01
  • Base Model: Finetuned from MCult01/glm-muse-v6
  • Training Efficiency: Leverages Unsloth and Hugging Face's TRL library for 2x faster finetuning.
  • License: Released under the Apache-2.0 license.

Good For

  • Developers seeking a 9B parameter model with an efficient training lineage.
  • Teams planning further finetuning, since the Unsloth/TRL setup supports fast training iteration.
  • General language generation tasks where a 32768 token context window is advantageous.
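To exploit the 32768-token window safely, callers typically budget prompt length before generation. A small helper sketch follows; the whitespace "tokenizer" is a crude stand-in I'm assuming for illustration, and a real check should count tokens with the model's own tokenizer:

```python
CTX_LENGTH = 32768  # context window from the model card


def fits_context(prompt: str, max_new_tokens: int, ctx: int = CTX_LENGTH) -> bool:
    """Return True if the prompt plus the generation budget fit the window.

    Whitespace splitting stands in for real tokenization here; swap in
    the model's tokenizer for an accurate count.
    """
    n_prompt_tokens = len(prompt.split())
    return n_prompt_tokens + max_new_tokens <= ctx
```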