huihui-ai/GLM-4-32B-0414-abliterated

Text Generation · Concurrency Cost: 2 · Model Size: 32B · Quant: FP8 · Ctx Length: 32k · License: MIT · Architecture: Transformer · Open Weights

huihui-ai/GLM-4-32B-0414-abliterated is a 32-billion-parameter large language model derived from THUDM/GLM-4-32B-0414. The model has been modified with an "abliteration" technique to remove refusal behaviors, producing an uncensored variant. It is intended for general text generation tasks where unconstrained responses are desired, and offers a 32,768-token context length.


Overview

The huihui-ai/GLM-4-32B-0414-abliterated model is a 32-billion-parameter large language model based on the original THUDM/GLM-4-32B-0414. Its primary distinction is the application of "abliteration", a proof-of-concept method for removing refusal behaviors from an LLM without using TransformerLens. The result is an uncensored version of the base model that gives more direct, unconstrained responses.

Key Capabilities

  • Uncensored Responses: Modified to remove refusal behaviors, offering direct answers.
  • Large Parameter Count: With 32 billion parameters, it supports complex language understanding and generation.
  • Extended Context Window: Features a 32768 token context length, suitable for processing longer inputs and maintaining conversational coherence.
  • Quantization Support: Demonstrates usage with 2-bit and 4-bit quantization configurations for efficient deployment.

Good For

  • Research into Model Alignment: Useful for studying the effects of refusal removal techniques.
  • Applications Requiring Unfiltered Content: Suitable for use cases where the base model's refusal mechanisms are undesirable.
  • General Text Generation: Capable of various language tasks, leveraging its large parameter count and context window.
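For the general text generation use case, a usage sketch with the standard transformers chat-template API might look like this. The prompt and sampling parameters are illustrative assumptions:

```python
# Hypothetical inference sketch using the transformers chat-template API.
# The prompt and sampling settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "huihui-ai/GLM-4-32B-0414-abliterated"
messages = [
    {"role": "user", "content": "Summarize the GLM-4 architecture in two sentences."}
]

if torch.cuda.is_available():  # requires a large GPU; skipped otherwise
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```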