hkshawn/72b

Hugging Face
Text Generation · Concurrency Cost: 4 · Model Size: 72.7B · Quant: FP8 · Ctx Length: 32k · Published: Oct 29, 2024 · License: other · Architecture: Transformer · Warm

hkshawn/72b is a 72.7-billion-parameter language model based on the Qwen2.5-72B-Instruct architecture. The model has been "abliterated" using techniques from the refusal_direction project to reduce admonition and moral appeal in its responses. It is intended for use cases where more direct, less preachy output is desired, giving it a distinct behavioral profile compared to its base model.


hkshawn/72b: Abliterated Qwen2.5-72B-Instruct

hkshawn/72b is a 72.7 billion parameter model derived from the Qwen/Qwen2.5-72B-Instruct architecture. Its primary distinction lies in its "abliterated" nature, a process that specifically targets and reduces the model's tendency towards admonition and moral appeal in its generated text. This modification was achieved by utilizing code and techniques from the refusal_direction project.
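The core idea behind this kind of abliteration is directional ablation: a "refusal direction" is estimated as the difference of mean activations between prompt sets that do and do not trigger the unwanted behavior, and that direction is then projected out of the model's activations. The sketch below illustrates the arithmetic only, with synthetic data standing in for real residual-stream activations (the array names, shapes, and the synthetic mean shift are all illustrative, not taken from the refusal_direction codebase):

```python
import numpy as np

# Synthetic stand-ins for residual-stream activations collected at one layer:
# rows are activation vectors for "triggering" vs. "neutral" prompts.
rng = np.random.default_rng(0)
triggering = rng.normal(size=(64, 128)) + 2.0  # shifted mean mimics the unwanted feature
neutral = rng.normal(size=(64, 128))

# Difference-of-means direction, normalized to a unit vector.
direction = triggering.mean(axis=0) - neutral.mean(axis=0)
direction /= np.linalg.norm(direction)

def ablate(x, d):
    """Remove each activation's component along unit direction d."""
    return x - np.outer(x @ d, d)

ablated = ablate(triggering, direction)
# After ablation, activations have numerically zero projection on the direction.
print(np.abs(ablated @ direction).max())
```

In the real pipeline this projection is either applied at inference time or folded into the model's weights, which is how a standalone checkpoint like this one can ship the modified behavior.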

Key Capabilities

  • Reduced Admonition: Engineered to provide responses with significantly less moralizing or preachy content.
  • Directness: Aims for more straightforward and less cautionary outputs compared to its base model.
  • Large Scale: Retains the robust capabilities of the 72.7B parameter Qwen2.5-Instruct model.

Good For

  • Applications requiring neutral or objective responses without unsolicited advice.
  • Use cases where a less 'helpful' or 'safe' but more direct tone is preferred.
  • Exploring the effects of 'abliteration' on large language models.
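For exploration purposes, a directional ablation like the one this model bakes into its weights can also be prototyped at inference time with a forward hook. The toy sketch below uses a single linear layer in place of a transformer block; the hooked layer, the random direction, and all dimensions are illustrative assumptions, not details of how hkshawn/72b was produced:

```python
import torch

# Toy stand-in for one transformer block; the real technique targets the
# residual stream of a full model at layers chosen by prior analysis.
torch.manual_seed(0)
layer = torch.nn.Linear(16, 16)
direction = torch.nn.functional.normalize(torch.randn(16), dim=0)

def ablation_hook(module, inputs, output):
    # Project the chosen direction out of the layer's output.
    return output - (output @ direction).unsqueeze(-1) * direction

handle = layer.register_forward_hook(ablation_hook)
x = torch.randn(4, 16)
y = layer(x)
print(torch.abs(y @ direction).max().item())  # near zero: component removed
handle.remove()
```

Hooks like this are convenient for A/B-testing a candidate direction before committing to an edited checkpoint, since removing the hook restores the original behavior.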

For more technical details on the abliteration technique, refer to this article. A GGUF version of this model is also available here.