zetasepic/Qwen2.5-72B-Instruct-abliterated

Text Generation · Concurrency Cost: 4 · Model Size: 72.7B · Quant: FP8 · Ctx Length: 32k · Published: Oct 1, 2024 · License: qwen · Architecture: Transformer

zetasepic/Qwen2.5-72B-Instruct-abliterated is a 72.7-billion-parameter instruction-tuned language model based on Qwen2.5-72B-Instruct. It has been "abliterated" using code from the refusal_direction project: the direction in the model's activations associated with refusals is identified and ablated, which alters the model's refusal behavior relative to its base model. It is intended for use cases where this kind of controlled or altered response behavior is desired.


Overview

zetasepic/Qwen2.5-72B-Instruct-abliterated is a 72.7-billion-parameter instruction-tuned model derived from Qwen2.5-72B-Instruct. Its key differentiator is the abliteration technique: using code from the refusal_direction project, a "refusal direction" is estimated from the model's internal activations and then removed, suppressing the base model's refusal patterns while leaving its other capabilities largely intact.
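The core idea of directional ablation can be sketched in a few lines. This is an illustrative toy, not the actual refusal_direction code: the function names, toy activations, and dimensions are all assumptions. The refusal direction is estimated as the difference of mean activations between prompts the base model refuses and prompts it answers, and ablation projects that direction out of every activation.

```python
import numpy as np

def estimate_refusal_direction(refused_acts, answered_acts):
    """Difference-of-means direction, normalized to unit length (toy sketch)."""
    direction = refused_acts.mean(axis=0) - answered_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def ablate(activations, direction):
    """Remove the component of each activation along `direction`."""
    coeffs = activations @ direction          # projection coefficient per row
    return activations - np.outer(coeffs, direction)

# Toy activations with a hypothetical hidden size of 8.
rng = np.random.default_rng(0)
refused = rng.normal(size=(16, 8)) + 2.0     # shifted cluster: "refusal" prompts
answered = rng.normal(size=(16, 8))          # baseline cluster

d = estimate_refusal_direction(refused, answered)
cleaned = ablate(refused, d)

# After ablation, activations have (near-)zero component along d.
print(np.abs(cleaned @ d).max() < 1e-9)
```

In the real technique this projection is applied to the model's weights or residual-stream activations at inference time, so the modified model rarely expresses the refusal behavior the direction encodes.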

Key Capabilities

  • Modified Behavior: The model's responses are intentionally altered from the base Qwen2.5-72B-Instruct through the abliteration technique.
  • Large Scale: With 72.7 billion parameters, it retains the robust language understanding and generation capabilities of its base model.
  • Instruction Following: Designed to follow instructions effectively, similar to its parent model.

Good For

  • Research into Model Behavior Modification: Ideal for exploring the effects and applications of abliteration techniques on large language models.
  • Controlled Response Generation: Suitable for scenarios requiring specific alterations or constraints on model outputs.
  • Advanced LLM Experimentation: Developers and researchers interested in fine-grained control over model responses beyond standard fine-tuning.
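For experimentation, the model is typically reached through an OpenAI-compatible chat completions endpoint. The sketch below only builds a request body; the endpoint URL, auth header, and exact parameter support are assumptions to verify against the hosting provider's documentation.

```python
import json

# Hypothetical chat-completions request body for this model.
# Only the payload is constructed here; sending it requires an
# OpenAI-compatible endpoint and API key (assumptions, not shown).
payload = {
    "model": "zetasepic/Qwen2.5-72B-Instruct-abliterated",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what abliteration changes."},
    ],
    "temperature": 0.7,
    "max_tokens": 256,
}

body = json.dumps(payload)
print("zetasepic" in body)
```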

Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model, covering the following sampler parameters: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p.
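To make the sampler knobs above concrete, here is a toy implementation of how temperature, top_k, and top_p interact when picking the next token. This is an illustrative sketch, not Featherless's sampler; the penalty parameters (frequency, presence, repetition, min_p) would adjust the logits before this step.

```python
import numpy as np

def sample(logits, temperature=1.0, top_k=0, top_p=1.0, rng=None):
    """Toy next-token sampler: temperature scaling, then top-k, then nucleus (top-p)."""
    rng = rng if rng is not None else np.random.default_rng()
    logits = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    order = np.argsort(logits)[::-1]                 # token ids, most likely first
    if top_k > 0:
        order = order[:top_k]                        # keep the k most likely tokens
    probs = np.exp(logits[order] - logits[order].max())
    probs /= probs.sum()
    # Nucleus cutoff: smallest prefix whose cumulative probability reaches top_p.
    keep = int(np.searchsorted(np.cumsum(probs), top_p)) + 1
    probs = probs[:keep] / probs[:keep].sum()
    return int(rng.choice(order[:keep], p=probs))

logits = [2.0, 1.0, 0.1, -1.0]                       # toy 4-token vocabulary
tok = sample(logits, temperature=0.7, top_k=3, top_p=0.9,
             rng=np.random.default_rng(1))
print(tok)
```

Lower temperature sharpens the distribution, top_k hard-limits the candidate set, and top_p trims the tail adaptively; with the settings above only the two most likely tokens survive the nucleus cutoff.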