zetasepic/Qwen2.5-72B-Instruct-abliterated
TEXT GENERATION

- Concurrency Cost: 4
- Model Size: 72.7B
- Quant: FP8
- Ctx Length: 32k
- Published: Oct 1, 2024
- License: qwen
- Architecture: Transformer

zetasepic/Qwen2.5-72B-Instruct-abliterated is a 72.7-billion-parameter instruction-tuned language model based on Qwen2.5-72B-Instruct. It has been "abliterated" using the refusal_direction method: the direction in the model's activation space associated with refusals is identified and removed, altering the model's behavior relative to its base. It is intended for use cases where such controlled or altered responses are desired.
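The core operation behind refusal-direction ablation can be sketched in a few lines: given a unit vector representing the refusal direction, each activation has its component along that direction projected out. The function name and toy data below are illustrative, not taken from the abliteration tooling itself.

```python
import numpy as np

def ablate_direction(activations, refusal_dir):
    """Remove the component of each activation along refusal_dir."""
    # Normalize the refusal direction to unit length
    r = refusal_dir / np.linalg.norm(refusal_dir)
    # a' = a - (a . r) r, leaving a' orthogonal to r
    return activations - np.outer(activations @ r, r)

# Toy example: 4 activation vectors of hidden size 8
rng = np.random.default_rng(0)
acts = rng.normal(size=(4, 8))
direction = rng.normal(size=8)

ablated = ablate_direction(acts, direction)
# Components along the refusal direction are now numerically zero
print(np.allclose(ablated @ (direction / np.linalg.norm(direction)), 0))
```

In practice this projection is applied to the model's weights or residual-stream activations at each layer, so the network can no longer represent movement along the refusal direction.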


Popular Sampler Settings

The three parameter combinations most used by Featherless users for this model. Each configuration sets the following sampler parameters:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
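These parameters map directly onto the request body of an OpenAI-compatible chat completions call. A minimal sketch of such a payload is below; the sampler values are illustrative placeholders, not the actual configurations used by Featherless users.

```python
import json

# Hypothetical sampler configuration; values are illustrative only.
payload = {
    "model": "zetasepic/Qwen2.5-72B-Instruct-abliterated",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.05,
    "min_p": 0.05,
}

print(json.dumps(payload, indent=2))
```

This dictionary would be POSTed as JSON to a chat completions endpoint; only `model` and `messages` are required, and any sampler parameter left out falls back to the server's default.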